Typically, SAM and MAM prevalence from a recent nutrition survey (measured using weight-for-height) is combined with an incidence correction factor and expected coverage to estimate SAM and MAM caseloads for treatment programme planning.
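For reference, the sort of calculation I mean looks roughly like the sketch below (Python; all figures, including the incidence correction factor, are placeholders for illustration rather than recommended values):

    # Sketch only: the usual prevalence-based caseload calculation.
    # All numbers are illustrative placeholders, not recommendations.
    pop_6_59    = 100_000   # children aged 6-59 months in the planning area
    prevalence  = 0.02      # SAM prevalence by weight-for-height (from survey)
    k_incidence = 1.6       # prevalence-to-incidence correction factor (assumed)
    coverage    = 0.50      # expected programme coverage

    expected_cases = pop_6_59 * prevalence * (1 + k_incidence)
    caseload       = expected_cases * coverage
    print(round(caseload))  # planned number of admissions over the period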

With MUAC used as an independent admission criterion for treatment programmes, and understanding that W/H and MUAC do not always detect acute malnutrition in the same children, I'm wondering whether the caseload estimate calculated using W/H prevalence should routinely be adjusted (a correction factor added) to make provision for MUAC-only admitted cases (if not, caseloads will be underestimated).

Grateful to know what others think of this and whether any work has been done on what factor could be applied, given that the proportion of W/H-only and MUAC-only cases varies according to context. (Might 20% be a suitable margin to apply?)

It is a good idea. For better caseload estimation we need to use combined GAM (cGAM = WHZ + MUAC). This will cover the total number of malnourished children in the programme calculation, but it requires calculating the overlap.

Dr. Baidar Bakht Habib

Answered:

5 years ago

Thanks for this. How is the overlap calculated?

Anonymous

Answered:

5 years ago

We have (e.g.) SMART surveys in almost all settings in which there are SAM and MAM treatment programs. Survey data can be used to calculate the overlap. The process is to apply the case definitions for MUAC and for WHZ separately and then make a two-by-two table. The overlap is the cell containing children who meet both case definitions. You have to do this in each setting as the overlap changes from place to place due, in part, to the effect of body shape on WHZ.
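Here is a minimal sketch of that cross-tabulation (Python; the file name, column names, and GAM thresholds are assumptions about how the anonymised survey dataset is organised):

    # Sketch only: cross-tabulate GAM case definitions from survey data.
    # File name, column names, and thresholds are illustrative assumptions.
    import pandas as pd

    svy = pd.read_csv("survey.csv")     # hypothetical anonymised survey dataset

    whz_case  = svy["whz"]  < -2        # GAM by weight-for-height z-score
    muac_case = svy["muac"] < 125       # GAM by MUAC (mm)

    print(pd.crosstab(whz_case, muac_case,
                      rownames=["WHZ < -2"], colnames=["MUAC < 125 mm"]))

    muac_only = ( muac_case & ~whz_case).sum()
    whz_only  = (~muac_case &  whz_case).sum()
    overlap   = ( muac_case &  whz_case).sum()
    print(muac_only, whz_only, overlap)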

Your caseload prediction is then based on the numbers:

MUAC only + WHZ only + overlap

Getting high spatial and temporal coverage of case-finding using WHZ can be difficult and expensive. This means that you may have to use different values for the coverage terms for (1) MUAC and overlap cases (i.e. all MUAC cases regardless of WHZ), and (2) WHZ-only cases in caseload predictions.

Don't forget to include oedema in any caseload prediction. This can be a significant component of caseload in some settings.
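Putting those pieces together, a rough sketch of the prediction might look like this (all proportions, the incidence correction factor, and the coverage values are placeholders to show the structure, not recommended figures):

    # Sketch only: combine the survey components into a caseload prediction.
    # All numbers are placeholders; use local survey and programme values.
    pop_6_59    = 100_000    # children aged 6-59 months in the catchment area
    k_incidence = 1.6        # prevalence-to-incidence correction factor (assumed)

    # Proportions from the survey two-by-two table above, plus oedema
    p_muac_only = 0.015
    p_whz_only  = 0.010
    p_overlap   = 0.012
    p_oedema    = 0.002

    # Different expected coverage for MUAC-detectable vs WHZ-only cases
    cov_muac     = 0.60      # MUAC-only, overlap, and oedema cases
    cov_whz_only = 0.30      # WHZ-only cases (harder to find in the community)

    def expected(p):
        return pop_6_59 * p * (1 + k_incidence)

    caseload = (expected(p_muac_only + p_overlap + p_oedema) * cov_muac
                + expected(p_whz_only) * cov_whz_only)
    print(round(caseload))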

I hope this is of some use.

Mark Myatt
Technical Expert

Answered:

5 years ago

Dear Mark, thanks a lot for this. Very useful for when survey data are available. How about in instances where only the report is available without access to the data (because the survey was conducted by another organisation)?

Where survey data are not accessible, might it be suitable to use the MUAC survey prevalence estimate and add a margin (10%?) for WHZ cases? I understand the proportions of MUAC-only, WHZ-only, and overlap cases vary from context to context, so there is not a specific factor that could be applied. I was just wondering if it is better to add some sort of margin rather than none at all?

Anonymous

Answered:

5 years ago

Most programs that I work with use MUAC and oedema as primary admission criteria. There may be a few WHZ-only admissions that may be referred from (e.g.) growth monitoring programs, clinical settings, and programs like SFP that may still be using W/H. I choose not to work with WHZ in CMAM programs for many reasons. Mostly I find it impractical to use with high spatial and temporal coverage, and the cost of using W/H diverts resources from community sensitisation, community mobilisation, and community-based case-finding and recruitment efforts (see this review), which damages program coverage. The old saw about the road to hell being paved with good intentions applies here. We try to include more cases by adding WHZ-only cases and this results in our treating fewer cases. A case definition with MUAC and W/A is likely to detect all (or nearly all) cases of children with anthropometric deficits at high risk of near-term mortality, including those with WHZ < -3 (see this article). Having said all that ...

I find it hard to imagine setting up or managing a program without having access to survey data. I suppose it may occur. Anyone with responsibility for setting up or managing a program should be able to access survey data. Perhaps a good way around this is for surveys to routinely report overlap. This should be easy to add to SMART software.

In the case of a program already running ... you could use program data to assess the degree of overlap. Admission MUAC, weight, and height are standard items recorded on beneficiary record cards, making an overlap analysis possible. Some programs strongly favour WHZ and may have very few MUAC-only admissions. In these cases you will only be able to see WHZ-only and MUAC and W/H (overlap) admissions, with MUAC-only admissions under-represented.
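As a rough sketch of that sort of record-card analysis (the file name, field names, and SAM thresholds are assumptions; if only weight and height are recorded, WHZ would first have to be derived from the WHO growth standards, which is not shown here):

    # Sketch only: classify admissions from beneficiary record-card data.
    # File name, field names, and thresholds are illustrative assumptions.
    import csv

    counts = {"muac_only": 0, "whz_only": 0, "both": 0, "neither": 0}

    with open("admissions.csv") as f:              # hypothetical export of record cards
        for row in csv.DictReader(f):
            muac_case = float(row["muac"]) < 115   # SAM by MUAC (mm)
            whz_case  = float(row["whz"])  < -3    # SAM by WHZ
            if muac_case and whz_case:
                counts["both"] += 1
            elif muac_case:
                counts["muac_only"] += 1
            elif whz_case:
                counts["whz_only"] += 1
            else:
                counts["neither"] += 1             # e.g. oedema-only admissions

    print(counts)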

UNICEF Nigeria has developed a caseload prediction tool for use in programs already running. Here is a description of the method. This uses a different approach. See this article for details. This could be adapted to your application.

In the case of a new program ... sometimes we may have little choice but to use a fudge factor (as you suggest above). This is usually the case when we need something that is not often measured and not usually measured well. The relevant example here is the prevalence-to-incidence correction factor used to get some estimate of incidence for caseload prediction. Adding a "guessed at" fudge factor means adding more uncertainty to (already uncertain) caseload predictions.

We measure prevalence routinely, so I think we can do better than a "guessed at" fudge factor. If we can't get recent and local SMART data then we could (e.g.) use DHS, MICS, or national SMART survey data to arrive at a more informed fudge factor for the overlap. These data should be available to organisations setting up or managing a program through UN/OCHA, UNICEF, or WHO.

I hope this is of some use.

Mark Myatt
Technical Expert

Answered:

5 years ago

Mark - Thanks for this. It is indeed very helpful.

Regarding the first paper you shared, I agree that programmes screening only with MUAC in the community and then admitting only on W/H are very problematic. I haven't seen this practice in many years though. Most CMAM programmes I come across use MUAC and oedema screening in the community and then MUAC, oedema, and W/H admissions at health centre/mobile clinic level, as per national CMAM protocols and in line with WHO independent diagnostic criteria.
I work mainly in emergency contexts, which tend to have high levels of food insecurity. There therefore tend to be higher numbers of people coming of their own accord to SFP/OTP, meaning that the various case types (MUAC-only, W/H-only, both, oedema) present for treatment.

Regarding MUAC only programmes, I was wondering what your thoughts are on the Golden and Grellety research ‘Death of children with SAM diagnosed by WHZ or MUAC: Who are we missing?’
https://www.ennonline.net/fex/57/samdiagnosedbywhzormuac.

With regard to agencies and organisations sharing raw survey data, it happens but not routinely, and it generally takes time (for varying reasons). Your suggestion that surveys start to routinely report the overlap is a good one, and one that I will suggest if involved in surveys at the planning stage, as MUAC and W/H are both generally reported in rapid surveys in emergency contexts.

For other contexts and at national level, MUAC is not always reported. I saw recently that it's also not one of the indicators in the May 2019 WHO/UNICEF document 'Recommendations for data collection, analysis and reporting on anthropometric indicators in children under 5 years old' (https://data.unicef.org/resources/data-collection-analysis-reporting-on-anthropometric-indicators-in-children-under-5/), as it is not one of the definitions of wasting used for tracking progress towards the Global Nutrition Targets set by the World Health Assembly.
It is interesting that many are moving towards CMAM programmes that detect and treat wasting using two of the WHO diagnostic criteria (MUAC and oedema), yet at national and global level progress on wasting is tracked using the W/H indicator.

Thanks for sharing the work done in Nigeria. I will take a closer read of that.

Thanks also for the paper showing that a 'case definition with MUAC and W/A is likely to detect all (or nearly all) cases of children with anthropometric deficits at high risk of near-term mortality, including those with WHZ < -3.' I will keep a lookout as more research is done in this area (I note that 'further work is required before the findings of the work reported can be applied').

Thanks again.

Anonymous

Answered:

5 years ago

Glad to be of some use.

My main concern with the work by Golden and Grellety that you reference (and other work by them) is that it is based on clinical data. Such data are subject to (often severe) selection biases. With clinical data these selection biases often give rise to "Berkson's Fallacy", in which strong and clear associations found in clinical data are often absent (sometimes reversed) in community data. We need to be very cautious extrapolating from clinical data to the general population. A sample of patients is usually not representative of a population. Claims are made based on grossly biased samples. A 2015 paper by Grellety (e.g.) makes a mortality argument for retaining WHZ using data from a CMAM patient cohort with 99% admitted using WHZ. Population-based cohort studies are always better, and these provide no evidence supporting the retention of WHZ in CMAM programs.

WRT the UNICEF guideline ... their MICS team (and DHS) have a blind spot WRT MUAC. This is rather silly as MUAC is used in case definitions for TFP and SFP. Ignoring MUAC makes these surveys pretty much useless WRT estimating burdens / caseloads for TFP / SFP. It is not all of UNICEF; other documents from other parts of UNICEF (i.e. other than MICS) support MUAC in surveys. Did you notice that the guideline you reference is a bit odd WRT oedema too?

Mark Myatt
Technical Expert

Answered:

5 years ago


Just to aid understanding ... here is an example of Berkson's Fallacy that shows the mechanics of the bias.

This is what we find in the population:

               Outcome +    Outcome -
Exposure +            22          171
Exposure -           201         2389

RR = 1.47 (95% CI = 0.97, 2.22), p = 0.0725

In the clinic (with the same catchment area as the population) we see:

               Outcome +    Outcome -
Exposure +             7           36
Exposure -            13          208

RR = 2.77 (95% CI = 1.17, 6.53), p = 0.0184

We see a significant association in the clinic but not in the population. This is due to a selection bias.

The fraction of the population in the clinic is 264 / 2783 = 0.09486166.

For the clinical sample to represent the population WRT the population association between exposure and outcome we would expect the clinical sample to look like this:

               Outcome +    Outcome -
Exposure +             2           16
Exposure -            19          226

RR = 1.43 (95% CI = 0.36, 5.67), p = 0.6122

What we see in the clinical sample is the result of selection biases. In this example we see attendance rates of:

                      Outcome +             Outcome -
Exposure +      7/22  = 31.8%       36/171  = 21.1%
Exposure -     13/201 =  6.5%      208/2389 =  8.7%

when each cell should contain about 9.49% of the cell value from the population.
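For anyone who wants to check the arithmetic, here is a minimal sketch (Python, using the table values above) that reproduces the three risk ratios:

    # Sketch only: risk ratios from the three two-by-two tables above.
    def rr(a, b, c, d):
        """Risk ratio for a 2x2 table: a, b = exposed (+/-); c, d = unexposed (+/-)."""
        return (a / (a + b)) / (c / (c + d))

    print(round(rr(22, 171, 201, 2389), 2))   # population: 1.47
    print(round(rr(7, 36, 13, 208), 2))       # clinic sample: 2.77
    print(round(rr(2, 16, 19, 226), 2))       # expected clinic sample: 1.43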

We need to be very cautious extrapolating from clinical data to the general population. A sample of patients is usually not representative of a population. The issue, in the above example, is differing clinic attendance rates. The selection bias in the example is mild compared to what we see in some of Golden and Grellety's published clinical datasets in which WHZ cases vastly outnumber MUAC cases.

If we are looking at case-finding in the community then we need to work with community (i.e. population) data. This is what we need when we want to decide admission criteria.

I think that Golden and Grellety mistakenly use clinical data to answer population questions.

If all we are interested in is what happens to a patient cohort (and we may legitimately be interested in this) then we should use the clinical data but we must never mistake this for population data and extrapolate clinical findings to the population.

Berkson's fallacy is a classic epidemiological pratfall that has been known and counselled against for c. 80 years. See the original article here https://academic.oup.com/ije/article/43/2/511/680126. This is a 2014 reprint of the original 1946 article. It gets reprinted from time to time because Berkson's Fallacy is something that we often forget to remember.

I hope this is of some use to someone.

Mark Myatt
Technical Expert

Answered:

5 years ago