I have found a nutritional survey report conducted in Sudan, analysed with ENA for SMART software, reporting a GAM of 27.0% and a SAM of 7.5% by MUAC, while the U5MR is 1.19/10,000/day. By WFH z-score (WHO 2006 reference), the GAM and SAM were 45.7% and 15.5% respectively. Does this result make sense? If so, what about the relationship between MUAC and mortality? My question is: since MUAC is related to mortality, how should we interpret the SAM rate calculated using MUAC alongside the U5MR? Note that the survey used a 60-day recall period.
I believe that I am familiar with the survey that you refer to. I have been approached about this survey and I will summarise my thoughts here ... PLEASE - I SAY SOME SHARP THINGS BELOW. I DO NOT INTEND THESE AS CRITICISMS OF THE TEAM THAT DID THE SURVEY. MORTALITY SURVEYS ARE, IN MY OPINION, VERY DIFFICULT TO DO WELL AND EVEN WHEN DONE WELL THEY WILL OFTEN PRODUCE CONFUSING AND MISLEADING RESULTS.

It is difficult to make a definitive comment about this. The U5MR is elevated (i.e. the point estimate is > 1 / 10,000 / day) but not by much (BTW ... you should quote the 95% CI for estimates). There are a few things to note about the SMART method for mortality estimation:

(1) Sample sizes tend to be too small to estimate U5MR with decent precision, and CMR is (a) slow to respond and (b) strongly influenced by the age structure of the population (or sample).

(2) The method can be subject to many biases. In particular, there is a tendency to use the same sample for the anthropometry and the mortality survey. This introduces a survivor bias (i.e. by only including households with living children) and an age bias (i.e. by including households with young children you tend to include households with young women), and age is a "grand confounder" for mortality. I have done some work on mortality estimation methods. Without naming names, a major INGO produced surveys like this for over a decade (and had a written guideline which stated that the same sample should be used for anthropometry and mortality surveys). These biases tend to lead surveys to underestimate mortality.

(3) One would expect mortality to be clustered if due to violence or infection (in many cases malnutrition is due to infection). It is not a good idea to survey a clustered phenomenon with a clustered sample. This is very bad practice but we still do it.

In short, mortality surveys are usually badly designed surveys (1 and 3 above) which may also be done badly (2 above). The SMART initiative has gone some way towards improving this situation but I would be surprised if the reliability of mortality surveys has improved much over the past decade or so.

Also, be aware that the survey will return an average mortality rate over the recall period. If the recall period is long (e.g. 2 or 3 months) then it will tell you little about recent mortality. Since sample sizes are in units of person-time, there is a temptation to use a long recall period (time) so as to reduce the survey sampling requirements (persons). What you have in the nutrition survey is a snapshot of the current situation BUT what you have in the mortality survey is a historical average (i.e. averaged over the survey recall period). The mortality estimate is for a period BEFORE the wasting prevalence estimate. It is possible, in a rapidly worsening situation, to have a high prevalence estimate and a low mortality estimate. This will happen if the recall period is long or if the bulk of the deaths have not yet occurred. There are other "just so" stories that could explain your results ... for example ... the bulk of deaths occurred before the start of the recall period and you are surveying the survivors (another form of survivor bias). Some of these explanations / stories can be checked with some brief qualitative work.

In short, again, you have a bad method usually done badly, and then there is a temptation to reverse the causality. The deaths in the mortality survey occurred prior to the cases in the anthropometry survey so cannot have been influenced by the current prevalence (the temptation is to see this the wrong way round).
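[Editor's illustration] To make the "historical average" point concrete, here is a minimal sketch of how a retrospective U5MR of the kind quoted in the question can be calculated. It is NOT the ENA for SMART implementation: it assumes a closed population (no births or migration over the recall period) and uses purely hypothetical numbers, chosen only to show how a figure of about 1.2 / 10,000 / day can arise and why it is an average over the whole recall period.

```python
# Minimal, simplified sketch (not the ENA for SMART calculation): retrospective
# U5MR from reported deaths and approximate under-five person-days at risk.
# Assumes a closed population over the recall period (no births or migration).

def u5mr_per_10000_per_day(deaths, children_now, recall_days):
    """Approximate U5MR in deaths / 10,000 / day, averaged over the recall period."""
    # Children alive now plus those who died were at risk at the start;
    # use the mid-period average as a rough person-time denominator.
    children_midpoint = children_now + deaths / 2
    person_days = children_midpoint * recall_days
    return deaths / person_days * 10000

# Hypothetical numbers: 13 under-five deaths among roughly 1,800 surveyed
# children over a 60-day recall period.
print(round(u5mr_per_10000_per_day(deaths=13, children_now=1800, recall_days=60), 2))
# -> about 1.2 / 10,000 / day, an *average* over the whole 60-day recall period
```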
Mark Myatt
Technical Expert

Answered:

14 years ago
This survey is under review as we speak due to the odd results. The field team has re-run the analysis but cannot figure out why there is such a big difference between z-scores and % of the median. The difference between those and the MUAC results might be explained by the population being naturally tall and thin, which depresses WFH while leaving MUAC relatively high. However, there has been much discussion around why the GAM/SAM rates are so high while the mortality is so low. This survey is being sent to the Global Cluster for review, so I would not base other discussions on this survey due to its oddities.
Anonymous

Answered:

14 years ago
Dear anonymous, These are my perceptions in response to your queries. Thank you, and sorry for being too wordy!!

First, I think the results of the report are consistent and, yes, make sense (especially if the validation checks are acceptable / plausible). The interpretation of both the GAM and the SAM, whether by MUAC or by WFH (WHO 2006), is of a critical nature (a very high index). What I note here is that the GAM and SAM reported using MUAC are almost half those estimated by WFH (WHO 2006). This brings me to the practical application of MUAC and WFH criteria for TARGETING within specific settings!! For long, I have observed that certain populations are naturally thin, whence the MUAC cut-off of <115 mm (for children 6-59 months of age) may be on the high side for those populations!! Countries have specific cut-offs that are adapted and revised periodically, and the use of MUAC cut-offs below 100 mm for targeting / programming purposes is not uncommon.

Secondly, I believe the U5MR of 1.19/10,000/day, interpreted as a very serious situation that needs thorough investigation, corroborates the high GAM and SAM reported above (again, whether obtained using MUAC or WFH). Note, however, that this is subject to the contextual information collected about the population surveyed. In addition, and importantly, the U5MR is calculated and reported for a given population using a sample size calculated for a mortality rate survey (using specific indicators), in which actual deaths and births for a number of households are obtained.

To dwell more on your 2nd question, there is no direct relationship between MUAC and U5MR. A high U5MR such as that reported needs to be thoroughly investigated (malnutrition caused by diet / disease, or other causes, e.g. wars in post-conflict zones). As such, the contextual information collected for specific populations in the planning phase of the ENA software is very critical. Associations, however, can be established between MUAC and the relative risk of mortality, in my perception using trend / time-series analysis. Association studies may be conducted using other statistical packages (Stata, SPSS, Epi Info). Note that in studying diseases and their causes (epidemiology), malnutrition included, association does not imply causality. Perhaps this is the idea you are holding (of a possible direct relationship between MUAC or GAM/SAM and mortality).

To sum up, a report of any anthropometric survey is as good as the resources invested in preparing for it (collecting contextual information, personnel training, and quality / validation of data collection, entry, and analysis). This should constitute the basis on which the 'sense' of any anthropometric results is judged! Repeated surveys using the same methodology may be useful, too. There are several advantages to using either MUAC or WFH GAM/SAM in assessing malnutrition. The authors of that report should have given an insight into the likely causes of the disparity. Review the validity checks for the results obtained (COMPULSORY FOR AN ANTHRO REPORT) and try to gain insight into the likely cause of the variation between the GAM/SAM reported using MUAC and WFH as per the authors. Kind regards, Samuel
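[Editor's illustration] For reference, a minimal sketch of the standard case definitions being compared in this thread (WHO cut-offs: SAM = MUAC < 115 mm or WHZ < -3, with bilateral pitting oedema always SAM; GAM = MUAC < 125 mm or WHZ < -2). It is illustrative only, not the ENA for SMART analysis, and ignores flagging and exclusion rules; it simply shows why MUAC-based and WFH-based prevalences can identify different children and therefore differ.

```python
# Minimal sketch of the standard GAM/SAM case definitions (illustrative only).

def classify(muac_mm=None, whz=None, oedema=False):
    """Return 'SAM', 'MAM', or 'normal' using whichever criterion is supplied."""
    if oedema:
        return "SAM"
    if muac_mm is not None:
        if muac_mm < 115:
            return "SAM"
        if muac_mm < 125:
            return "MAM"
    if whz is not None:
        if whz < -3:
            return "SAM"
        if whz < -2:
            return "MAM"
    return "normal"

# The same child can be classified differently by the two criteria, which is
# one reason MUAC-based and WFH-based prevalences diverge:
print(classify(muac_mm=118))             # MAM by MUAC
print(classify(whz=-3.2))                # SAM by WHZ
print(classify(muac_mm=130, whz=-2.5))   # normal by MUAC, MAM by WHZ
```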
Sam Oluka

Answered:

14 years ago
I agree. The prevalences found by W/H and MUAC are quantitatively different from each other but qualitatively similar. Both point to a very poor situation. I am not sure what is intended by "naturally thin". Certainly some populations have low W/H because of body shape but I have not heard of this in terms of MUAC. A MUAC below 115 mm is pretty thin, and thin enough to be associated with increased mortality in any population. I think we need to be careful regarding "To dwell more on your 2nd question, there is no direct relationship between MUAC and U5MR". In individuals, low MUAC predicts mortality better than any other anthropometric indicator. The problem, I think, is that the U5MR is from the past and the wasting prevalence is from the present. We like our causes to come before effects, and your data is the other way round.
Mark Myatt
Technical Expert

Answered:

14 years ago
Perhaps the survey is proving wrong the long-held opinion that MUAC is a better predictor of mortality. It would therefore be good to use it as a basis for examining the relationships between W/H, MUAC, and mortality. Secondly, the nesting of mortality surveys within nutrition surveys might need to be looked at again.
Anonymous

Answered:

14 years ago
Dear all, Thank you all for your kind responses. Please understand that my question is not about the validity of the survey. What I really need to know about is the relationship between MUAC and mortality. As this forum is a place where we can learn from experts and share ideas, there is no intention to question anyone about validity. So, in your responses, please concentrate on possible explanations for the result rather than criticising the survey team or others. I would like to note the following points from the responses: "MORTALITY SURVEYS ARE, IN MY OPINION, VERY DIFFICULT TO DO WELL AND EVEN WHEN DONE WELL THEY WILL OFTEN PRODUCE CONFUSING AND MISLEADING RESULTS"; "In short, mortality surveys are usually badly designed surveys which may also be done badly"; and "The deaths in the mortality survey occurred prior to the cases in the anthropometry survey so cannot have been influenced by the current prevalence". And my questions are, again: 1. If the mortality survey result is confusing and misleading, and also does not reflect the current situation, why do we need to do it? (Although I agree we need to do a mortality survey in some situations.) 2. Regarding the design, if the current method we are using is not good, what design could be used instead in order to get good results?
Assaye

Answered:

14 years ago
Mark, I agree you are spot on. Thank you for your corrections. Samuel
Sam Oluka

Answered:

14 years ago
Two replies in one ... "Anonymous 108" first ...

I am very concerned by this statement: "Perhaps the survey is proving wrong the long-held opinion that MUAC is a better predictor of mortality. It would therefore be good to use it as a basis for examining the relationships between W/H, MUAC, and mortality." With these surveys you can say NOTHING about the relationship between an anthropometric indicator and mortality. The main reason for this is that the mortality estimate is for a period of months BEFORE the wasting estimate. The relationship we are interested in is one in which low MUAC comes before death. The sort of data being discussed here has death coming before low MUAC. Your proposal requires time to flow backwards! Cross-sectional surveys are a very bad way of investigating this relationship. There are good methods of investigating the relationship of interest (i.e. prospective cohort studies) and these have been done. The findings from the various studies are consistent with each other regardless of when or where the study was performed (e.g. a MUAC of 100 mm means much the same in terms of mortality risk in Bangladesh as it does in Uganda). Since the relationship is so well established, and we now have good treatments for wasting and high-coverage interventions (e.g. CTC, CMAM), it would be unethical to conduct further studies. I do agree with this: "the nesting of mortality surveys within nutrition surveys might need to be looked at again". I think that the common pitfalls are covered in the SMART documentation but running two surveys together may compromise the sampling for the mortality survey.

Now to "Anonymous 375" ...

The issue of MUAC and mortality has been well investigated and the relationship described. In terms of the performance of practicable indicators as predictors of near-term mortality, MUAC (uncorrected for height, age, or sex) has been shown to be superior. The order is MUAC, W/A, H/A, and W/H. All studies find W/H to be the worst indicator for predicting near-term mortality. You cannot use the sort of data being discussed here (i.e. SMART-type prevalence surveys and SMART-type retrospective mortality surveys) to investigate this relationship. See above.

You ask: "1. If the mortality survey result is confusing and misleading, and also does not reflect the current situation, why do we need to do it?" A very good question! I could write at length about how misleading they are, and why, even when the survey is done well. I will just give two examples: (1) You have a three-month recall period; the U5MR at the start was 5 / 10,000 / day and declined steadily to 1 / 10,000 / day over the recall period. The estimated U5MR will be about 2.5 / 10,000 / day. (2) You have a three-month recall period; the U5MR at the start was 1 / 10,000 / day and rose steadily to 5 / 10,000 / day over the recall period. The estimated U5MR will be about 2.5 / 10,000 / day. Here you have two very different situations. In (1) the problem has passed but in (2) the problem is here now! The survey cannot differentiate between these two very different situations.

"2. Regarding the design, if the current method we are using is not good, what design could be used instead in order to get good results?" One obvious thing to do is to reduce the recall period to (e.g.) one month or two weeks. Now the estimate refers to a period close in time. The problem with doing this using the current method is that sample sizes increase. Sample sizes for mortality surveys use units of person-time (usually person-days). If you need (e.g.) 100,000 person-days then, for a 90-day recall period, you need a sample of 1111 persons (i.e. 100,000 / 90). For a recall period of 30 days you need a sample of 3333 persons. For a recall period of 14 days you need a sample of 7143 persons. Such sample sizes might be possible for CMR (all persons in a household) but for U5MR (only children < 5 years of age) you may need to sample 3000 or 7000 households to get the required sample size.

There has been some recent work on new methods that rely on active case-finding, rapid population estimation, and classification of prevalence (i.e. the analysis tells you a class (e.g. OK, poor, bad, very bad, disaster) rather than a number with a confidence interval). These approaches show some promise. Details of one project (that I did some work on) can be found at [url]http://www.fantaproject.org/publications/EM_method.shtml[/url].
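[Editor's illustration] To make the person-time arithmetic above concrete, here is a minimal sketch. The 100,000 person-day requirement is the example figure from the post above; the average of 1.0 under-fives per household is a hypothetical assumption used only to show how the number of households needed for U5MR grows as the recall period shrinks.

```python
# Minimal sketch of the person-time arithmetic above (illustrative only).
PERSON_DAYS_NEEDED = 100_000   # example sample-size requirement from the text
U5_PER_HOUSEHOLD = 1.0         # hypothetical average under-fives per household

for recall_days in (90, 30, 14):
    persons = round(PERSON_DAYS_NEEDED / recall_days)
    households_for_u5mr = round(persons / U5_PER_HOUSEHOLD)
    print(f"{recall_days:>2}-day recall: {persons} children, "
          f"~{households_for_u5mr} households for U5MR")

# 90-day recall: 1111 children, ~1111 households for U5MR
# 30-day recall: 3333 children, ~3333 households for U5MR
# 14-day recall: 7143 children, ~7143 households for U5MR
```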
Mark Myatt
Technical Expert

Answered:

14 years ago
Dear Mark, I have learned a lot about this issue. Thank you very much for taking the time to explain; it really enables us to do our work with confidence in the field. I hope this website remains one of the best, as what the experts share is KNOWLEDGE!! Warm regards,
Assaye

Answered:

14 years ago
Thank you for your kind comments. What makes this forum work is, in my opinion, the community of the forum, not just the experts. We're all in this together! While I'm writing ... I made a mistake in my previous reply. The average U5MR in the examples of falling steadily from 5 / 10K / day to 1 / 10K / day and rising steadily from 1 / 10K / day to 5 / 10K / day would be 3 / 10K / day, not 2.5 / 10K / day. This does not alter the point being made. A retrospective mortality survey cannot distinguish between a situation that is getting better and a situation that is getting worse.
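[Editor's illustration] A minimal sketch of the corrected point above, assuming a linear trend over a 90-day recall period and a fixed population (ignoring the small change in person-time caused by the deaths themselves): a steadily falling and a steadily rising U5MR over the same recall period produce the same retrospective average.

```python
# Minimal sketch: a retrospective survey only sees the average rate over the
# recall period, so falling and rising trends can give the same estimate.
# Assumes a linear trend and ignores the tiny person-time effect of the deaths.

def average_rate(start, end, days=90):
    """Average of a linearly changing daily rate (per 10,000 per day)."""
    daily_rates = [start + (end - start) * d / (days - 1) for d in range(days)]
    return sum(daily_rates) / days

print(round(average_rate(start=5.0, end=1.0), 2))  # falling: 3.0
print(round(average_rate(start=1.0, end=5.0), 2))  # rising:  3.0
# Both scenarios give ~3 / 10,000 / day; the survey cannot tell them apart.
```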
Mark Myatt
Technical Expert

Answered:

14 years ago