I'm looking at a national SMART survey. The confidence intervals appear quite wide for SAM. As an example, one province has a SAM prevalence of 2.1% (95% CI 1.1-4.0%), and another has a SAM prevalence of 2.0% (95% CI 1.0-3.8%).
Could I assume that there is an error in the sample size calculation and/or missing data values?
When conducting national anthropometric surveys, what needs to be done to ensure greater precision for the SAM estimate?
Many thanks
I do not think there is any error. In a hypothetical survey sample of 1,000 children with a design effect of 1.5, the confidence interval around an estimate of 2% prevalence of severe acute malnutrition would be roughly 1.0-3.4%, similar to your examples. To achieve a statistically significant difference from the 2% cut-off, our hypothetical survey's estimate of prevalence would have to be 3.4% or greater. Fortunately, few populations have such an elevated prevalence of severe acute malnutrition in preschool-age children.
Because these prevalence rates are low, any differences between survey estimates and the 2% cut-off will be relatively small, thus requiring high precision in the survey estimate to achieve statistical significance at a p<0.05 level. If a survey's primary outcome of interest is the prevalence of severe acute malnutrition, the survey must be powered accordingly and will probably require a larger-than-average sample size.
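For illustration, a minimal back-of-envelope sketch of why the interval is that wide, assuming a simple normal approximation and an effective sample size; the intervals reported by SMART/ENA are calculated differently and are asymmetric at low prevalence, so the exact bounds will differ slightly:

```python
import math

def prevalence_ci(p, n, deff=1.0, z=1.96):
    """Approximate 95% CI for a prevalence from a cluster survey.

    Uses a simple normal approximation with the effective sample size
    (n / design effect); survey software uses more refined methods, so
    treat this purely as a back-of-envelope check.
    """
    n_eff = n / deff                      # effective sample size after clustering
    se = math.sqrt(p * (1 - p) / n_eff)   # standard error of the proportion
    return max(p - z * se, 0.0), p + z * se

# Hypothetical survey: 2% SAM prevalence, 1000 children, design effect 1.5
low, high = prevalence_ci(0.02, 1000, deff=1.5)
print(f"2.0% (95% CI {low * 100:.1f}-{high * 100:.1f}%)")   # roughly 0.9-3.1%
```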
Answered: 5 years ago

In agreement with Bradley. When a survey is designed, the sample size can be calculated according to the expected prevalence and the desired precision (width of the confidence intervals). Surveys are also usually designed to give a prevalence in one population (e.g. nationally or in a region). In order to compare two populations, the sample size would need to be calculated specifically for that comparison, with a specified power to demonstrate a difference (usually at least 80 percent, which means accepting a 20 percent chance that a true difference will be missed). Such a comparison is likely to require a very large sample size. Making a comparison using a smaller sample that was not designed for it will usually end up with a result of 'no evidence of a difference', because the confidence intervals of one population will overlap with the prevalence estimate of the other.
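To illustrate the scale of the problem, here is a rough sketch of the standard two-proportion sample size formula; the prevalences and design effect below are purely illustrative and do not come from the survey in question:

```python
import math
from scipy.stats import norm

def n_per_survey(p1, p2, alpha=0.05, power=0.80, deff=1.0):
    """Approximate sample size per survey to detect a change from p1 to p2.

    Standard two-proportion formula (normal approximation), inflated by the
    design effect to allow for cluster sampling.
    """
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided test at the 5% level
    z_b = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(deff * num / (p1 - p2) ** 2)

# Purely illustrative: detecting a rise in SAM from 1.7% to 2.3% with a
# design effect of 1.5 needs on the order of 13,000 children per survey.
print(n_per_survey(0.017, 0.023, deff=1.5))
```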
Hope that's helpful.
Jay
Answered: 5 years ago

Many thanks. Your answers are very helpful. The survey is a national anthropometric survey conducted using SMART methodology. The methods section of the summary report (the full report is not yet available) says that two sample sizes were calculated: one for the anthropometric survey of children 0-59 months, and a second for the mortality survey. Regarding the anthropometric survey, could I assume that the sample size was likely calculated according to the expected prevalence and desired precision for GAM rather than SAM?
Below is a table showing the 2018 SAM prevalence findings (with 95% confidence intervals) for some of the provinces in the survey that I'm referring to. The previous national SMART survey was conducted in 2014, so I've added a column showing the 2014 SAM prevalence for each of these provinces.
Province   2018 SAM prevalence (95% CI)   2014 SAM prevalence (95% CI)
A          2.0% (1.0-3.8%)                1.7% (1.0-3.1%)
B          2.3% (1.5-3.5%)                1.7% (0.9-3.1%)
C          2.7% (1.5-4.7%)                1.9% (0.7-4.9%)
D          2.2% (1.3-3.7%)                1.6% (0.8-3.4%)
The author judges that the methodologies of the two surveys are comparable and that an increase in SAM is observed in 2018 compared with 2014 in these four provinces.
As the confidence intervals for the 2014 and 2018 provincial results clearly overlap, is it incorrect to conclude that there was an increase in SAM prevalence in each of these provinces?
Would it instead be correct to say that there is no significant difference in SAM prevalence between 2014 and 2018, and that the overlap and width of the confidence intervals suggest that a larger sample size would have been needed to detect a difference?
The author also states that SAM now exceeds the 2% WHO emergency threshold in these 4 provinces (necessitating increased intervention).
It would be good to have your advice on the correct interpretation of these results.
Many thanks again
Answered: 5 years ago

It is likely, as you say, that the survey was designed to obtain a national prevalence of GAM, hence the wide CIs for SAM, and that it was not powered either to compare SAM with previous years or to compare provinces. You would need to look at the details of the design to confirm that.
Also, as you say, there is no evidence of differences from previous years. Looking at the CIs, the province with an estimated SAM prevalence of 2 percent has somewhere over a 50 percent probability that the true prevalence is at or above 2 percent, and the others have a higher probability. So whether to intervene depends on how certain you want to be. However, I strongly recommend reading André Briend's post earlier today on emergency thresholds.
Jay
Answered: 5 years ago

OK - thanks.
In summary, can we conclude that:
- we can't really be clear that there is a deterioration in the SAM situation from 2014 to 2018;
- it is incorrect to say there is an increase in SAM from 2014 to 2018 in these provinces;
- a survey with sufficient power/a larger sample size would need to be conducted to establish how the 2014 SAM prevalence compares with 2018.
Regarding André Briend's post on emergency thresholds, are you able to send me the link? I've tried searching for it, but can't find it.
Many thanks again for your very quick and helpful reply.
Answered: 5 years ago

Dear All:
Just a small point of clarification. Overlap in the confidence intervals around the estimates from two subgroups does not necessarily mean that the difference between the two estimates is not statistically significant. This is because the confidence intervals of the separate groups are calculated using the smaller sample size of each group on its own, whereas the calculation of the p-value for a difference between groups uses the variance derived from the pooled sample sizes of the two groups together. So the comparison of the two groups has substantially more precision than is reflected in the confidence intervals of the separate groups.
If the confidence intervals of two groups do not overlap, the estimates in those two groups are definitely statistically significantly different. However, if the confidence intervals do overlap, you can roughly gauge statistical significance by checking whether the confidence interval for each group includes the point estimate of the other group.
In the example from Anonymous 24408, let's look at Province A. The 2014 estimate of the prevalence of severe acute malnutrition is 1.7% (95% CI: 1.0, 3.1). This confidence interval DOES include 2.0%, which is the estimate of prevalence in 2018. In addition, the 2018 result is 2.0% (95% CI: 1.0, 3.8), so again, the confidence interval includes 1.7%, which is the estimate from the 2014 survey. In this example, we can tentatively conclude that these two surveys do not provide evidence that the prevalence of severe acute malnutrition in Province A changed between 2014 and 2018, but I would never base a report's or publication's conclusions on such a guesstimate. I only use this technique when I am reading a report and do not have access to the actual survey data. More definite conclusions about statistical significance need to be based on an appropriate chi-square test accounting for whatever complex sampling design was used.
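For anyone who only has the published prevalences and has to guess at the denominators, here is a sketch of the kind of test described above, using entirely made-up sample sizes; it is not a substitute for a design-based chi-square test on the raw data:

```python
from scipy.stats import chi2_contingency

def two_survey_test(p1, n1, p2, n2, deff=1.5):
    """Very rough comparison of two survey prevalences.

    Shrinks each sample to its effective size (n / design effect) and runs a
    simple chi-square test on the resulting 2x2 table. A proper analysis would
    use the raw cluster-level data with a design-based test, not this shortcut.
    """
    n1e, n2e = n1 / deff, n2 / deff
    table = [[p1 * n1e, (1 - p1) * n1e],   # [SAM, not SAM] in survey 1
             [p2 * n2e, (1 - p2) * n2e]]   # [SAM, not SAM] in survey 2
    chi2, p_value, _, _ = chi2_contingency(table)
    return p_value

# Province A with made-up denominators of 900 children per survey
print(two_survey_test(0.017, 900, 0.020, 900))   # p-value far above 0.05
```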
Answered: 5 years ago

Thanks. How would you best word the conclusion of the survey findings:
- that we can't really be clear that there is a deterioration in the SAM situation from 2014 to 2018;
- or should the report be left to say that the provinces do show an increase in SAM prevalence?
thanks
Answered: 5 years ago

Hi Anonymous 24408,
I believe that my esteemed colleagues Bradley and Jay have answered most of your questions. Just a quick heads-up from the SMART perspective.
You can check whether there is a statistically significant difference between the prevalences from the two surveys by using the CDC statistical calculator for two surveys. It comes with the SMART training package (managers' training); please check the annexes on the SMART Methodology website. The judgement will mainly be based on the interpretation of the p-value, as my colleagues indicated above. C.I. overlap is one quick way to do it as well.
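If only the summary report is available, one rough cross-check (not a substitute for the CDC calculator or a design-based analysis) is to back-calculate standard errors from the reported confidence intervals and run a simple z-test; a sketch, assuming the intervals are roughly symmetric:

```python
import math
from scipy.stats import norm

def p_value_from_cis(p1, lo1, hi1, p2, lo2, hi2):
    """Rough two-survey comparison using only reported estimates and 95% CIs.

    Back-calculates each standard error as (upper - lower) / (2 * 1.96), which
    assumes the intervals are roughly symmetric; SMART intervals at low
    prevalence are not, so treat the result as indicative only.
    """
    se1 = (hi1 - lo1) / (2 * 1.96)
    se2 = (hi2 - lo2) / (2 * 1.96)
    z = (p2 - p1) / math.sqrt(se1 ** 2 + se2 ** 2)
    return 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value

# Province A: 1.7% (1.0-3.1%) in 2014 vs 2.0% (1.0-3.8%) in 2018
print(p_value_from_cis(0.017, 0.010, 0.031, 0.020, 0.010, 0.038))   # ~0.7
```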
You also have to consider other circumstances when you draw comparisons, such as seasonality, sample size, design effect, other nutritional deficiencies, livelihood status or other health interventions in the area, whether CMAM services have expanded since the last survey, or whether there has been an outbreak of acute watery diarrhoea since the last survey, etc. You may wish to talk to health professionals and community members to inform your interpretation.
I have worked on SMART surveys and CMAM in Darfur and Yemen, and I know you sometimes feel pressured to find and report progress in the nutrition status of children within your programme areas to satisfy donors. When SMART reports are not consistent over time, they may cause confusion and frustration. However, when you look at your programme indicators, admission trends and performance, you should know whether your intervention is working or needs a bit of improvement. In all cases, interpret your SMART findings in your context; you and your field team can tell whether the nutrition situation is getting worse.
Hope that helps!
Sincerely,
Sameh
Answered: 5 years ago

Extending Bradley's point about having the actual data: although each individual province shows no evidence of a difference between the two time points, if you are able to obtain the actual numbers (number of children surveyed, number with GAM and number with SAM) for each of the 4 provinces at each of the two time points, then it would be possible to combine these into a single 'meta-analysis' of the overall change in the proportion of undernourished children across the 4 provinces combined, which would have more power to detect a change. Is that data available?
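A minimal sketch of what that combined comparison could look like, assuming hypothetical per-province counts (the real denominators are not in the summary report):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative only: the per-province counts are not in the summary report, so
# these numbers are invented to show the mechanics. Replace them with the real
# figures (children surveyed and children with SAM) if they can be obtained.
sam_2014 = np.array([10, 10, 11, 10]); n_2014 = np.array([600, 600, 580, 620])
sam_2018 = np.array([12, 14, 16, 13]); n_2018 = np.array([600, 610, 590, 600])

# Pool the four provinces into a single 2x2 table: rows are survey years,
# columns are [SAM, not SAM]. A stratified (Mantel-Haenszel) test by province,
# or a design-based analysis of the raw cluster data, would be more rigorous
# than this simple pooling.
table = [[sam_2014.sum(), n_2014.sum() - sam_2014.sum()],
         [sam_2018.sum(), n_2018.sum() - sam_2018.sum()]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"pooled 2014 vs 2018 comparison: p = {p_value:.2f}")
```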
Answered: 5 years ago

Thank you all for the very helpful comments. Regarding the last comment, I don't have direct access to the data, only the summary findings at the moment.
Answered: 5 years ago