The Food and Nutrition Technical Assistance Project II (FANTA-2) would like to announce the release of "Cluster Designs to Assess the Prevalence of Acute Malnutrition by Lot Quality Assurance Sampling: A Validation Study by Computer Simulation."
To assess acute malnutrition with a population-based survey, a large sample size is generally required. This is true even when lot quality assurance sampling (LQAS), an otherwise time- and cost-efficient method, is used. Cluster sampling, or sampling observations in batches, offers an alternative to the large simple random sample that would typically be needed for LQAS analysis.
The study examines the classification error of three cluster designs (a 67x3, a 33x6, and a sequential sampling scheme) used to assess the prevalence of acute malnutrition with LQAS. The study concludes that, for independent clusters with moderate intracluster correlation, all three sampling designs maintain approximate validity for LQAS analysis of acute malnutrition prevalence.
Although the 30x30 cluster design is currently the most common sampling method used to assess the prevalence of acute malnutrition in emergency settings, the 67x3, 33x6, and sequential sampling designs provide an alternative, well-tested approach to the collection and analysis of acute malnutrition data. Comparative field studies in Ethiopia and Sudan have shown that the alternative sampling designs provide reliable and reasonably precise results while requiring less time and fewer resources than a 30x30 cluster design.
The study was funded by the Office of Health, Infectious Disease and Nutrition in USAID's Bureau for Global Health and by grants to the Harvard School of Public Health from the US National Institutes of Health.
The article is available at [url]http://www.fantaproject.org/publications/rsss09.shtml[/url]
Concerning the 30x30 design for assessing anthropometric status in populations (30 children assessed in each of 30 clusters), the sample size was based on the following assumptions:
Prevalence of 50%; 95% confidence limits; +/-5% absolute error; design effect of 2
Using a sample size program such as OpenEpi (www.openepi.com), the sample size based on the above assumptions is 768. Dividing this by 30 clusters and rounding up gives 26 children to be measured in each cluster. This was rounded up further to 30 children per cluster in case the observed design effect turned out to be greater than 2.
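The arithmetic above can be sketched with the standard formula for estimating a prevalence, n = DEFF x z^2 p(1-p)/d^2, which is the calculation behind programs such as OpenEpi (the function name below is illustrative, and z = 1.96 for 95% confidence is assumed):

```python
import math

def prevalence_sample_size(p, d, deff=2.0, z=1.96):
    """Sample size to estimate a prevalence p within +/- d absolute error.

    n = DEFF * z^2 * p * (1 - p) / d^2, rounded to the nearest integer.
    """
    return round(deff * z**2 * p * (1 - p) / d**2)

# Assumptions from the text: prevalence 50%, +/-5% error, DEFF = 2
n = prevalence_sample_size(0.5, 0.05)  # 768
per_cluster = math.ceil(n / 30)        # 26 children in each of 30 clusters
```

Rounding the per-cluster count up to 30 then gives the familiar 30x30 design, with headroom for a design effect above 2.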
In some surveys the prevalences of stunting and low weight-for-age are considered important indicators of nutrition status, and these may be around 50% in some populations. If the primary indicator is low weight-for-height (wasting), with stunting and low weight-for-age as secondary indicators, then the sample size can be lowered dramatically because the prevalence of wasting tends to be much lower. For example, using the following assumptions:
Prevalence of 15%; 95% confidence limits; +/-5% absolute error; design effect of 2
The sample size is 392; if the survey is performed in 30 clusters, this rounds up to 14 children per cluster, i.e., a 14x30 cluster survey. Therefore, in comparing a cluster survey to LQAS designs for wasting, a 14x30 design would be a more appropriate comparison design. (Note that reducing the number of children assessed per cluster tends to lower the design effect, so the above calculation is conservative.)
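As a check on the wasting scenario, the same prevalence formula (a sketch, assuming z = 1.96 for 95% confidence and conventional rounding) reproduces the numbers above:

```python
import math

# n = DEFF * z^2 * p * (1 - p) / d^2
# Assumptions from the text: prevalence 15%, +/-5% error, DEFF = 2
n = round(2 * 1.96**2 * 0.15 * 0.85 / 0.05**2)  # 392

# Spread over 30 clusters, rounding up -> a 14x30 design
children_per_cluster = math.ceil(n / 30)  # 14
```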
In terms of deciding the number of clusters to visit, one issue is the average time it takes a team to reach a cluster. If the geographic area is large and/or it is difficult for a survey team to reach many of the clusters, then there is motivation to minimize the number of clusters and assess a greater number of children per cluster. If the area is small and the clusters are easy to reach, then assessing more clusters with fewer children per cluster has advantages. The number of children to assess per cluster is also affected by the intracluster correlation and the design effect.
An important issue that needs to be addressed is the cost of coming to the wrong conclusion based on survey results. What are the costs of a survey underestimating the prevalence of wasting? What are the costs of overestimating it? We know that these errors will occur, and one goal should be to minimize their frequency. Comparisons of different approaches to assessing wasting should address the cost of performing the survey as well as the cost of reaching the wrong conclusion.
Kevin Sullivan