Hi all, We would like to conduct a rapid nutritional assessment in an area that has been affected by floods three times in the last quarter. Could anyone suggest, with a reference, what sample size I should consider for this purpose? Earlier, in South Sudan, we used to consider at least 100 children for any rapid assessment using MUAC, but I could not find a reference for this. Regards, Monsurul
The RAM (Rapid Assessment Method) and similar methods in Ethiopia, Sierra Leone, and Sudan use n = 192, collected as 16 clusters of 12 children. The first-stage sample is taken using a stratified spatial sample or CSAS. The within-cluster sample is collected as 4 clusters of 3 children using a map/segment/sample method or QTR + EPI5. A PROBIT estimator is used. This yields useful precision.
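
For anyone wanting to see what a PROBIT-type estimate looks like in practice, here is a minimal illustrative sketch. It is not the RAM code: it assumes a simple normal model of MUAC fitted with the sample mean and SD (the RAM work also tests robust variants such as median with IQR), and the MUAC values shown are made up.

    # Minimal sketch of a PROBIT-type prevalence estimator: prevalence is taken
    # to be the probability that MUAC falls below the case-defining threshold
    # under a normal model fitted to the sample (an assumed, simplified form).
    from statistics import mean, stdev
    from math import erf, sqrt

    def probit_prevalence(muac_mm, threshold_mm=125.0):
        """Estimate prevalence as P(MUAC < threshold) under a fitted normal model."""
        mu = mean(muac_mm)
        sd = stdev(muac_mm)
        z = (threshold_mm - mu) / sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

    # Example with made-up MUAC data (mm); threshold 125 mm = GAM by MUAC:
    sample = [138, 129, 121, 144, 117, 133, 126, 140, 131, 124]
    print(probit_prevalence(sample))

This is only meant to show the mechanics; the point is that every MUAC value contributes to the estimate, not just those below the threshold.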
Mark Myatt
Technical Expert

Answered:

11 years ago
Just a question regarding the RAM: while the method may work in areas such as Ethiopia, Sudan, and Somalia, would this sample selection and sample size provide accurate enough estimates of GAM and SAM (even with PROBIT calculators) in areas with a high density of children under 5, where a small geographical area could contain in excess of 25,000 children? And would this methodology be adequate as a basis for projects and for decisions made by NGOs and the donor community?
Anonymous

Answered:

11 years ago
I suppose that you will have to ask "NGOs and the donor community". Here is some information that might help them decide ...

It is still early days for RAM. Work is proceeding. We have experienced few technical setbacks. RAM is a cluster-sampled method and has all the limitations of such a method WRT clustered phenomena. For example, it will have quite poor precision WRT some WASH indicators in rural settings. It is not a "magic bullet" for all that is weak in current approaches. It does provide some sample size savings. It also does not require population data in advance of sampling.

Work on testing the PROBIT method (still ongoing as we try to improve accuracy and precision) used computer-based simulations based on a survey area population of 100,000 total persons with 17% aged between 6 and 59 months (i.e. 17,000 children). This was felt to be typical of areas in which RAM might be applied and is not so different, in terms of sampling, from the 25,000 you mention above. Results show the method yields similar precision to (e.g.) a SMART survey with c. 3 times the n = 192 sample size. The gain in precision in field use is likely to be somewhat higher than this because the within-cluster sampling methods employed reduce the loss of variance (and hence DEFF) compared to the proximity sampling commonly used in SMART surveys. SMART surveys could, of course, adopt methods such as MSS or QTR + EPI5 for within-cluster sampling, which should result in improved precision.

There is an issue of accuracy (or bias). The classical estimator is generally unbiased. The PROBIT estimator is not unbiased. The level of bias is, however, small. Here is an example of relative precision and bias for GAM prevalence estimates with variants of the PROBIT estimator and the classical estimator using computer-based simulation:

    PERFORMANCE OF RAM/PROBIT CANDIDATE ESTIMATORS FOR GAM PREVALENCE

    Method   Location            Dispersion        Error (%)   Rel. Prec. (%)
    -------  ------------------  ----------------  ----------  --------------
    PROBIT   Mean                SD                    0.8667           23.99
             Mean (transformed)  SD (transformed)      0.7321           24.05
             Median              MAD * 1.42860         0.1852           24.58
             Median              IQR / 1.34898         0.0670           24.66
             Tukey's Trimean     IQR / 1.34898         0.1059           24.62
             Mid-hinge           IQR / 1.34898         0.1947           24.58
    -------  ------------------  ----------------  ----------  --------------
    CLASSIC  NA                  NA                   -0.0006           27.22
    -------  ------------------  ----------------  ----------  --------------

The classical method is unbiased. PROBIT with (e.g.) median and IQR is slightly more precise (i.e. the 95% CI will be about 10% narrower) than the classical method at the tested sample sizes (i.e. n = 192 for PROBIT and n = 544 for CLASSIC). The bias for this PROBIT variant is 0.067% (i.e. almost zero), so the method misestimates prevalence only very slightly.

I think the work outlined above shows that RAM can do as well as SMART with GAM (it does much better with SAM in terms of precision) with smaller sample sizes. If SMART is good enough for "NGOs and the donor community" then RAM/PROBIT should also be good enough. I do not want to make too strong a claim for RAM. More testing is required, but the method remains a very promising approach.

Here are some notes WRT indicators ... PROBIT makes very full use of the data compared to the classical estimator. This can account for the improved performance. This is an example of indicator redesign. The approach here has been to reverse the frequentist relationship of:

    probability = proportion

so that:

    proportion = probability

that is, instead of using an observed proportion to estimate a probability, a modelled probability is used to estimate the proportion. This approach can be applied to other indicators.
For example, estimating proportions using survival probabilities for continuing breastfeeding is about eight times more efficient (i.e. in terms of sample size requirements) than using the proportion-based approach of the current IYCF indicators. It is important to note that the survival-based indicator has problems that limit its utility for frequent M&E, but the general approach works.

Another approach is to use whole-sample indicators. IYCF indicators are plagued with sample size issues because the denominators become vanishingly small for some indicators. For example, the indicator for continuing breastfeeding at 12 months (CBF12M) would have a sample size of n = 50 from a survey sample of n = 900 (large for SMART). If CBF12M is clustered then n = 50 might become an effective sample size of n = 25 or fewer. It is, however, possible to rethink the IYCF indicator set to have indicators that apply to the entire sample but still provide useful information. We have been using this approach in a couple of countries with some success.

I am not sure about the issue of population density. RAM is a general purpose survey method employing a spatial sample in the first stage. This has allowed the method to be used in urban settings, which are (by definition) areas of high population density.

I hope this helps.
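
As a footnote to the survival-based IYCF point above, here is a rough, illustrative sketch (not the method used in RAM surveys) of the general model-based idea: fit a simple curve of "still breastfeeding" against age to the whole 6-23 month sample and read off the fitted probability at 12 months, rather than relying on the small 12-15 month denominator. The data are simulated and the logistic-in-age model is an assumption made only for this illustration.

    # Illustrative only: proportion-based vs model-based ("current status" style)
    # estimate of continued breastfeeding at 12 months, using simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    age = rng.uniform(6, 24, size=300)               # ages in months
    p_true = 1 / (1 + np.exp((age - 16) / 3))        # assumed decline with age
    bf = rng.binomial(1, p_true)                     # 1 = still breastfed

    # Proportion-based indicator: only children aged 12-15 months contribute.
    band = (age >= 12) & (age < 16)
    print("denominator:", int(band.sum()), "proportion:", round(bf[band].mean(), 3))

    # Model-based indicator: a logistic curve fitted to the whole age range.
    fit = sm.Logit(bf, sm.add_constant(age)).fit(disp=0)
    p12 = float(fit.predict(np.array([[1.0, 12.0]]))[0])
    print("fitted P(still breastfeeding) at 12 months:", round(p12, 3))

The denominator for the proportion-based estimate is a small fraction of the sample, while the model-based estimate uses every child, which is where the efficiency gain comes from.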
Mark Myatt
Technical Expert

Answered:

11 years ago
BTW ... forgot to say that EPI uses n = 210 (as 30 clusters of 7). EPI coverage is often quite patchy so we end up doing M&E on our most important and effective child-survival program using surveys with an effective sample size of n = 100 or fewer. The main differences between EPI and RAM are that RAM uses a (smaller) spatial sample in stage 1 and a more representative sample in stage 2.
Mark Myatt
Technical Expert

Answered:

11 years ago
Thank you for your detailed explanation of RAM; it certainly gives clarity to my questions. Would you have at hand any reference/literature on the methodology, including on analysing the data, so that I can gain a greater understanding? Thanks
Anonymous

Answered:

11 years ago
Much literature has been produced in co-operation with partners (governments, UNOs, NGOs) and I am not free to distribute that without first seeking permissions. There is the [url=http://www.brixtonhealth.com/proposalRAM.pdf]original RAM proposal[/url], the [url=http://www.brixtonhealth.com/updateRAM01.pdf]first development update[/url] (with new PROBIT estimators tested), and the [url=http://www.brixtonhealth.com/meSL.pdf]Sierra Leone M&E manual[/url] (material on sampling). These cover some aspects of RAM. Data analysis is by a blocked and weighted bootstrap (BWB) estimation procedure (this will be described in brief by HelpAge in their upcoming report from CHAD). The BWB procedure takes into account the sample design (i.e. blocking for the cluster-sample design and weighting by a "roulette wheel" algorithm for posterior weighting). If you need a detailed technical briefing on RAM then you should contact VALID or me directly.
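
In case it helps to see the shape of such a procedure, here is a heavily simplified sketch of a blocked and weighted bootstrap. It is not the RAM analysis code: the way I have implemented the "roulette wheel" weighting (selecting clusters with probability proportional to a weight such as cluster population) and the choice of 399 replicates are assumptions made for the illustration.

    # Simplified sketch of a blocked & weighted bootstrap (BWB):
    #   - blocking  : clusters are resampled whole, respecting the sample design
    #   - weighting : clusters are drawn with probability proportional to a weight
    #                 ("roulette wheel" selection), e.g. cluster population
    import random
    import statistics

    def bwb(clusters, weights, statistic, replicates=399):
        ids = list(range(len(clusters)))
        estimates = []
        for _ in range(replicates):
            picked = random.choices(ids, weights=weights, k=len(clusters))
            resample = []
            for i in picked:
                resample.extend(random.choices(clusters[i], k=len(clusters[i])))
            estimates.append(statistic(resample))
        estimates.sort()
        point = statistics.median(estimates)
        lcl, ucl = estimates[int(0.025 * replicates)], estimates[int(0.975 * replicates)]
        return point, lcl, ucl

    # Dummy example: GAM by MUAC (< 125 mm) from 16 clusters of 12 children.
    random.seed(0)
    clusters = [[random.gauss(135, 12) for _ in range(12)] for _ in range(16)]
    weights = [1.0] * 16                  # would be cluster populations in practice
    gam = lambda xs: sum(x < 125 for x in xs) / len(xs)
    print(bwb(clusters, weights, gam))

The replicate estimates can also be inspected directly (e.g. plotted) to see the shape of the sampling distribution.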
Mark Myatt
Technical Expert

Answered:

11 years ago
Much appreciated, especially the rapidity of your replies. Thank you!
Anonymous

Answered:

11 years ago
Here is the [url=http://www.brixtonhealth.com/updateRAM02.pdf]second RAM development update[/url]. This has results of testing a few more variants of the PROBIT indicator. All results are for GAM and SAM by MUAC (< 125 mm and < 115 mm). There is also some material describing IYCF indicators in RAM type surveys.
Mark Myatt
Technical Expert

Answered:

11 years ago
Dear Mark and Anonymous 585, Thanks a lot for the discussion and references on determining the sample size for a rapid nutritional assessment. It helped me a lot. In our case, I used stratified random sampling as we have only four villages or settlements in the flood-affected areas. Within each stratum I used systematic random sampling to select the U5 children for screening by MUAC and weight-for-height. The sample size was 110 for this rapid assessment, using a GAM prevalence of 20%, d = 0.075, and 95% confidence. In reality we actually screened more than 120 children. Regarding generalisation of the findings, I am aware of the limitations and conclude that the results represent only the area surveyed. I would appreciate it if you (Mark) could provide your opinion on the methodology we used for the rapid nutritional assessment; if interested, I could send you the data and methodology document. Regards
Mohammad Monsurul Hoq

Answered:

11 years ago
Glad to have been of use. I am a little confused by the methodology you describe. Best if you send me the methodology. My email address is: mark@twinkletoes.cinderella.brixtonhealth.com without the "twinkletoes.cinderella" (this is just to hinder spam).
Mark Myatt
Technical Expert

Answered:

11 years ago

Hi, I want to develop a rapid nutrition assessment guideline to act as a plug-in to the MIRA and IRA assessments in locations where the IPC phase is 3, 4, or 5. The assessment would be a precursor to determine whether a full SMART survey is needed. After reviewing numerous resources I have seen quite a few organisations and cluster guidelines which recommend a sample of 100 children aged 6-59 months when doing a rapid nutrition assessment. I was wondering if anyone knows what the rationale behind this number is; I haven't been able to find it. Thanks, Sinead.

Sinead O Mahony

Answered:

7 years ago

Sometimes "rapid" can mean "quick and dirty but cheap". That is not always the case. I think you need to be sure to use a representative sampling method to avoid selection biases.

The n = 100 is useful when using a classification approach. A truncated sequential sampling approach, as used (e.g.) for HIV drug resistance surveillance, can provide accurate and reliable prevalence classifications into < 5%, 5% to 15%, and > 15% classes using a sample size of just n = 47. Using n = 100 would provide finer classifications and/or smaller errors.
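
To make the classification idea concrete, here is a small sketch of a (non-sequential) LQAS-style three-class rule. It is a simplification of the truncated sequential method mentioned above, and the cut-points shown (placed at the class boundaries) are illustrative rather than taken from any protocol.

    # Simple three-class prevalence classifier (illustrative cut-points only).
    from math import comb

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    def classify(cases, c1, c2):
        if cases <= c1:
            return "< 5%"
        return "5% to 15%" if cases <= c2 else "> 15%"

    n, c1, c2 = 100, 5, 15            # cut-points at the class boundaries
    print(classify(12, c1, c2))       # -> "5% to 15%"

    # Error check: probability of classifying "< 5%" when true prevalence is 10%
    print(round(binom_cdf(c1, n, 0.10), 3))   # about 0.058

Real applications tune the cut-points (and, in the sequential version, the stopping rules) so that the probabilities of gross misclassification are acceptably small; the binomial calculation above is how you would check them.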

If you use n = 100 and a simple estimator on (e.g.) a 15% prevalence, the 95% CI will be something like:

+/- 1.96 * sqrt(0.15 * (1 - 0.15) / 100) = 7%


assuming a simple random sample. With a design effect of 2.0 it will be about +/- 10%. I think you will be better off with a classifier.
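
For reference, the arithmetic above can be checked with a couple of lines of code (this is just the standard normal-approximation half-width with an optional design effect; nothing survey-specific is assumed):

    # 95% CI half-width for a proportion, with an optional design effect (DEFF).
    from math import sqrt

    def half_width(p, n, deff=1.0):
        return 1.96 * sqrt(deff * p * (1 - p) / n)

    print(round(half_width(0.15, 100), 3))             # 0.07  -> +/- 7%
    print(round(half_width(0.15, 100, deff=2.0), 3))   # 0.099 -> about +/- 10%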

The alternative is to use the RAM (Rapid Assessment Methodology). This uses a small spatial sample (n = 192 collected as 16 clusters of 12 children), a PROBIT estimator, and computer-intensive methods (that used to mean waiting a week for the answer but now means waiting a minute or two). The RAM methodology has been used by HelpAge (as RAM-OP), GOAL (prevalence of low H/A), UNICEF (M&E surveys, nutritional surveillance), VALID (M&E surveys, prevalence surveys), GAIN (M&E, prevalence, coverage), ACF (nutritional surveillance), SCF (nutritional surveillance), and others for a variety of purposes. The method costs about 60% of the cost of a SMART survey and gives similar precision to a SMART survey for GAM prevalence and much better precision for SAM prevalence. With RAM you would not need to do a follow-up SMART survey. Additional savings can be made using a Bayesian-PROBIT estimator. ACF have achieved good results with n = 132.

Let me know if you need more information on anything in this post.

Mark Myatt
Technical Expert

Answered:

7 years ago

Hi Sinead,

Thanks for your question.

I think it could be good for you to be in touch with the IPC team directly, who also have a Nutrition Adviser for these questions. It may be best to liaise with them if you are eager to align it with IPC. Please reach out to Sophie Chotard, the Global IPC programme manager: Sophie.Chotard@fao.org

thank you!

Silke Pietzsch, Action Against Hunger

Answered:

7 years ago

The Rapid SMART Methodology would be helpful in circumstances similar to Anonymous 3089’s. This is an emergency tool developed to rapidly estimate the prevalence of GAM and SAM in contexts where information is required quickly or time for data collection is limited.

The tool has been piloted in South Sudan, Madagascar, Afghanistan, India, Myanmar, and Iraq.

The Rapid SMART Guideline is available HERE

Thanks

Kennedy Musumba

Answered:

7 years ago

I have a new question about the sample size for a rapid assessment during COVID-19. The assessment will be conducted by telephone; the answer will help me to design a proposal. How large a sample do you suggest would be sufficient?

Thanks

Yohannes

Answered:

4 years ago

Telephone surveys can be difficult to get right as all sorts of biases can be introduced and there are quite a few difficulties that need to be addressed.

SAGE have a volume "How to Conduct Telephone Surveys" as part of their "survey toolkit" series that you may find useful ... see:

    https://dx.doi.org/10.4135/9781412984423

It is cheap from Amazon.

Sample size needs are not that different from those of any other survey. We really need to know, for each key indicator, what value you expect and how precise you want the estimate to be. The simplest calculation is:

    n = (p * (1 - p)) / (e / 1.96)^2

where:

    n = required sample size
    p = expected proportion
    e = half-width for 95% confidence interval

If (e.g.) you are collecting data on dietary diversity and expect 25% of children to be consuming the appropriate number of food groups and you want precision of +/- 5% then:

    n = (0.25 * (1 - 0.25)) / (0.05 / 1.96)^2 = 288

This is the sample size needed for a simple random sample.
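
If it is useful, the calculation can be scripted so that different values of p and e can be tried quickly (this is just the formula above for a simple random sample; a cluster sample would need the result multiplied by the expected design effect):

    # Sample size for estimating a proportion with a 95% CI of +/- e (SRS).
    from math import ceil

    def srs_sample_size(p, e, z=1.96):
        return ceil(p * (1 - p) / (e / z) ** 2)

    print(srs_sample_size(0.25, 0.05))   # 289 (288.1 before rounding up)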

I hope this is useful.

Mark Myatt
Technical Expert

Answered:

4 years ago