Answered:
11 years ago
PERFORMANCE OF RAM/PROBIT CANDIDATE ESTIMATORS FOR GAM PREVALENCE
Method Location Dispersion Error (%) Rel. Prec. (%)
------- ------------------ ---------------- --------- --------------
PROBIT Mean SD 0.8667 23.99
Mean (transformed) SD (transformed) 0.7321 24.05
Median MAD * 1.42860 0.1852 24.58
Median IQR / 1.34898 0.0670 24.66
Tukey's Trimean IQR / 1.34898 0.1059 24.62
Mid-hinge IQR / 1.34898 0.1947 24.58
------- ------------------ ---------------- --------- --------------
CLASSIC NA NA -0.0006 27.22
------- ------------------ ---------------- --------- --------------
The classical method is unbiased. PROBIT with (e.g.) the median and IQR is slightly more precise than the classical method (i.e. the 95% CI will be about 10% narrower) at the tested sample sizes (n = 192 for PROBIT and n = 544 for CLASSIC). The bias for this PROBIT variant is 0.067 percentage points (i.e. almost zero), so any under- or over-estimation of prevalence is very slight.
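As a rough sketch of the PROBIT idea behind the table above (not the full RAM implementation, and with purely illustrative survey values): fit a normal distribution to the WHZ data using robust location and dispersion estimates, then read the GAM prevalence off the fitted CDF at WHZ = -2. The 1.34898 constant converts the IQR to a robust SD estimate, as in the table.

```python
from statistics import NormalDist

def probit_gam_prevalence(whz_median, whz_iqr):
    """Estimate GAM prevalence (proportion with WHZ < -2) by fitting a
    normal distribution using robust location (median) and dispersion
    (IQR / 1.34898, a robust estimate of the SD)."""
    robust_sd = whz_iqr / 1.34898
    # P(WHZ < -2) under the fitted normal distribution
    return NormalDist(mu=whz_median, sigma=robust_sd).cdf(-2.0)

# Hypothetical survey: median WHZ = -0.9, IQR = 1.5
print(round(probit_gam_prevalence(-0.9, 1.5) * 100, 1))  # GAM prevalence, %
```

This uses every WHZ observation to fit the distribution rather than just counting children below the cut-off, which is why the estimator can be more precise than the classical proportion at the same sample size.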
I think the work outlined above shows that RAM can do as well as SMART with GAM (and much better with SAM in terms of precision) at smaller sample sizes. If SMART is good enough for "NGOs and the donor community" then RAM/PROBIT should also be good enough.
I do not want to make too strong a claim for RAM. More testing is required, but the method remains a very promising approach.
Here are some notes WRT indicators ...
PROBIT makes much fuller use of the data than the classical estimator does, which can account for the improved performance. This is an example of indicator redesign. The approach here has been to reverse the frequentist formula of:
probability = proportion
so that:
proportion = probability
This approach can be applied to other indicators. For example, estimating the proportion of children continuing breastfeeding from survival probabilities is about eight times more efficient (i.e. in terms of sample size requirements) than the proportion-based approach of the current IYCF indicators. It is important to note that the survival-based indicator has problems that limit its utility for frequent M&E, but the general approach works.
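A minimal sketch of the survival idea: a Kaplan-Meier estimate of S(12) uses every child in the sample (those who have stopped breastfeeding as events, the rest as censored), rather than only the narrow age band the proportion-based indicator would use. The data and function name here are hypothetical, and this is the textbook estimator, not the specific indicator referred to above.

```python
def km_survival(times, events, t):
    """Kaplan-Meier survival probability at time t.

    times  : age (months) at which each child stopped breastfeeding,
             or age at interview if still breastfeeding (censored)
    events : 1 if the child stopped breastfeeding, 0 if censored
    t      : age at which to evaluate S(t)
    """
    s = 1.0
    for ti in sorted(set(times)):      # walk distinct times in order
        if ti > t:
            break
        d = sum(1 for x, e in zip(times, events) if x == ti and e == 1)
        n = sum(1 for x in times if x >= ti)   # number still at risk
        if n > 0:
            s *= 1 - d / n
    return s

# Six hypothetical children: ages and whether breastfeeding had stopped
ages   = [14, 10, 12, 18, 9, 13]
events = [ 0,  1,  1,  0, 1,  0]
print(km_survival(ages, events, 12))  # estimated proportion still BF at 12 months
```

Because every observation contributes to the survival curve, the effective sample size is far larger than the handful of children who happen to fall in the 12-15 month denominator of the conventional indicator.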
Another approach is to use whole-sample indicators. IYCF indicators are plagued with issues of sample size because the denominators become vanishingly small for some indicators. For example, the indicator for continuing breastfeeding at 12 months (CBF12M) would have a sample size of n = 50 from a survey sample of n = 900 (large for SMART). If CBF12M is clustered then n = 50 might become an effective sample size of n = 25 or fewer. It is, however, possible to rethink the IYCF indicator set to have indicators that apply to the entire sample but still provide useful information. We have been using this approach in a couple of countries with some success.
I am not sure about the issue of population density. RAM is a general purpose survey method employing a spatial sample in the first stage. This has allowed the method to be used in urban settings which are (by definition) areas of high population density.
I hope this helps.
Answered:
11 years ago
mark@twinkletoes.cinderella.brixtonhealth.com
without the "twinkletoes.cinderella" (this is just to hinder spam).
Answered:
11 years ago
Hi, I want to develop a rapid nutrition assessment guideline to act as a plug-in to MIRA and IRA assessments in locations where the IPC is at phase 3, 4, or 5. The assessment would be a precursor to determine whether a full SMART survey is needed. After reviewing numerous resources I have seen quite a few organisations and cluster guidelines that recommend a sample of 100 children aged 6-59 months when doing a rapid nutrition assessment. I was wondering if anyone knows the rationale behind this number? I haven't been able to find it. Thanks, Sinead.
Answered:
7 years ago
Sometimes "rapid" can mean "quick and dirty but cheap". That is not always the case. I think you need to be sure to use a representative sampling method to avoid selection biases.
The n = 100 is useful when using a classification approach. A truncated sequential sampling approach, as used (e.g.) for HIV drug resistance monitoring, can provide accurate and reliable prevalence classifications into < 5%, 5% to 15%, and > 15% classes using a sample size of just n = 47. Using n = 100 would provide finer classifications and/or smaller errors.
If you use n = 100 and a simple estimator on (e.g.) a 15% prevalence the 95% CI will be something like:
+/- 1.96 * sqrt(0.15 * (1 - 0.15) / 100) = 7%
assuming a simple random sample. With a design effect of 2.0 it will be about +/- 10%. I think you would be better off with a classifier.
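The calculation above can be checked with a few lines of code (the design-effect adjustment simply inflates the variance of the simple-random-sample estimator):

```python
from math import sqrt

def ci_half_width(p, n, deff=1.0, z=1.96):
    """Approximate 95% CI half-width for an estimated proportion p from a
    sample of size n, inflated by the design effect (DEFF) to account for
    cluster sampling."""
    return z * sqrt(deff * p * (1 - p) / n)

print(round(ci_half_width(0.15, 100), 3))            # simple random sample: ~0.070
print(round(ci_half_width(0.15, 100, deff=2.0), 3))  # DEFF = 2.0: ~0.099, i.e. about +/- 10%
```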
The alternative is to use RAM (the Rapid Assessment Methodology). This uses a small spatial sample (n = 192, collected as 16 clusters of 12 children), a PROBIT estimator, and computer-intensive methods (which used to mean waiting a week for the answer but now mean waiting a minute or two). The RAM methodology has been used by HelpAge (as RAM-OP), GOAL (prevalence of low H/A), UNICEF (M&E surveys, nutritional surveillance), VALID (M&E surveys, prevalence surveys), GAIN (M&E, prevalence, coverage), ACF (nutritional surveillance), SCF (nutritional surveillance), and others for a variety of purposes. The method costs about 60% of the cost of a SMART survey and gives similar precision to a SMART survey for GAM prevalence and much better precision for SAM prevalence. With RAM you would not need to do a second SMART survey. Additional savings can be made using a Bayesian-PROBIT estimator; ACF have achieved good results with n = 132.
Let me know if you need more information on anything in this post.
Answered:
7 years ago
Hi Sinead,
thanks for your question.
I think it would be good for you to be in touch with the IPC team directly, who also have a Nutrition Adviser for these questions. It is probably best to liaise with them if you are eager to align with the IPC. Please reach out to Sophie Chotard, the Global IPC programme manager: Sophie.Chotard@fao.org
thank you!
Answered:
7 years ago
The Rapid SMART Methodology would be helpful in circumstances similar to Anonymous 3089's. This is an emergency tool developed to rapidly estimate the prevalence of GAM and SAM in contexts where information is required quickly or time for data collection is limited.
The tool has been piloted in South Sudan, Madagascar, Afghanistan, India, Myanmar, and Iraq.
The Rapid SMART Guideline is available HERE
Thanks
Answered:
7 years ago
I have a new question about the sample size for a rapid assessment during COVID-19. The assessment will be conducted by telephone, and the answer will help me to design a proposal. What sample size do you suggest would be sufficient?
Thanks
Answered:
4 years ago
Telephone surveys can be difficult to get right as all sorts of biases can be introduced and there are quite a few difficulties that need to be addressed.
SAGE have a volume "How to Conduct Telephone Surveys" as part of their "survey toolkit" series that you may find useful ... see:
https://dx.doi.org/10.4135/9781412984423
It is cheap from Amazon.
Sample size needs are not that different from those of any other survey. We really need to know, for each key indicator, what value you expect and how precise you want the estimate to be. The simplest calculation is:
n = (p * (1 - p)) / (e / 1.96)^2
where:
n = required sample size
p = expected proportion
e = half-width for 95% confidence interval
If (e.g.) you are collecting data on dietary diversity and expect 25% of children to be consuming the appropriate number of food groups and you want precision of +/- 5% then:
n = (0.25 * (1 - 0.25)) / (0.05 / 1.96)^2 = 288
This is the sample size needed for a simple random sample.
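The worked example above translates directly into code (multiply the result by the expected design effect for a cluster design):

```python
def sample_size(p, e, z=1.96):
    """Sample size for estimating a proportion p with a 95% CI half-width
    of e under simple random sampling: n = p * (1 - p) / (e / z)^2."""
    return (p * (1 - p)) / (e / z) ** 2

# Dietary diversity example: p = 25%, desired precision +/- 5%
print(round(sample_size(0.25, 0.05)))  # 288
```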
I hope this is useful.
Answered:
4 years ago