Answered:
9 years ago
You combine the three surveys. You have:
n = 1800 (from the three surveys)
and an estimate of:
p = 12% (95% CI: 9.5% to 14.5%)
The width of this 95% CI is:
w1 = 14.5% - 9.5% = 5%
We can calculate the expected width of the 95% CI assuming
simple random sampling:
n = 1800
p = 0.12 (i.e. 12%)
The width of the 95% CI would be:
2 * 1.96 * sqrt((p * (1 - p)) / n)
2 * 1.96 * sqrt((0.12 * (1 - 0.12)) / 1800) = 0.03 (3%)
The ratio of the two CI widths in this example is:
5% / 3% = 1.67
Strictly speaking, this width ratio is the design factor (DEFT); the design effect proper (DEFF), the ratio of variances, is its square: 1.67² ≈ 2.8.
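The arithmetic above can be checked with a short script. This is only a sketch: the 1.96 multiplier assumes a normal approximation to the binomial, and it also computes the squared width ratio, since the design effect is conventionally a ratio of variances while the ratio of CI widths is its square root (DEFT):

```python
import math

# Observed from the three combined surveys
n = 1800
p = 0.12
ci_low, ci_high = 0.095, 0.145
observed_width = ci_high - ci_low                    # 0.05

# Expected 95% CI width under simple random sampling
srs_width = 2 * 1.96 * math.sqrt(p * (1 - p) / n)    # ~0.0300

deft = observed_width / srs_width   # ratio of widths (design factor, ~1.67)
deff = deft ** 2                    # design effect (ratio of variances, ~2.77)
print(round(srs_width, 4), round(deft, 2), round(deff, 2))
```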
I hope this is of some use.
Answered:
9 years ago
Action Against Hunger (ACF) has been working in the DRC since 1997, with programs in nutrition and health, food security and livelihoods, and WASH. ACF implements emergency and development intervention programs to reduce morbidity and mortality due to malnutrition. Multisectoral approaches integrating nutrition and WASH, in accordance with the national strategy, are implemented in 6 provinces. One of ACF's main areas of expertise in the DRC is rapid response to nutritional crises. Since 2008, ACF's emergency response teams have conducted more than 100 nutritional surveys and treated more than 60,000 malnourished children. Over the past four years, ACF has carried out more than 35 rapid response interventions across the country, including in the provinces of Kasai Central and Oriental, Sankuru, Kwilu, Kwango, Equateur, Tshuapa, Maniema and Tanganyika. Currently ACF operates in the provinces of North and South Kivu, Kasaï and Kasaï Central.
The DRC mission has 285 national and 45 international staff. The candidate will have the opportunity to work with a qualified team of different nationalities and a pleasant professional atmosphere.
This is a newly created position, which aims to ensure the development and implementation of the mission's nutritional surveillance strategy in the DRC.
RESPONSIBILITIES
Under the supervision of the Deputy Country Director, your main mission will be to develop the nutritional surveillance strategy of the mission in the DRC, by coordinating the implementation and monitoring of quality nutritional assessments (SMART, Rapid SMART, SQUEAC and other assessments as needed), while ensuring nutritional data analysis and local capacity building (PRONANUT, SNSAP and local partners).
More specifically, you will be in charge of:
In close collaboration with the Nutrition and Health department, develop a strategy focused on nutritional surveillance for 2023-2024 in the DRC, including the development of a model for conducting SMART/Rapid SMART with partners.
Plan and conduct SMART/Rapid SMART surveys or other nutritional assessments in partnership with nutrition/health partners and PRONANUT.
Strengthen PRONANUT's surveillance expertise.
Contribute to the revitalization of the SNSAP.
Manage the surveillance program team.
PROFILE
You have medical, paramedical or similar training, with at least 5 years of professional experience in NGOs in the field of nutrition and/or health, including 2 years in nutritional surveillance (SMART, SQUEAC, KAP and other evaluations) and MEAL with an international NGO.
You are recognized for your coordination, management, representation, negotiation and diplomacy skills.
Good knowledge of project and project-cycle management, as well as of Nutrition Cluster mechanisms, is necessary, as you will work in close collaboration with the government and partners.
DURATION: 12 MONTHS
START DATE: 01-08-2022
Application details can be found here: https://recrutement.actioncontrelafaim.org/fr/offre/6250/UN-E-RESPONSABLE-DU-DEPARTEMENT-SURVEILLANCE-NUTRITIONNELLE
Answered:
9 years ago
Dear anonymous 2744,
You can also look at previous national surveys (e.g. DHS), as they usually include in their appendices the design effects (at national and sub-national level) for all variables used in the survey. This would help you in guesstimating your DEFF.
I would also suggest getting in touch with the SMART people. You could pose your question on their forum at http://smartmethodology.org/forums/. I'm sure they would also be happy to help you with your concern.
Regards,
Derich
Answered:
9 years ago
Hello dear community,
I am working on a health and nutrition sector assessment, including an assessment of health facilities.
Could you share examples of evaluation tools that capture the performance of a health center in the management of malnutrition, and also IYCF, without the tool being too long?
Thanks
Answered:
9 years ago
Is there a reference for the following:
"If you don't have any information on prevalences of your main indicator, then it is best to use 1.5 design effect."
Answered:
7 years ago
As Mark says, lots of handwaving and guesstimates. However, there is one paper which presents design effects for various indicators commonly measured in emergency nutrition and health assessment surveys: Kaiser R, Woodruff BA, Bilukha O, Spiegel P, Salama P. Using design effect from previous cluster surveys to guide sample size calculation in emergency settings. Disasters 2006;30:199-211. In the surveys presented in this paper, the design effect for acute malnutrition in children ranges from 0.8 to 2.4, with most surveys falling between 1.1 and 1.6; hence the recommendation to use 1.5 as the assumed design effect.
But the design effect is the wrong measure to extract from prior surveys. The design effect is heavily influenced by cluster size, so the design effect from a prior survey with a very different cluster size may not be applicable to your planned survey. The intracluster correlation coefficient (ICC, sometimes called rho) is a much better measure of the inherent heterogeneity of distribution of an indicator. It reflects the proportion of total variance which is due to differences between clusters. Unfortunately, the editors removed the important discussion of ICC from the paper referenced above, but here is an excellent discussion of ICC: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1466680/.
For this reason, when calculating sample size for a planned survey, I would first decide what the average cluster size will be (perhaps the number of households a team can complete in 1 day), then derive the ICC from prior survey(s). If these prior survey(s) fail to report the ICC (as most survey reports do), but give you the design effect and average cluster size, you can calculate the ICC: ICC = (design effect - 1) / (cluster size - 1). Then apply this ICC to the planned survey by calculating the expected design effect using the same relationship: design effect = 1 + (cluster size - 1) x ICC. This process may be overkill for the usual indicators, which do not have very high ICCs or design effects, but if you are measuring something for which you expect a lot of heterogeneity of distribution (like water supply, sanitation, or vaccination coverage), this step may mean the difference between assuming a design effect of 5 and wasting survey resources when the actual design effect for your survey data turns out to be only 3.
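The two conversions described above can be sketched in a few lines. The prior-survey design effect (2.0) and the cluster sizes (30 and 15 households) below are purely hypothetical illustration values:

```python
# Derive the ICC from a prior survey's reported design effect and cluster size,
# then project the design effect for a planned survey with a different cluster size.

def icc_from_deff(deff, cluster_size):
    # ICC = (DEFF - 1) / (cluster size - 1)
    return (deff - 1) / (cluster_size - 1)

def deff_from_icc(icc, cluster_size):
    # DEFF = 1 + (cluster size - 1) * ICC
    return 1 + (cluster_size - 1) * icc

# Hypothetical prior survey: DEFF of 2.0 with clusters of 30 households
icc = icc_from_deff(2.0, 30)            # ~0.0345
# Planned survey with smaller clusters of 15 households
planned_deff = deff_from_icc(icc, 15)   # ~1.48
print(round(icc, 4), round(planned_deff, 2))
```

Note how halving the cluster size brings the expected design effect down from 2.0 to about 1.5 at the same ICC.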
You can also use the ICC to determine the effect of decreasing the cluster size for the planned survey. If you complete the sample size calculation for an indicator with a high design effect and decide that it is infeasibly large, you can always decrease the cluster size to achieve a lower design effect. However, as shown in the paper cited above, increasing the sample size by increasing the cluster size is an exercise in futility. The increased design effect resulting from increased cluster size usually cancels out any precision advantage from the increased sample size. So if you want to increase the precision of your survey, increase the number of clusters and decrease the size of each cluster.
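The trade-off described above can be illustrated with DEFF = 1 + (cluster size - 1) x ICC and the effective sample size n / DEFF. All numbers here are hypothetical; with higher ICCs the precision gain from bigger clusters shrinks towards nothing:

```python
# Effective sample size under a cluster design: n_eff = n / DEFF,
# with DEFF = 1 + (m - 1) * ICC where m is the cluster size.

def effective_n(n_clusters, cluster_size, icc):
    n = n_clusters * cluster_size
    deff = 1 + (cluster_size - 1) * icc
    return n / deff

icc = 0.05  # hypothetical intracluster correlation
base = effective_n(30, 20, icc)             # 30 clusters of 20 households
bigger_clusters = effective_n(30, 40, icc)  # same clusters, doubled cluster size
more_clusters = effective_n(60, 20, icc)    # doubled clusters, same cluster size
print(round(base), round(bigger_clusters), round(more_clusters))
```

Doubling the cluster size doubles the fieldwork but adds only about a third more effective observations, whereas doubling the number of clusters doubles the effective sample size.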
Answered:
7 years ago
Thanks Woody. That covers the issues well.
An earlier article on a related topic is by Binkin et al. (1992). This looks at the 30-by-30 design with PPS selection of cluster locations and proximity sampling within clusters. This, like SMART, is a modified EPI design. A lot of work was done on the EPI design and that provides a rich evidence base covering a large number of child survival indicators.
It is important to realise that the design effect (DEFF) can be altered by design. We can, for example, reduce design effects by having more and/or smaller clusters. DEFF is about accounting for lost variance. A lot of variance is lost when a proximity sample is used. A move to a simple random sample within clusters (as in later SMART guidelines) will help to reduce DEFF. A better and simpler approach is to use a within-cluster sample design that captures variance by implicit stratification. We can, for example, modify the proximity sample to take every third house (EPI3) or every fifth house (EPI5). We can also split the sample by taking small samples from different parts of a sampled community (segmentation) - these can be thought of as spatially selected clusters within clusters. RAM and S3M samples often use a combination of segmentation and EPI3/EPI5 for within-cluster sampling.
These sorts of modifications to within-cluster sampling methods can go a long way towards improving the statistical efficiency of cluster samples while maintaining their cost efficiency. They are not a panacea, as variance loss can also be due to the cluster selection method. For example, PPS will tend to select larger communities, making it a poor choice for some indicators. Spatial stratification can help with this. The main thing is to have a large enough sample of clusters. A good minimum for a SMART-type survey is about 30 clusters. A good minimum for a survey using a spatial sample of clusters with segmentation and EPI3/EPI5 within clusters is 16 - since we typically take 3 or 4 sub-clusters from within each cluster, this gives about 48 to 64 very small clusters.
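As a rough check of the cluster counts above, and of why segmentation helps: 16 primary clusters with 3 or 4 segments each give 48 to 64 small clusters, and (treating each segment as a cluster of its own, a simplification) the smaller cluster sizes translate into a lower DEFF. The ICC of 0.05 and the household numbers are hypothetical:

```python
# Hypothetical comparison at ICC = 0.05: one large proximity cluster per site
# versus 4 small segments per site, treated as clusters in their own right.

icc = 0.05

def deff(cluster_size, icc):
    # DEFF = 1 + (cluster size - 1) * ICC
    return 1 + (cluster_size - 1) * icc

primary_sites = 16
segments_per_site = (3, 4)
print([primary_sites * s for s in segments_per_site])  # 48 to 64 sub-clusters

# 24 households per site taken as one cluster, versus as 4 segments of 6
print(round(deff(24, icc), 2), round(deff(6, icc), 2))
```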
I hope this is of some use to somebody.
Answered:
7 years ago