I wonder if anyone can advise me or direct me to a guideline/manual on how to set up a sentinel site community nutrition surveillance system. This includes sample size, how to determine the number of sentinel sites, selection of sentinel sites, and selection of subjects. Regards
A few years ago I designed a surveillance system for Save the Children (HUMS) which was taken up by ACF (Listening Posts) in a couple of countries. I have zipped up the documentation (and other material) I made for Save the Children. You can download it from [url=http://www.brixtonhealth.com/HUMS.LP.ENN.zip]here[/url]. You may find this useful. Perhaps SC-UK and ACF can add something here about their experiences with the system.
Mark Myatt
Technical Expert

Answered:

10 years ago
As just written by Mark, we (ACF) implemented the "Listening Post" methodology in Liberia and Burkina Faso, and currently in the Central African Republic. This sentinel site surveillance methodology includes:
- a random selection of 96 children < 2 years (by CSAS + EPI5)
- monthly follow-up measurements (MUAC and weight) among the same children selected at baseline (a longitudinal approach), with top-up replacement of children who age out
- additional indicators, which can be added depending on what you want to monitor (e.g. diarrhoea, diet diversity)

Our experience is quite successful and has produced reliable data, with relatively precise estimates (+/- 10%). Challenges include:
- the interpretation of the data (especially during the first year, as you have no comparison trend)
- the definition of thresholds to "alert" stakeholders/authorities
- keeping the surveillance system alive, even if it is not directly linked with some kind of intervention.

Costs are relatively small, but the technicality of the system requires a high level of supervision. For more information, do not hesitate to contact me (mal@actioncontrelafaim.org) Best, Mathias
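To make the monthly round and the precision figure above a little more concrete, here is a minimal sketch, assuming each Listening Posts-style round produces a list of MUAC readings for the same cohort of children. This is not the actual Listening Posts tooling: the MUAC case definition, the readings and the simple binomial confidence interval are all assumptions, and it ignores both the design effect from CSAS + EPI5 sampling and the extra precision gained from following the same children across rounds.

[code]
# Illustrative sketch only, not the Listening Posts tooling. The MUAC cutoff,
# the readings and the simple binomial CI are assumptions; the design effect
# from CSAS + EPI5 sampling is ignored.
import math

GAM_MUAC_CUTOFF_MM = 125  # children with MUAC < 125 mm counted as cases here

def round_prevalence(muac_readings_mm):
    """Proportion of measured children below the MUAC cutoff, with an
    approximate 95% CI from the normal approximation to the binomial."""
    n = len(muac_readings_mm)
    cases = sum(1 for m in muac_readings_mm if m < GAM_MUAC_CUTOFF_MM)
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

# Example round: 96 children measured in one month (values are made up)
march_round = [132, 118, 141, 127, 120, 135] * 16   # 96 readings, in mm
p, lo, hi = round_prevalence(march_round)
print(f"MUAC < 125 mm: {p:.1%} (approx. 95% CI {lo:.1%}-{hi:.1%}), n = {len(march_round)}")
[/code]

With n = 96 a simple interval of this kind has a half-width of roughly ten percentage points at moderate prevalences, which is broadly in line with the +/- 10% figure quoted above.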
Mathias Altmann

Answered:

10 years ago
Hi there, Some more info from Action Against Hunger - ACF-USA: we developed a small sample survey surveillance system (based on the SMART methodology) back in 2009/2010, through a collaboration with UNICEF and the Centers for Disease Control and Prevention (CDC, Atlanta, USA). The system is based on conducting anthropometric surveys for the sake of representativeness and direct comparability with data collected in previous surveys, i.e. to establish trends of GAM/SAM and monitor aggravating factors linked to malnutrition.

A multistage cluster sampling approach is used, as for most anthropometric surveys, following the SMART methodology. It uses a 25 cluster by 12 household design. Although "small", this sample size ensures that a change in acute malnutrition of at least 4% can be detected between two rounds of surveys, using the CDC "2 surveys" calculator. Also, the CDC "probability calculator" can be used to present results / give a GAM threshold with an 85% probability of being exceeded (relevant for recommendation purposes).

These small-scale surveys can be run 2 or 3 times a year (during key seasonal events) and, apart from nutrition indicators, a set of key indicators on health, WASH, food security, child care and feeding practices can also be collected for the purpose of early warning. ACF teams in Uganda and in Kenya have been using this surveillance system for several years now; it has generated a lot of information and allowed trends of wasting over time/seasons (among other indicators) to be established. Recent discussions with the ACF Kenya and Uganda teams revolve around various interesting topics such as i) reducing the number of indicators collected on a regular basis (to use only those relevant to early warning); ii) integrating this surveillance system into existing national early warning systems; iii) handing over its management to local authorities, etc.

For more information: CDC calculators: [url]http://www.cdc.gov/globalhealth/gdder/ierh/researchandsurvey/calculators.htm[/url] On the ACF-USA website, one can find surveillance reports as well as the results of a meta-analysis of the Uganda surveillance data recently done jointly with the Government of Uganda: [url]http://www.actionagainsthunger.org/media/technical-surveys[/url]
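As a rough companion to the "probability calculator" idea mentioned above (this is not the CDC tool itself), the sketch below approximates the probability that true GAM prevalence exceeds a threshold, given a survey estimate and its 95% confidence interval, by treating the estimate as normally distributed. The estimate, interval and threshold in the example are invented.

[code]
# Rough illustration only; this is NOT the CDC calculator. It approximates the
# probability that true prevalence exceeds a threshold, treating the survey
# estimate as normally distributed with the reported 95% CI. Figures invented.
import math

def prob_exceeds(estimate, ci_low, ci_high, threshold):
    """P(true prevalence > threshold), assuming a normal sampling distribution
    whose 95% confidence interval is (ci_low, ci_high)."""
    se = (ci_high - ci_low) / (2 * 1.96)       # back out the standard error
    z = (threshold - estimate) / se
    return 0.5 * math.erfc(z / math.sqrt(2))   # standard normal upper tail

# e.g. GAM estimated at 13.2% (95% CI 10.1-16.3%): how likely is it above 10%?
print(f"P(GAM > 10%) ~ {prob_exceeds(0.132, 0.101, 0.163, 0.10):.0%}")
[/code]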
Cecile Basquin

Answered:

10 years ago
Andrew Hall (Save the Children) sent a link to [url=http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0062767]this recent paper[/url] that may be of interest.
Mark Myatt
Technical Expert

Answered:

10 years ago
Dear Mark, Thank you very much for your usual support.
Anonymous

Answered:

10 years ago

Hello everyone

I am currently a nutritionist and dietitian in DR Congo and plan to organize a training on the management of acute malnutrition.
Could you help me obtain the Harmonized Training Package (HTP): Resource Material for Training on Nutrition in Emergencies (in French)?

Anonymous

Answered:

10 years ago
A few points ...

Sample size: n = 96 is a minimum. If you can do more then, within reason, you should do more (n = 132 has been used).

Age-group: A more restrictive and younger age-group is used because (1) the younger age-group is more susceptible to GAM and SAM, (2) a single narrow age-group simplifies the weight-gain analysis, (3) the younger age-group is in the "first 1000 days", and (4) a narrow age-group means a smaller population, which means a larger sampling fraction. Going for the 6-59 month age-group will, I think, reduce the sensitivity of the surveillance system and complicate the analysis. A good reason to go with the 6-59 month age-group is if you have been doing SMART periodically (i.e. several times a year) for several years. Even then you could re-analyse the SMART data for the narrower age-group. A larger sample size (i.e. larger than n = 96) will be required.

Top-up workload: I will leave this for people who have run LP to respond to. It does not seem to be a problem. The numbers retiring and lost at each round should be quite small. Note that you do need to top up, as not doing so results in an ageing cohort which will, over time, move out of risk.

Alternative follow-up options: The longitudinal approach means we can do more with a small sample size because sampling variation between rounds is minimised. If you use a repeated cross-sectional sample approach then you will need a larger sample size to cut through the noise introduced by sampling variation. The sample size in Cecile's post above (n = 300) looks a bit small to me when using a classical estimator of prevalence, but I am sure that CDC will have got that right. One issue with a repeated cross-sectional sample approach is that sick children tend to be hidden from surveys in some locations, and this leads to SAM kids being excluded. This is not a big issue for surveillance as we do not worry too much about a consistent bias.

Bias: The observer effect (see the article in the post starting with "Andrew Hall (Save the Children) sent ..." above) is an issue. The issue of non-consistent bias that is raised is interesting. I wonder when (if) this stabilises. If it stabilises quickly then we can, I think, discount it, as we are not usually concerned about consistent bias in surveillance systems. If it does not stabilise then periodic change of sites is (as the article suggests) an option. I am not convinced that the Heisenberg Uncertainty Principle (mentioned in the article) is the correct model; the Hawthorne Effect is probably a more useful model here. In the UK NHS we have the BOHICA effect, which is what happens when we rely on the Hawthorne Effect to continually increase productivity. BOHICA stands for "Bend Over Here It Comes Again" and is a florid term for the tendency of observer effects to fade over time. I think the big risk is poorly considered (or gaming) intervention based on surveillance data. This occurs when the sentinel sites get the most attention because they are the only sites for which we have data, or because intervening there makes the problem disappear by legerdemain.

In summary: I think you will be OK with your proposed method but that you will need to increase the sample size at each round. I think that you could use a sample size of n = 192 collected from m = 16 clusters and use a PROBIT estimator for prevalence. This approach has been used in Sudan.

I hope this is of some use.
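For readers unfamiliar with the PROBIT estimator mentioned in the summary, the sketch below illustrates the basic idea: fit a normal distribution to the anthropometric measurements and take the area below the case-defining cutoff as the prevalence estimate, rather than counting cases directly. This is an illustration only, not the implementation used in Sudan; the simulated data, and the choice of WHZ with a -2 z-score cutoff, are assumptions for the example.

[code]
# Illustration of the PROBIT idea, not the implementation used in Sudan.
# The WHZ indicator, the -2 cutoff and the simulated data are assumptions.
import math
import random
import statistics

def probit_prevalence(measurements, cutoff):
    """Fit a normal distribution to the measurements (e.g. WHZ) and return the
    estimated proportion falling below the cutoff (e.g. -2 z-scores for GAM)."""
    mu = statistics.mean(measurements)
    sd = statistics.stdev(measurements)
    z = (cutoff - mu) / sd
    return 0.5 * math.erfc(-z / math.sqrt(2))  # Phi(z): standard normal CDF

# e.g. n = 192 simulated WHZ values centred on -0.8 with SD 1.0
random.seed(1)
whz = [random.gauss(-0.8, 1.0) for _ in range(192)]
print(f"PROBIT GAM estimate (WHZ < -2): {probit_prevalence(whz, -2.0):.1%}")
[/code]

Because the estimate uses the whole distribution of measurements rather than only the handful of children below the cutoff, it can give useful precision at the smaller sample sizes (e.g. n = 192) discussed above.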
Mark Myatt
Technical Expert

Answered:

10 years ago
From Edward Kutondo: Hi. Sentinel sites are often selected purposively, in view of vulnerability. Ideally you need to detect changes and follow trends, hence you select a few sites that will be able to achieve this. The indicators need to be sensitive to change. Small sample sizes are preferable, e.g. 30 households per site - this has been used in South Sudan, Uganda and Kenya. However, note that random methods are highly recommended: in this case the study subjects are selected using simple or systematic sampling methods, depending on the characteristics of the population. Below are links for additional information. [url]http://www.unicef.org/nutritioncluster/files/M10P2.doc[/url] [url]http://www.pophealthmetrics.com/content/10/1/18[/url] Edward K.
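As a small illustration of the systematic sampling Edward mentions for choosing, say, 30 households per site, here is a minimal sketch assuming a numbered household list is available; the list size and random seed are arbitrary.

[code]
# A small sketch of systematic sampling of households from a numbered list.
# The list size, the target of 30 households and the seed are illustrative.
import random

def systematic_sample(n_households, sample_size=30, seed=None):
    """Return 1-based household positions chosen with a fixed sampling
    interval and a random start."""
    rng = random.Random(seed)
    interval = n_households / sample_size
    start = rng.uniform(0, interval)
    return [min(int(start + i * interval) + 1, n_households)
            for i in range(sample_size)]

# e.g. a sentinel site with a household list of 412 households
print(systematic_sample(412, sample_size=30, seed=42))
[/code]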
Tamsin Walters
Forum Moderator

Answered:

10 years ago
Thanks Edward for the links. A blessed weekend from Uganda. Samuel
Sam Oluka

Answered:

10 years ago