In looking at materials on CSAS, there are several references to S3M but I can't find anything that actually explains it. Does anyone have any resources for that?
S3M is a development of CSAS. It is a wide-area mapping method. We (i.e. VALID International, Brixton Health, CONCERN, UNICEF) have experience with using S3M for CMAM coverage and IYCN indicators in two Sahel countries. We have been working on adapting more indicators for use with the S3M sampling framework and have recently completed a pilot in a Sahel country. The key issue is that we need indicators that can work with small sample sizes (this allows them to be mapped at fine scales). We've made some progress with (e.g.) the PROBIT method for GAM/MAM/SAM prevalence, survival analysis approaches for continued breastfeeding, and simplified indicators for IYCN, but more work is needed and is ongoing. We now see S3M as a general public health survey method rather than a coverage survey method (although it can be used for that too). We have yet to work out (or to start work on) how to estimate recent-period mortality with S3M.
We are also working on within-PSU sampling methods and have had some success with map-segment-sample techniques (adapted from previous work on trachoma prevalence surveys (Malawi and Viet Nam) and the ASEAN post-Nargis joint assessments in Myanmar which used an S3M-type first-stage sample) in trials in Niger, Ethiopia, and Sierra Leone.
Our experience is that S3M promises to be a cost-effective wide-area mapping method suited to use with multiple indicators. There are a few problems and rough edges at the moment but these are being dealt with. New and modified methods are due to be tested in two pilots.
I am not able, without seeking permissions, to release more detailed information or results from S3M surveys at the moment but …
[url=http://www.brixtonhealth.com/presentationS3M.pdf]Here[/url] is an early presentation.
[url=http://www.brixtonhealth.com/pictureBookS3M.pdf]Here[/url] is a simple words and pictures guide (it includes an example map from an S3M survey).
Both are early works and do not reflect recent developments. They should, however, give you an idea of how the first-stage sample is taken.
I hope this helps.
Mark Myatt
Technical Expert
Answered:
12 years ago

Dear Mark,
Can you please outline the statistical validity of S3M for estimating wasting, underweight, stunting, and some key IYCF indicators such as timely initiation of complementary feeding, adequacy/frequency of complementary feeding, etc.? Thanks
Anonymous
Answered:
11 years ago

I can give you some idea of S3M ...
Overview : The S3M is a cluster sampled survey method (just like SMART) but with clusters selected using a spatial sampling method rather than PPS (as is often done with SMART). Estimates and classifications are made for the overall survey area and over much smaller areas within the overall survey area. The PPS sample is problematic as it tends (by design) to concentrate the sample in the most populous communities. PPS is also problematic if accurate population data is not available. This is typically the case in emergency contexts.
Overall estimates : PPS is a prior weighting scheme. S3M uses a posterior weighting scheme as might be used with a stratified sample using a fixed quota in each stratum. This allows S3M to produce overall estimates with more populous communities receiving higher weighting than less populous communities. The difference between the two approaches is that PPS weights before sampling and S3M weights after sampling. The issue of loss of sampling variation (design effect) is addressed in the within-PSU sample design (see below) and by using appropriate data-analysis techniques. Data from S3M surveys can be analysed using model-based approaches (e.g. as in the "svy" commands in STATA or the Complex Samples module in SPSS). We use blocked and weighted bootstrap estimators because these allow more flexibility in terms of the statistics that can be used with classical methods (e.g. an exact 95% CI on a median - used in analysing HDDS - is extremely difficult using model-based approaches but trivial using the bootstrap). The sample size used for overall estimates is usually forced upon us by the need to make more local estimates and classifications. This means that we tend to have larger overall sample sizes (and better precision) than a SMART-type survey.
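The blocked and weighted bootstrap idea can be illustrated with a minimal Python sketch (this is not the production analysis code; the PSU data, weights, and replicate count are invented for illustration). Blocking means PSUs, not individual observations, are resampled, preserving within-PSU correlation; weighting means PSUs are drawn with probability proportional to a population weight; and an exact CI on a median falls out naturally from the replicate estimates.

```python
import random
import statistics

def blocked_weighted_bootstrap(psu_data, psu_weights, replicates=999):
    """Blocked, weighted bootstrap for a median from cluster-sampled data.

    psu_data    : list of lists, one inner list of observations per PSU
    psu_weights : population weights, one per PSU (posterior weighting)

    Blocking : whole PSUs are resampled, not individual observations.
    Weighting: PSUs are drawn with probability proportional to weight.
    """
    estimates = []
    for _ in range(replicates):
        sampled = random.choices(psu_data, weights=psu_weights, k=len(psu_data))
        pooled = [x for psu in sampled for x in psu]
        estimates.append(statistics.median(pooled))
    estimates.sort()
    lo = estimates[int(0.025 * replicates)]   # percentile 95% CI
    hi = estimates[int(0.975 * replicates)]
    return statistics.median(estimates), (lo, hi)

random.seed(1)
# Hypothetical HDDS-like scores from three PSUs with differing populations
psu_data = [[4, 5, 6, 5, 4], [7, 8, 6, 7, 9], [3, 4, 4, 5, 3]]
psu_weights = [1200, 400, 800]   # e.g. hut counts collected during the survey
est, (lo, hi) = blocked_weighted_bootstrap(psu_data, psu_weights)
```

Note that the same machinery works for any statistic computed on the pooled replicate, which is why the method is more flexible than model-based variance estimators.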
Within-PSU sampling : The map-segment-sample (MSS) technique produces a sample that is much closer to a simple random sample than the proximity sample used by SMART. This leads to lower design effects (better precision) and less bias. Work done during the development and testing of the EPI survey method indicates that the SMART-type proximity sample is not appropriate for variables showing a centre-to-edge gradient or within-community clustering. These include education, pregnancy status, variables relating to child-care, epidemic diseases, socio-economic factors, and variables related to health care. This means that the SMART sample is a poor choice for many indicators. There is nothing to stop anyone using MSS as a within-PSU sampling strategy with SMART.
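The map-segment-sample steps can be sketched as follows (a simplified illustration, not the field protocol: a plain list stands in for the community map, and I assume the segment is chosen with probability proportional to the number of households in it):

```python
import random

def map_segment_sample(households, n_segments, n_sample):
    """Map-segment-sample (MSS) sketch.

    Map     : the community is mapped and its households divided into
              roughly equal geographic segments (contiguous runs of a
              list stand in for mapped segments here).
    Segment : one segment is chosen with probability proportional to
              its size.
    Sample  : households within the chosen segment are selected by
              simple random sampling.
    """
    seg_size = -(-len(households) // n_segments)          # ceiling division
    segments = [households[i:i + seg_size]
                for i in range(0, len(households), seg_size)]
    segment = random.choices(segments,
                             weights=[len(s) for s in segments], k=1)[0]
    return random.sample(segment, min(n_sample, len(segment)))

random.seed(7)
households = list(range(1, 61))   # sixty mapped households, numbered 1..60
picked = map_segment_sample(households, n_segments=4, n_sample=10)
```

Because only one segment need be fully listed, the mapping and enumeration burden stays small while the within-segment draw is a true simple random sample.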
Local estimates / classifications : A key reason to use S3M is to exploit the spatial sample to map indicator values at local levels. The key issue here is sample size. A local area is represented by three PSUs which include data from between one and six neighbouring communities (the exact number here is defined by average village size and the target population). Random (sampling) variation is damped by data smoothing that arises from the sample design. We have developed and tested a number of small sample indicators. These are indicators that are designed to perform well with small sample sizes (by "small" we mean n = 60 to n = 96 total from three PSUs). These are sometimes standard indicators (e.g. FANTA's HDDS). Sometimes they are adaptations of MICS and DHS indicators (e.g. JMP's WASH indicator set, the S3M/RAM IYCF indicator set). Sometimes they are entirely new indicators (e.g. PROBIT for GAM). We now have indicators that cover most applications. Many of these are standard indicators. In May 2013 we plan to pilot a revised IM method for mortality (CMR) estimation. Where we cannot estimate with useful precision we use sequential sampling classifiers with useful accuracy and reliability (as we often do with SQUEAC and SLEAC) with sample sizes of n < 50 total from three PSUs.
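The PROBIT idea mentioned above can be shown in a few lines of Python (the z-scores below are invented; the published method also applies corrections not shown here). Rather than counting cases, which is very noisy at n = 60, a normal distribution is fitted to the anthropometric scores and the prevalence is read off as the probability mass below the case-defining threshold:

```python
import statistics

def probit_gam_prevalence(whz, case_threshold=-2.0):
    """PROBIT-type prevalence estimator (sketch).

    Fit a normal distribution to the WHZ scores and return the
    probability mass below the case-defining threshold. The
    distributional assumption trades a little bias for much lower
    variance at small sample sizes than case-counting.
    """
    mu = statistics.mean(whz)
    sigma = statistics.stdev(whz)
    return statistics.NormalDist(mu, sigma).cdf(case_threshold)

# Hypothetical small sample of weight-for-height z-scores
whz = [-0.3, -1.1, 0.2, -2.4, -0.8, -1.6, 0.5, -0.1, -1.9, -0.6,
       -1.2, 0.1, -0.9, -2.1, -0.4, -1.4, 0.3, -0.7, -1.0, -1.8]
prevalence = probit_gam_prevalence(whz)
```

Every observation informs the fitted mean and standard deviation, so the estimate uses the whole sample even when only a handful of children are actual cases.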
This does not really answer your question. Perhaps a look at the pedigree of the components might ...
Sampling on a hexagonal grid : This is a modification of the standard CSAS / quadrat sampling method to improve evenness of sampling. It has been used in the ASEAN PONJA assessments and in a number of S3M surveys in Niger, Sudan, and Ethiopia with good results.
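Generating such a grid is straightforward; here is a small Python sketch (the bounding box and spacing are invented; in practice each grid point would be represented by the community closest to it):

```python
import math

def hex_grid(x_min, x_max, y_min, y_max, d):
    """Sampling points on a hexagonal (triangular) grid with spacing d.

    Alternate rows are offset by d/2 and rows are d * sqrt(3)/2 apart,
    so every point is equidistant from its six nearest neighbours -
    a more even spatial spread than a square quadrat grid.
    """
    points = []
    row_height = d * math.sqrt(3) / 2
    row = 0
    y = y_min
    while y <= y_max:
        x = x_min + (d / 2 if row % 2 else 0)   # offset alternate rows
        while x <= x_max:
            points.append((x, y))
            x += d
        y += row_height
        row += 1
    return points

grid = hex_grid(0, 10, 0, 10, d=2)
```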
Use of spatial tessellation techniques : Voronoi polygonisation is a common technique dating back three or four hundred years. Its first epidemiological use was in Snow's groundbreaking work on cholera. The use of triangulated irregular networks is a common technique in geostatistics and spatial epidemiology.
Posterior weighting is a standard survey technique. It is described in all basic textbooks on survey design.
Bootstrapping is a modern (i.e. non-classical) statistical technique. We have extended the general approach to the analysis of data from cluster sampling by borrowing techniques (i.e. blocking) from time-series analysis. Weighting is achieved by a standard "roulette wheel" algorithm. We tested our approach using the same data (Niger IYCF data) analysed by us using the bootstrap and by CDC using SAS. Results agreed to 4 decimal places.
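For the curious, the "roulette wheel" algorithm itself is just fitness-proportionate selection over cumulative weight totals; a minimal Python version (with invented PSU labels and weights) looks like this:

```python
import bisect
import itertools
import random

def roulette_wheel(items, weights, k, rng=random):
    """Standard "roulette wheel" selection.

    Each item owns a slice of the wheel proportional to its weight.
    A uniform spin on [0, total) lands in one slice, found by binary
    search over the cumulative totals. Repeated k times, with
    replacement.
    """
    cumulative = list(itertools.accumulate(weights))
    total = cumulative[-1]
    return [items[bisect.bisect_right(cumulative, rng.random() * total)]
            for _ in range(k)]

random.seed(42)
# Three PSUs with hypothetical population weights; "A" should be drawn
# roughly 80% of the time over many spins.
draws = roulette_wheel(["A", "B", "C"], weights=[800, 150, 50], k=1000)
```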
The MSS within-PSU sampling method is taken from the literature associated with the testing and development of the EPI survey design method. MSS was used and tested in ASTRA trachoma surveys.
Sequential sampling classifiers use standard methods. Sample sizes are found using computer-based simulations of sampling from finite populations.
The small sample IYCF indicator set is developed from DHS indicators.
The PROBIT indicator was developed on the recommendation of a WHO expert committee. The method has been tested and published. We now use an improved estimator. This technique is currently being tested on another database of SMART surveys by CDC.
The IM method was developed and tested by FANTA and LSHTM. We improve case-finding sensitivity by using multiple informants. We are concerned about a bias due to small numbers but the bias will be consistent and so allow mapping / identification of mortality hotspots.
S3M is (like many survey methods) a combination of well understood and tested components.
The proof of the pudding is in the eating ...
(1) You are welcome to join us as an observer in any of our upcoming S3M surveys.
(2) You are welcome to contact any of our partners to discuss their experiences with the method.
(3) You could try one yourself.
BTW : I am one of the developers of S3M so I am probably not the best person to ask for an objective view. I am happy to meet with your team or with statisticians of your choosing to discuss this work. I have not really thought that the validity of the overall method was an issue ... I have been more concerned with practicability and the validity / utility of specific indicators.
Mark Myatt
Technical Expert
Answered:
11 years ago

Just a small clarification. Mark says "The PPS sample is problematic as it tends (by design) to concentrate the sample in the most populous communities." I find this statement rather vague and somewhat misleading. Yes, selecting primary sampling units (PSUs) probability proportional to size does give greater selection probability to larger PSUs. This does not necessarily mean that PSUs in larger towns and cities have a greater likelihood of being selected. The relative likelihood of selection depends entirely on the size of PSUs in these larger population groupings relative to the size of PSUs in smaller population groupings, such as rural villages. In fact, many types of PSUs, such as census units, are often of remarkably uniform size, both in urban and rural areas.
Moreover, if the size of each cluster is the same (that is, if you select the same number of households in each selected PSU), selection of PSUs probability proportional to size in the first sampling stage gives every household in the population exactly the same probability of selection, thus removing the necessity for assigning statistical weights to each cluster during data analysis. Now if you are already calculating statistical weights because of stratified sampling with unequal probability, this doesn't add too much work. However, I have heard (and someone please correct me if I'm wrong) that statistical precision declines with an increase in number of different statistical weights used.
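The self-weighting property described above is easy to verify arithmetically. In this sketch (hypothetical PSU populations; expected selection probabilities, ignoring without-replacement corrections), the stage-one PPS probability and the stage-two quota probability multiply so that the PSU size cancels:

```python
from fractions import Fraction

# Hypothetical PSU populations; m PSUs are drawn PPS and a fixed quota
# of q households is taken in each selected PSU.
populations = [1200, 450, 300, 900, 150]
total = sum(populations)
m, q = 2, 30

# Overall selection probability for a household in a PSU of size pop:
#   stage 1 (PPS)  : m * pop / total
#   stage 2 (quota): q / pop
# The pop terms cancel, so every household has probability m*q/total.
overall = [Fraction(m * pop, total) * Fraction(q, pop) for pop in populations]
```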
In short, selection of PSUs using probability proportional to size produces a sample which is representative of the entire population with respect to the size of the village or town in which the households are located. It does NOT produce a biased or disproportionate sample unless something is done incorrectly.
Bradley A. Woodruff
Technical Expert
Answered:
11 years ago

Sorry to have been vague; I had no intention to mislead. I was aware that the post was long.
To clarify ...
In our field we are really talking about adaptations / derivations of the WHO Expanded Programme on Immunisation (EPI) coverage survey method when we talk about "SMART" or "30-by-30" or "PPS".
The EPI method uses a two-stage cluster sampling approach which begins by dividing a population into clusters for which population estimates are available. A subset of clusters is selected in the first sampling stage. The probability of a particular cluster being selected is proportional to the size of the population in that cluster.
Clusters with large populations are more likely to be selected than clusters with small populations. This sampling procedure, called probability proportional to size (PPS), helps to ensure that individuals in the program area have an equal chance of being sampled when a quota sample is taken in the second stage of the survey.
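The standard first-stage mechanics can be sketched in a few lines of Python (the cluster populations below are invented; this is systematic PPS, the variant commonly used in EPI-style surveys):

```python
import itertools
import random

def pps_systematic(cluster_pops, n_clusters, rng=random):
    """Systematic PPS selection of clusters (EPI-style first stage).

    Cumulate the cluster populations, set the sampling interval to
    total / n_clusters, draw a random start in [0, interval), and pick
    the cluster whose cumulative range contains each of start,
    start + interval, start + 2*interval, ...  A cluster larger than
    the interval can be picked more than once.
    """
    cumulative = list(itertools.accumulate(cluster_pops))
    interval = cumulative[-1] / n_clusters
    start = rng.random() * interval
    selected, idx = [], 0
    for k in range(n_clusters):
        target = start + k * interval
        while cumulative[idx] <= target:
            idx += 1
        selected.append(idx)
    return selected

random.seed(3)
pops = [500, 120, 80, 950, 60, 300, 40, 700]   # hypothetical cluster populations
chosen = pps_systematic(pops, n_clusters=4)
```

Note how the cumulative-total construction makes large clusters own wide ranges, which is exactly what gives them their higher selection probability.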
In recognition of the difficulties of drawing a random sample in many developing countries, the EPI method (and derivatives such as SMART) uses a non-random sampling method in the second stage. The most commonly used second stage sampling method is a proximity technique. The first household to be sampled is chosen by selecting a random direction from the centre of the cluster, counting the houses along that route, and picking one at random. Subsequent households are sampled by their physical proximity to the previously sampled household. Sampling continues until a fixed sample size has been collected. Sampling is simple and requires neither mapping nor enumeration of households. It is, consequently, both quicker and cheaper than using simple random sampling in the second stage of the survey.
All this supports the use of the EPI method. It does, however, have problems:
(1) The PPS process should result in a self-weighted sample but it cannot be relied upon to do so if estimates of cluster population sizes are inaccurate. Lack of accurate population data may frequently be the case in emergencies. Also, when estimating (e.g.) coverage of a selective feeding program such as CMAM, the population data that we should use is the number of cases in each potential PSU (not the population at each PSU). Prevalence of a condition such as SAM, which is strongly influenced by infectious phenomena that cluster spatially, will itself cluster spatially. Unless we know the pattern of prevalence and total population in advance we will not be able to take a properly supported PPS sample.
(2) PPS locates the bulk of data-collection in the most populous communities. Woody seems to suggest that this is a wholly good thing. When investigating coverage (e.g.), PPS may leave areas of low population density unsampled (i.e. those areas consisting of communities likely to be distant from health facilities, feeding centres, and distribution points). This may cause surveys to evaluate coverage as being adequate even when coverage is poor or non-existent in areas outside of urban centres.
(3) With the exception of the first child, none of the observations in the within-cluster sample are selected using an equal probability selection method (a quick "experiment" with geometry will show that under all but restrictive conditions the method is not EPSEM even for the first child). This, together with the fact that the within-cluster sample size is usually too small to estimate or classify in any cluster with reasonable accuracy and precision, means that the EPI method can return only a single estimate of coverage or prevalence, even when coverage or prevalence is spatially inhomogeneous. This is an important limitation since identifying areas with poor coverage (or high prevalence) is an essential step towards improving program coverage and, hence, program impact.
(4) The within-PSU sampling method described above is known to produce biased estimates for a wide range of indicators that we might be interested in. Published evidence suggests that this is not a problem for GAM.
(5) The within-PSU sampling method results in a large loss of sampling variation. This means large design effects and small effective sample sizes. This is only a real issue when measurement costs are large relative to sampling costs or when indicators relating to subsamples (e.g. as in some IYCF indicator sets) are used.
(6) PPS does not attempt to take a spatially representative sample. PPS (unlike CSAS, S3M, or list-based spatially stratified samples) does not guarantee an even spatial sample. In fact, it does the opposite in that it will tend to select larger communities which will tend to be clustered along roads, rails, rivers, natural harbours, &c. The emphasis is on size of village not location of village. This means that PPS cannot be used to map phenomena in any detail.
All of these problems are there even when everything is done correctly.
Proponents of PPS often criticise alternative schemes as not being representative of a population (I don't think Woody is doing this). These criticisms are not well founded as a spatially representative sample can be made population representative by the use of posterior weighting (the opposite transformation is not possible). The number of different statistical weights used by S3M and PPS is the same. The difference is when the weights are used (i.e. before or after sampling). In some cases the weights used in S3M will be more accurate as they can be collected or confirmed as part of the survey process. In many cases the weights used will be identical. The overall analysis of an S3M sample produces a population representative result. BTW ... the same approach is used in the RAM method.
If you look at the list of problems above you will see that S3M (and similar methods) are designed to address these problems:
(A) No population data are needed in advance. Population weights can use (e.g.) hut counts collected during the survey.
(B) The spatial sample is agnostic with regard to population so small and large communities are sampled. If there are more small than large communities in the survey area then more small communities will be sampled.
(C) The map-segment-sample (MSS) technique more closely approximates EPSEM than the proximity technique. MSS is an innovation that could be adopted by SMART today.
(D) MSS does not produce biased estimates.
(E) as (C) above.
(F) S3M is designed as a mapping method. [url=http://www.brixtonhealth.com/S3M.IYCF.ExampleMap.png]Here[/url] is an example from an S3M survey.
I hope this is less vague and not misleading.
Mark Myatt
Technical Expert
Answered:
11 years ago

Hi Mark,
I saw this post some time back and sent a question which didn't go through, so I am posting again, though with a different question from the earlier one.
I have tried getting detailed information on the spatial component of the S3M sampling process; however, not much is explained. I have looked at this http://www.validinternational.org/coverage/workshop/day_two_files/caseNigerS3M.pdf .
Kindly explain how the geospatial sampling process is implemented, step by step, until you have the required sample - that is, how the quantitative sample is derived from the spatial one. Given the massive land areas and the poorly georeferenced data points of most countries, and hence the distribution of the populace around the grids generated, how are all these factors addressed? Also, what is the quality of the geospatial data sets used so far and how were they generated? I know there is a lot of work/discussion on quality assurance issues in authoritative and volunteered geospatial data, and in spatial data created by non-spatial communities; how is this ensured in the humanitarian context, with its evident challenges in generating sampling data - in this case spatial data? (Mobile communities we can consider later.)
Anonymous
Answered:
10 years ago