We are planning a KAP baseline survey which will be followed by an endline KAP survey. When calculating the sample size, should I use the following formula?

n = D * [(Zα + Zβ)^2 * (P1 * (1 - P1) + P2 * (1 - P2)) / (P2 - P1)^2]

KEY:
n = required minimum sample size per survey round or comparison group
D = design effect (assumed in the following equations to be the default value of 2 - see Section 3.4 below)
P1 = the estimated level of an indicator measured as a proportion at the time of the first survey or for the control area
P2 = the expected level of the indicator either at some future date or for the project area, such that the quantity (P2 - P1) is the magnitude of change it is desired to be able to detect
Zα = the Z-score corresponding to the degree of confidence with which it is desired to be able to conclude that an observed change of size (P2 - P1) would not have occurred by chance (α - the level of statistical significance)
Zβ = the Z-score corresponding to the degree of confidence with which it is desired to be certain of detecting a change of size (P2 - P1) if one actually occurred (β - statistical power)
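As a minimal sketch of this formula (not part of the original post), assuming a two-sided 5% significance level and 80% power; the function name and the example change from 35% to 70% are illustrative:

```python
import math
from scipy.stats import norm

def minimum_sample_size(p1, p2, alpha=0.05, power=0.80, deff=2.0, two_sided=True):
    """n = D * (Z_alpha + Z_beta)^2 * [P1(1 - P1) + P2(1 - P2)] / (P2 - P1)^2"""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    n = deff * (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
    return math.ceil(n)

# Illustrative example: detect a change from 35% to 70%
print(minimum_sample_size(0.35, 0.70))  # about 57 per survey round under these assumptions
```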
I have two answers to this ...

First answer:

It does not really matter which sample size you use, as a KAP survey is an extremely unreliable instrument. The WHO (e.g.) recommends that KAP surveys NOT be used. My experience with them in investigating barriers to the uptake of cataract surgical services is that they tend to reflect pre-existing biases of the researcher. It used to be thought that they were safe to use in "before and after" studies. The argument was "We know this is biased, but if we use the same design and the same instrument then the before bias is the same as the after bias. We have controlled for bias so we can make valid comparisons". This argument holds for some epidemiological designs (notably surveillance systems) but it does not hold for KAP surveys. The population can become very adept at regurgitating program messages and pleasing the surveyor even when there has been no change in practice. There has been a change in knowledge, but that is in acquiring the knowledge of what YOU want to hear. In short, KAP surveys are unreliable and doubly unreliable when used in a "before and after" comparison. I advise semi-quantitative work like PRA or RAP.

Second answer:

It is difficult to read the formula that you present (it looks like you copied and pasted it from some other document and some of the characters didn't get copied correctly - you should check this in future using "Preview" before "Submit now"). It appears to be mathematically correct but is, in my opinion, philosophically suspect. In practical public health interventions we are seldom interested in detecting a difference for the sake of detecting a difference. The term "significant" is not about p-values but about whether the difference is large enough to make a significant difference in the target population.

My approach to this would be to undertake a survey to find out current practice and set a target for change. I would, at a later date, come back to see if that target has been met. That is a very different type of investigation and requires two simple surveys of relatively small sample size. For example ... if in the first survey, with a sample size of 96, I found the indicator proportion to be 34.8% (95% CI = 24.9%, 44.8%) and hoped after two years to have that above 70%, I would just survey again. I'd want a sample size for the second survey that will estimate a proportion with a 95% CI on a proportion of 70% that does not overlap the 95% CI of the first survey (NOTE: This is a simplification but this method works well enough in practice). The sample size required will be quite small, but I'd probably want more precision than the minimum sample size would allow. I'd also double all sample sizes to account for cluster sampling if I were using a cluster design.

The sample size calculation is:

n = [p * (1 - p)] / e^2

where:

p = proportion
e = required standard error

The required standard error can be calculated as:

e = half-width of 95% CI / 1.96

For a 95% CI of +/- 10% this would be:

e = 0.1 / 1.96 = 0.051

Applying this ... the first survey uses p = 50% (0.5) as this gives the largest sample size required:

n = [0.5 * (1 - 0.5)] / 0.051^2 = 96

If (as above) the first survey found 34.8% (95% CI = 24.9%, 44.8%) and we expected to double this, we might use the same sample size or calculate one using the same desired standard error:

n = [0.7 * (1 - 0.7)] / 0.051^2 = 81

Sample sizes would be doubled for a cluster sampled survey.
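As a minimal sketch of the precision-based calculation just described (not part of the original reply; the function name is illustrative and 1.96 is the assumed 95% z-value):

```python
def precision_sample_size(p, half_width, z=1.96):
    """Sample size to estimate a proportion p with a given 95% CI half-width."""
    e = half_width / z            # required standard error
    return p * (1 - p) / e ** 2

print(precision_sample_size(0.5, 0.10))  # worst-case p = 0.5 -> about 96
print(precision_sample_size(0.7, 0.10))  # at the 70% target -> about 81
```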
You may have spotted something familiar about n = 96 and a design effect of 2.0 with cluster sampling. With 30 clusters we have a cluster sample size of:

n.cluster = (96 * 2) / 30 = 6.4

We could choose 6 or 7. It is best to choose 7 because (6 * 30) < (96 * 2) and the larger sample size gives a little more precision. So the sample size is:

n = 30 * 7 = 210

This design and sample size is the EPI design ... the survey method we use to estimate the coverage of vaccination programs. Good enough for many other interventions. I hope this helps.
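A minimal sketch of the cluster arithmetic above (not part of the original reply), assuming 30 clusters and a design effect of 2.0 as in the text:

```python
import math

def per_cluster_size(n, deff=2.0, clusters=30):
    """Per-cluster sample size, rounded up so the total is not below n * deff."""
    return math.ceil(n * deff / clusters)

m = per_cluster_size(96)   # (96 * 2) / 30 = 6.4 -> round up to 7
print(m, 30 * m)           # 7 per cluster, 210 in total
```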
Mark Myatt
Technical Expert

Answered: 13 years ago
Dear Dr. Mark Myatt,

Thank you so much for the very clear answer, as always; much appreciated. My apologies for the copy-paste error. I got the formula from a FANTA publication called the Sampling Guide (see link below). The formula is on page 9.

[url]http://www.fantaproject.org/downloads/pdfs/sampling.pdf[/url]

I am really surprised to learn that WHO does not recommend the use of KAP surveys, as I found a guide from the WHO/Stop TB Partnership on developing KAP surveys (link below) when I was looking for references on KAP surveys online. Maybe WHO's stand on KAP surveys is a recent one?

[url]http://www.tbtoolkit.org/assets/0/184/286/074e7ce8-e0dd-4579-a26f-29ac680154ca.pdf[/url]

Would it be possible for me to get some references for PRA or RAP?

Thank you in advance and best regards.
Anonymous

Answered: 13 years ago
Thank you for your kind comments.

The formula presented in the FANTA document is standard and may be found in many basic statistics textbooks. I'm not convinced of the utility of the assumed approach for the task of program management and assessment, as it tends to muddy the issue of statistical versus public health significance. If you like the significance testing approach and you are expecting to see an improvement after intervention (i.e. the change of interest is in one direction) then you should probably calculate a sample size for a "single-sided" test. In this case a different formula would be used ... the one you quote is for a magnitude of difference in either direction.

The WHO's position on KAP surveys is long standing ... The earliest that I have on my shelves is MHN/PSF/94.3 "Qualitative Research for Health Programs" from 1994 (and, IMO, this is a very good manual). The advice is "KAP surveys have been associated with a number of problems and should be used cautiously". This is about as far as the WHO ever goes in criticising survey methods and is best interpreted as "don't do it".

A good early report on KAP failure is:

Stone L, Campbell JG. The use and misuse of surveys in international development: An experiment from Nepal. Human Organization, 43(1):27-37, 1984.

Here KAP is compared to more qualitative techniques and is found to give very wrong answers. I'm not surprised that one arm of WHO ignores what another branch recommends and that KAP surveys come and go. I do not follow KAP work and it might have been shown that KAP surveys work well enough in the context of TB control programs ... but I sincerely doubt it.

Have a look at:

[url]http://guweb2.gonzaga.edu/rap/[/url]
[url]http://www.rapidassessment.net/[/url]

for an introduction to RAP.
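On the "single-sided" point above, a minimal sketch (not part of the original reply; the 80% power and the variable names are illustrative assumptions) of how the one- and two-sided z-values compare:

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
z_two_sided = norm.ppf(1 - alpha / 2)  # about 1.96
z_one_sided = norm.ppf(1 - alpha)      # about 1.64
z_beta = norm.ppf(power)               # about 0.84

# The sample size scales with (Z_alpha + Z_beta)^2, so at these settings a
# one-sided test needs roughly 20% fewer subjects than a two-sided test.
print(((z_one_sided + z_beta) / (z_two_sided + z_beta)) ** 2)  # about 0.79
```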
Mark Myatt
Technical Expert

Answered: 13 years ago
Thanks for this interesting question and reply regarding KAP surveys. In most of ACF-Spain's missions we are promoting Infant and Young Child Feeding (IYCF) practices, and we use the indicators and methods of measurement proposed by WHO in the recently released (2010) manual:

[url]http://www.who.int/nutrition/publications/infantfeeding/9789241599290/en/index.html[/url]

The manual includes some "sampling precisions" in an annex. I hope this helps.
Elisa Dominguez

Answered: 13 years ago
I want to study knowledge of infant and child practices among women aged 15-45 years, with respect to their educational level. Please let me know the sample size.
Anonymous

Answered: 11 years ago

Please help me, sir. I am doing descriptive research related to knowledge and attitude on lifestyle modification and prevention of complications of stroke among hyperte...

Sruthi

Answered: 4 years ago

How do we calculate the sample size in a KAP study? How do we choose the proportion value?

Anonymous

Answered: 1 year ago