
DEPARTMENT OF HEALTH & HUMAN SERVICES

Public Health Service

Centers for Disease Control and Prevention

National Center for Health Statistics

3311 Toledo Road

Hyattsville, Maryland 20782

March 19, 2015


Margo Schwab, Ph.D.

Office of Management and Budget

725 17th Street, N.W.

Washington, DC 20503


Dear Dr. Schwab:


Please accept these proposed plans for the 2015 NHIS Incentive Experiment, respectfully submitted as a non-substantive change request in compliance with the terms of clearance for the National Health Interview Survey (NHIS) (OMB No. 0920-0214, exp. 12/31/2017).


Like all surveys, the NHIS has experienced gradually declining response rates. Because the NHIS serves as the gold standard for many health measures, it is incumbent upon NCHS to employ proven techniques to ensure the resulting data are of the highest possible quality. A series of activities intended to improve response rates and data quality, with additional consideration for reducing survey costs, will be introduced to the NHIS this year. As one of these activities, the NHIS will include a two-pronged incentives test for the first time. The purpose of the planned randomized controlled experiment is to examine the impact of a $40 completion incentive, both compared to and in combination with a $5 unconditional advance token incentive, on response rates, nonresponse bias and other data quality indicators, and survey costs.


Completion incentives are a well-known tool for boosting response rates, and many large federal surveys use them routinely. For instance, several State and Local Area Integrated Telephone Survey (SLAITS) surveys, such as the 2007 National Survey of Adoptive Parents (Bramlett, Foster, Frasier, et al., 2010) and the 2008 National Survey of Adoptive Parents of Children with Special Health Care Needs (Bramlett, Brooks, Foster, et al., 2010), mailed incentive payments of $25-$30 to participants after the interview. Furthermore, the National Health and Nutrition Examination Survey (NHANES) provided remuneration of up to several hundred dollars to participants at the conclusion of the examination appointment during the 1999-2010 data collection cycles (Zipf, Chiappa, Porter, et al., 2013). Another federal survey, the U.S. Census Bureau's Survey of Income and Program Participation, conducted an incentive experiment in 1996. This in-person survey of roughly 40,000 households tested the difference between providing $10 and $20 prepayments; only the $20 incentive was associated with an increase in the response rate (of 2.4%), while the $10 incentive was not (James, 1997; Davern, Rockwood, Sherrod & Campbell, 2003). Prepaid and promised incentives have been employed in other, non-federal surveys as well. For instance, evidence from a random digit dial survey shows that promised incentives greater than $25 increase response rates (Cantor, O'Hare, and O'Connor, 2008). A British pilot study that tested a promised incentive of ten British pounds in a time use survey found that rates of both interview completion and time use diary completion were higher in households that had been promised the incentive than in households that had not (Lynn, 2001).

Beyond these examples of completion incentives of varying amounts, the planned $40 incentive specifically has support in the literature, as it is currently in use by another NCHS survey. The National Survey of Family Growth (NSFG) has been using and testing the impact of various levels of incentives since 1993 (Lepkowski, Mosher, Groves, et al., 2013). The NSFG initiated the use of incentives with a $20 cash prepayment, but over 14 years of experiments found that a $40 universal cash payment at the start of the interview was the most cost-effective strategy for improving response rates and the representativeness of the sample. (In the interest of a responsive design approach, households requiring refusal conversion in the last two weeks of the quarter are offered $80 instead of $40.) Compared to the smaller amount of $20, the $40 incentive was associated with a 10-percentage-point increase in the response rate (among women, the observed increase was nearly 20 percentage points). In addition, other NSFG performance data showed that the use of incentives reduced costs by decreasing the number of visits required to obtain complete interviews, thus reducing interviewer time and travel expenses.


Support also exists in the literature for the planned $5 unconditional advance token incentive. Although lower token incentive amounts (e.g., $1 or $2) have been successful in increasing the likelihood of participation in very short, low-burden surveys (e.g., parents asked to provide their children's addresses) (Mann, Lynn & Peterson, 2008), most studies of surveys with greater respondent burden (particularly relatively long in-person surveys) have not used such low incentives. The most closely related evidence in favor of a $5 unconditional prepayment comes from a study that tested the impact of increasing prepaid incentives in single-dollar increments on response rates in a large national mail survey (Trussell & Lavrakas, 2004). In that study, a $5 unconditional incentive was the threshold at which the return rate was maximized and then plateaued relative to higher advance incentives (of $6-$10).


Thus, even though the literature suggests that either a $40 completion payment or a $5 prepayment may be a promising strategy for the NHIS to adopt, this pilot study nonetheless remains necessary for a number of reasons. First, there are no hard and fast rules for the precise timing or amount of an ideal survey incentive (Singer & Ye, 2013). This is due in part to gaps in the literature, which has largely focused on telephone and mail surveys and sheds little light on appropriate amounts for in-person surveys. In addition, the complexity of the demonstrated (and varying) associations between incentives of different types and amounts and response rate (Gelman, Stevens & Chan, 2003; Singer et al., 1999; Yu & Cooper, 1983), response quality (Singer & Ye, 2013), and sample composition (Singer & Kulka, 2002; Eyerman, Bowman, Butler & Wright, 2005) makes it difficult to establish clear-cut best practices. Moreover, a wide range of factors may moderate the observed associations (including sample size, sample composition, survey topic, screening rules, incentive characteristics, and cost structures), not all of which have been tested in controlled studies (Cantor, O'Hare, and O'Connor, 2008). For example, one meta-analysis found that higher interview burden (such as long length) was associated with a larger difference in response rate between an incentive condition and a zero-incentive condition (Singer et al., 1999). Likewise, it is not well understood how the effects of prepaid token and promised completion incentives combine or interact. Further, it is unclear whether prepaid incentives (paid at the start of the interview) and promised completion incentives (paid upon interview completion) have the same effect: while pre-interview payout of larger incentives yields higher response rates than promised payment of the same amount, both are associated with improved response rates (Singer et al., 1999). The planned incentives test can add to the extant knowledge base by separating the relative impact of a small prepaid token incentive from that of a larger, post-completion incentive. Findings from this experiment will likely inform future NHIS strategies and may also be useful to other federal household surveys.


The planned incentive experiment will be carried out in the Census regions overseen by the Denver, New York, and Philadelphia Census Regional Offices (ROs) during the three months of May, June, and July, and will involve two components. The first component is a test of the impact of an unconditional advance cash mailing. The second component is a test of the impact of a promised completion incentive. Random assignment will occur independently for (and within) each of the two components, yielding a two-by-two factorial research design. For the first component, which tests the impact of an unconditional advance incentive, a random half of the households included in the experiment will be sent a $5 bill with the advance letter. For the second component, which tests the impact of a completion incentive, a random half of households will receive $20 each for participation in the family module and the sample adult module, for a maximum of $40. (Note: the sample adult module cannot be commenced unless the family module is first completed.) Households will be informed of the incentive in the advance letter and will receive a $20 or $40 debit card mailed to them with the thank-you letter following the interview. (Completion incentives, paid out after the interview, were selected over prepaid incentives, paid out before the interview begins, because Census-internal restrictions on interviewers carrying money preclude prepayment at the start of the interview.) The maximum total incentive amount paid to any one family is $45: $5 prepaid, $20 for the family section, and $20 for the adult section. No experimental condition is attached to the sample child section in this test, as the family and sample adult sections are of unique interest; the decision to restrict the experimental design was made to ensure that the other components have sufficient power. Depending on the outcome of this experiment, future testing may include incentivizing response to the child section.


For both components of the planned incentive experiment, assignment to the test and control groups will be made via a software-generated pre-assigned table of random numbers, and will occur independently within each RO. To avoid scenarios in which next-door neighbors are assigned to different treatment groups (thus potentially complicating matters for the interviewers and leading to discontent among respondents), assignment to the different experimental conditions will occur at the segment level (which is typically equivalent to the block level). This minimizes the likelihood that participants living in close proximity will be assigned to different treatment conditions while maintaining the representativeness of the sample at the Regional Office and county levels.
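For illustration only, the sketch below shows one way segment-level assignment to the four experimental cells could be carried out. It is not the Census Bureau's production assignment system; the Regional Office names are taken from this letter, while the segment identifiers, function names, and use of a seeded random number generator (in place of the pre-assigned table of random numbers) are hypothetical.

```python
# Illustrative sketch of segment-level random assignment to the 2x2 factorial
# design. All households in a segment share the segment's assignment, so
# next-door neighbors are never placed in different treatment groups.
import random

def assign_segments(segments_by_ro, seed=20150319):
    """Assign every segment within each RO to one of the four experimental cells.

    segments_by_ro maps a Regional Office name to a list of segment identifiers.
    Returns a dict mapping segment id -> dict of treatment flags.
    """
    rng = random.Random(seed)  # stands in for the pre-assigned table of random numbers
    assignments = {}
    for ro, segments in segments_by_ro.items():
        for segment in segments:
            advance_5 = rng.random() < 0.5       # random half receives the $5 advance mailing
            completion_40 = rng.random() < 0.5   # independent random half is promised the $40 completion incentive
            assignments[segment] = {"advance_5": advance_5, "completion_40": completion_40}
    return assignments

# Example with hypothetical segment identifiers.
cells = assign_segments({
    "Denver": ["D-0001", "D-0002"],
    "New York": ["N-0001"],
    "Philadelphia": ["P-0001"],
})
print(cells)
```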


Random assignment to the two components of the incentive experiment and to the treatment conditions within each component results in a clean 2x2 factorial design that maximizes analytic power and allows comparison between each of the four cells on a number of indicators and outcomes. Response rate indicators we will examine include differences between the three treatment conditions and the control group in overall and module-specific response rate, completed interview rate, sufficient partial interview rate, and refusal rate. Data quality indicators we will examine include differences between the three treatment conditions and the control group in rates of “don’t know” and “refused” responses, and in population estimates of select outcome variables included in NCHS’ key indicator reports (e.g., health insurance coverage, failure to obtain needed medical care, cigarette smoking and alcohol consumption, and general health status). To assess the impact of incentives on sample composition, we will test for differences not only between the experimental groups described above, but also between survey completers and survey breakoffs, and between survey completers and national census data. Demographic characteristics of interest include age, race/ethnicity, and education level. Lastly, the primary cost indicator we will examine is the number of attempts (household visits and phone calls) required to obtain a completed interview (ascertainable using paradata) in each of the three treatment conditions and the control group.
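As a rough illustration of these planned cell-by-cell comparisons, the sketch below tabulates response, refusal, item-nonresponse, and effort indicators by experimental cell. The record layout and field names are hypothetical placeholders; the actual analysis will draw on the full NHIS interview files and paradata.

```python
# Illustrative sketch of the planned comparisons across the four experimental cells.
import pandas as pd

# Hypothetical household-level records: treatment flags plus interview outcomes.
cases = pd.DataFrame({
    "advance_5":        [True, True, False, False, True, False],
    "completion_40":    [True, False, True, False, False, True],
    "completed":        [1, 1, 0, 1, 1, 0],
    "refused":          [0, 0, 1, 0, 0, 0],
    "dk_rf_items":      [2, 0, 0, 1, 3, 0],   # count of "don't know"/"refused" answers
    "contact_attempts": [3, 5, 7, 2, 4, 6],   # visits and phone calls (from paradata)
})

# Response, refusal, item-nonresponse, and cost indicators for each cell.
by_cell = cases.groupby(["advance_5", "completion_40"]).agg(
    n=("completed", "size"),
    response_rate=("completed", "mean"),
    refusal_rate=("refused", "mean"),
    mean_dk_rf=("dk_rf_items", "mean"),
    mean_attempts=("contact_attempts", "mean"),
)
print(by_cell)
```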


The approximate total sample size for the incentives experiment is 12,200 households, with roughly 40% of cases located in the Denver RO and 30% each in the New York and Philadelphia ROs. We anticipate a household response rate of around 70%, which will result in approximately 8,500 completed interviews. Assuming 80% power, an alpha level of 0.05, and a design effect of 1, a sample of 8,500 cases allows detection of a change in response rates as small as 2 percentage points for the completion incentive component of the experiment. For the unconditional advance incentive component, the number of completed cases in the treatment and control groups is roughly 4,250 each, which provides sufficient statistical power to detect differences in response rates of around 2.9 percentage points under the same assumptions. For key substantive variables, we factored a design effect of 2.5 into the power calculations. Thus, for outcomes with prevalence rates of 10%, we will be able to detect changes in estimates as small as 2.5 percentage points; for more prevalent health conditions or outcomes, we will be able to detect changes in rates of between 3 and 4 percentage points. For rarer outcomes, such as rates of “don’t know” or “refused” responses, we will be able to detect even smaller changes, given that the variance of a proportion is maximized at a prevalence of 0.5 (50%) and decreases as the prevalence moves away from 0.5.
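As a rough check of these figures, the sketch below applies the standard normal-approximation formula for the minimum detectable difference between two proportions under the stated assumptions (80% power, two-sided alpha of 0.05). The group sizes used are illustrative assumptions; the exact allocation and formulas underlying the figures above may differ.

```python
# Minimum detectable difference (normal approximation for two proportions):
#   (z_{1-alpha/2} + z_{power}) * sqrt(deff * 2 * p * (1 - p) / n_per_group)
from math import sqrt
from statistics import NormalDist

def min_detectable_diff(p, n_per_group, alpha=0.05, power=0.80, deff=1.0):
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * sqrt(deff * 2 * p * (1 - p) / n_per_group)

# Advance incentive component: ~4,250 completed cases per group, ~70% response rate.
print(round(100 * min_detectable_diff(p=0.70, n_per_group=4250), 1))             # ~2.8 percentage points

# Substantive outcome with 10% prevalence and a design effect of 2.5
# (illustrative group size of ~6,100, i.e., half of the 12,200 sampled households).
print(round(100 * min_detectable_diff(p=0.10, n_per_group=6100, deff=2.5), 1))   # ~2.4 percentage points
```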




References

  1. Bramlett MD, Brooks KS, Foster EB, et al. Design and Operation of the National Survey of Adoptive Parents, 2007. National Center for Health Statistics. Vital Health Stat 1 (50). 2010.

  2. Bramlett MD, Brooks KS, Foster EB, et al. Design and Operation of the National Survey of Adoptive Parents of Children with Special Health Care Needs, 2008. National Center for Health Statistics. Vital Health Stat 1 (51). 2010.

  3. Cantor D, O'Hare B, O'Connor K. The use of monetary incentives to reduce non-response in random digit dial telephone surveys. In: Lepkowski JM, Tucker C, Brick JM, de Leeuw E, Japec L, Lavrakas PJ, Link MW, Sangster RL, eds. Advances in Telephone Survey Methodology. New York, NY: Wiley; 2008. p. 471-98.

  4. Davern M, Rockwood TH, Sherrod R, Campbell S. Prepaid monetary incentives and data quality in face-to-face interviews: Data from the 1996 survey of income and program participation incentive experiment. Public Opinion Quarterly 2003;67(1):139-147.

  5. Eyerman J, Bowman K, Butler D, Wright D. The differential impact of incentives on refusals: Results from the 2001 national household survey on drug abuse incentive experiment. Journal of Economic and Social Measurement 2005;30(2-3):157-169.

  6. Gelman A, Stevens M, Chan V. Regression modeling and meta-analysis for decision making: A cost-benefit analysis of incentives in telephone surveys. Journal of Business and Economic Statistics 2003;21(2):213-225.

  7. James T. Results of the Wave 1 Incentive Experiment in the 1996 Survey of Income and Program Participation. Baltimore: American Statistical Association; 1997. p. 834-39.

  8. Lepkowski JM, Mosher WD, Groves RM, West BT, Wagner J, Gu H. Responsive design, weighting, and variance estimation in the 2006-2010 National Survey of Family Growth. Vital and Health Statistics, Series 2: Data Evaluation and Methods Research; 2013.

  9. Lynn P. The impact of incentives on response rates to personal interview surveys: Role and perceptions of interviewers. International Journal of Public Opinion Research 2001;13(3):326-336.

  10. Mann SL, Lynn DJ, Peterson AV. The "downstream" effect of token prepaid cash incentives to parents on their young adult children's survey participation. Public Opinion Quarterly 2008;72(3):487-501.

  11. Singer E, Kulka RA. Paying respondents for survey participation. Studies of Welfare Populations: Data Collection and Research Issues 2002:105-128.

  12. Singer E, Van Hoewyk J, Gebler N, Raghunathan T, McGonagle K. The effect of incentives on response rates in interviewer-mediated surveys. Journal of Official Statistics 1999;15(2):217-230.

  13. Singer E, Ye C. The Use and Effects of Incentives in Surveys. Annals of the American Academy of Political and Social Science 2013;645(1):112-141.

  14. Trussell N, Lavrakas PJ. The influence of incremental increases in token cash incentives on mail survey response: Is there an optimal amount? Public Opinion Quarterly 2004;68(3):349-367.

  15. Yu J, Cooper H. A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research 1983;20(1):36-44.

  16. Zipf G, Chiappa M, Porter KS, et al. National Health and Nutrition Examination Survey: Plan and operations, 1999–2010. National Center for Health Statistics. Vital Health Stat 1(56). 2013.




cc:

V. Buie

T. Richardson


