
2022 National Survey on Drug Use and Health (NSDUH)


OMB: 0930-0110


National Survey on Drug Use and Health (NSDUH) Incentive Experiment Addendum


Similar to other national surveys, response rates for NSDUH have been declining (Center for Behavioral Health Statistics and Quality, 2013; 2014; 2015; 2016; 2017), and this is particularly true for screening response rates. Adding a screening incentive and increasing the main interview incentive could increase participation at both the screening and interviewing stages. If lower response rates reflect systematic shifts in who is being screened, then increasing screening response could reduce nonresponse at both stages and, therefore, the potential for nonresponse bias in key NSDUH estimates.


The literature on incentives demonstrates that monetary incentives are effective in increasing survey cooperation rates. Multiple studies have shown that incentives tend to increase participation among sample members who are less interested in or involved with the survey topic (Groves, Singer, & Corning, 2000; Groves, Presser, & Dipko, 2004; Groves et al., 2006). Research has also demonstrated that respondent incentives (either at the screening or interview stage) can increase response rates and, in some situations, can reduce nonresponse bias in survey estimates (Armstrong, 1975; Church, 1993; Groves & Couper, 1998; Groves et al., 2006; Kulka, 1994; Singer, 2002; Singer & Ye, 2013).


Like those of many other national surveys, NSDUH response rates have been declining steadily in recent years (Williams & Brick, 2018), and this trend was exacerbated in 2020 by the public health emergency related to COVID-19. Data collection methods were modified during 2020 to ensure the safety of the public and field interviewers (FIs). Quarter 1 (January to March 2020) was completed using standard NSDUH protocols with in-person data collection; however, Quarter 1 data collection ended 15 days early when work was suspended on March 16, 2020.

Data collection resumed in Quarter 4, but in-person data collection was limited to a small number of eligible states and counties due to COVID-19 infection rates. Web-based screening and interviewing procedures were developed to account for suspended data collection in Quarter 2 and most of Quarter 3 and to maximize the number of completed interviews for national estimates. NSDUH data collection in Quarter 4 used in-person and web-based procedures.

However, because the vast majority of the interviews completed in Quarter 4 (93 percent) were completed via the web-based option, comparisons of response rates between Quarter 1 and Quarter 4 can largely be thought of as differences between in-person (Quarter 1) and web-based (Quarter 4) response rates. The weighted screening response rate in Quarter 1 was 67.8 percent. The weighted interview response rate and overall response rate in Quarter 1 were 63.2 and 42.9 percent, respectively.

Although the addition of web-based data collection in Quarter 4 increased the number of completed interviews, web-based data collection yielded lower response rates than in-person data collection. Further, Quarter 4 in-person response rates were negatively affected by the COVID-19 pandemic. Specifically, the pandemic could have exacerbated people's reluctance to open their doors to FIs in areas where in-person interviewing was allowed. FIs also may have had less time to follow up with households that were initially eligible for in-person data collection but became ineligible as state- and county-level health metrics changed.

The weighted screening response rate in Quarter 4 was 11.1 percent. The weighted interview response rate and overall response rate in Quarter 4 were 59.5 and 6.6 percent, respectively.
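The overall response rate in each quarter is consistent with the product of the weighted screening and interview response rates. The minimal sketch below illustrates that relationship using the rounded figures reported above (the product relationship is assumed here; small discrepancies reflect rounding of the published rates):

```python
# Overall response rate as the product of the stage-level rates
# (rates expressed as proportions; figures taken from the text above).

def overall_response_rate(screening_rate: float, interview_rate: float) -> float:
    """Combine weighted screening and interview response rates into an overall rate."""
    return screening_rate * interview_rate

# Quarter 1 (in-person): 0.678 * 0.632 is about 0.428, consistent with the
# reported 42.9 percent once rounding of the published rates is considered.
print(overall_response_rate(0.678, 0.632))

# Quarter 4 (mostly web-based): 0.111 * 0.595 is about 0.066 (6.6 percent).
print(overall_response_rate(0.111, 0.595))
```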


Despite increases in the level of effort to complete screenings and interviews (as measured by the number of contact attempts), response rates continue to decline, particularly at the screening stage. Increasing screening response could reduce nonresponse bias by bringing in households whose residents are less interested in substance use or mental health issues.


Currently, NSDUH interview respondents are offered a $30 cash incentive only when they complete the main interview. No incentive is offered for completing the household screening or as part of refusal conversion efforts. SAMHSA would like to evaluate whether adding a screening incentive and increasing the interview incentive increase the likelihood of participation in the household screening and subsequent interview(s). The NSDUH incentive experiment will involve testing both a new screening incentive ($5 vs. $0) and an increased interview incentive ($50 vs. $30). For more information on the screening and interview incentive experimental design and power analysis, see the Power Analysis Report (Attachment IE-1).


The higher interview incentive amount is proposed based on a review of other nationally representative in-person surveys that have recently conducted experimental tests of the impact of increasing the interview incentive amount. Examples of the surveys reviewed include:

  • Medical Expenditure Panel Survey (MEPS). The interview incentive was increased from $30 to $50 in 2011. A higher incentive amount of $70 has been tested, but not implemented.

  • Panel Study of Income Dynamics (PSID). Between 2002 and 2015, the PSID interview incentive was increased in $5 increments three times, roughly every 3 to 4 years, from a starting incentive of $55 in 2003 to the current incentive of $70 in 2015.

  • National Survey of Family Growth (NSFG). The interview incentive has been $40 since 2006, with an increased incentive promised for nonresponse follow-up. A higher incentive amount of $60 has been tested, but not implemented.


These examples illustrate the value of testing different interview incentive amounts to determine whether increased and/or additional incentives increase response rates and reduce nonresponse bias in key estimates.


Evidence from many surveys across different data collection modes indicates that prepaid incentives are usually more effective than promised (or conditional) incentives, especially for self-administered surveys (Gelman, Stevens, & Chan, 2002; Singer, 2002). Most of these studies used incentives delivered in advance by mail. For surveys where nearly all selected dwelling units (DUs) meet the eligibility criteria, sending prepaid screening incentives by mail can be cost-effective. Key factors in determining whether prepaid incentives are cost-effective are the eligibility rates of the selected DUs and of the individuals who live in them.
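To make the role of eligibility rates concrete, the sketch below works through an illustrative prepaid-mailing cost calculation. The $5 incentive amount, $1 mailing cost, and eligibility rates used are hypothetical assumptions for illustration only, not NSDUH figures:

```python
# Illustrative cost sketch for a prepaid, mailed screening incentive.
# All inputs below are hypothetical assumptions, not NSDUH figures.

def prepaid_cost_per_eligible_du(incentive: float,
                                 mailing_cost: float,
                                 eligibility_rate: float) -> float:
    """Cost of prepaying every selected DU, expressed per eligible DU.

    Prepaid incentives are mailed before eligibility is known, so dollars
    sent to ineligible DUs are spread over the eligible ones.
    """
    return (incentive + mailing_cost) / eligibility_rate

# With a $5 prepaid incentive, $1 mailing cost, and 90 percent of selected
# DUs eligible, the effective cost is about $6.67 per eligible DU.
print(round(prepaid_cost_per_eligible_du(5.0, 1.0, 0.90), 2))

# If only 60 percent of selected DUs were eligible, the same mailing would
# cost $10.00 per eligible DU, which is why eligibility rates drive whether
# prepayment is cost-effective.
print(round(prepaid_cost_per_eligible_du(5.0, 1.0, 0.60), 2))
```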


We propose a 2x2 design to assess the separate impacts of the screening incentive and the increased interview incentive on response rates and, therefore, on the potential reduction in nonresponse bias. Table 1 presents the four possible combinations of screening and interview incentive amounts that could comprise the four experimental conditions; an illustrative assignment sketch follows the table.


Table 1. Four Possible Experimental Conditions for the Incentive Experiment



                                 Screening Incentive Amounts
Interview Incentive Amounts      $0 screening                         $5 screening
$30 interview                    1: $0 screening + $30 interview      2: $5 screening + $30 interview
$50 interview                    3: $0 screening + $50 interview      4: $5 screening + $50 interview

Note: Condition 1 represents current NSDUH practice.
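The sketch below shows one way the four conditions in Table 1 could be represented and randomly assigned to sampled dwelling units. The DU-level assignment, equal allocation, and fixed seed are assumptions for illustration only; the actual experimental design is described in the Power Analysis Report (Attachment IE-1).

```python
import random

# The four experimental conditions from Table 1 as (screening, interview)
# incentive amounts in dollars. Condition 1 is current NSDUH practice.
CONDITIONS = {
    1: (0, 30),
    2: (5, 30),
    3: (0, 50),
    4: (5, 50),
}

def assign_conditions(dwelling_unit_ids, seed=2022):
    """Randomly assign each DU to one of the four conditions with equal
    probability (illustrative; actual assignment could be done at the
    segment level and balanced within strata)."""
    rng = random.Random(seed)
    return {du: rng.choice(list(CONDITIONS)) for du in dwelling_unit_ids}

# Example: assign ten hypothetical DU identifiers.
assignments = assign_conditions([f"DU{i:03d}" for i in range(10)])
for du, condition in assignments.items():
    screening, interview = CONDITIONS[condition]
    print(du, f"condition {condition}: ${screening} screening + ${interview} interview")
```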


The screening and interview incentive experiment will be used to assess the following (an illustrative analysis sketch appears after this list):

  1. The impact of offering a screening incentive on screening response rates (SRR).

  2. The impact of a higher interview incentive on screening and interview response rates (IRR).

  3. The impact of offering a screening incentive on nonresponse bias by examining the demographic composition of households screened.
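As a rough sketch of how the first two assessments could be examined, the 2x2 layout allows the screening-incentive and interview-incentive effects to be estimated as main effects (plus their interaction). The example below fits an unweighted logistic regression to simulated case-level data; the baseline rate, effect size, and sample size are arbitrary assumptions, and the production analysis would use NSDUH's design-based weights and variance estimation rather than this simplification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000  # hypothetical number of sampled DUs (illustrative only)

# Independently randomized factors of the 2x2 design.
screen5 = rng.integers(0, 2, n)      # 1 = $5 screening incentive offered
interview50 = rng.integers(0, 2, n)  # 1 = $50 interview incentive offered

# Simulate screening completion with a small screening-incentive effect;
# the 30 percent baseline and 5-point effect are arbitrary assumptions.
p_screen = 0.30 + 0.05 * screen5
screened = rng.binomial(1, p_screen)

df = pd.DataFrame({"screen5": screen5, "interview50": interview50,
                   "screened": screened})

# Unweighted logistic regression with main effects and their interaction;
# the production analysis would use design-based weights and variances.
model = smf.logit("screened ~ screen5 * interview50", data=df).fit()
print(model.summary())
```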


We will use the results of the experiment to make recommendations on the use of incentives for future NSDUHs. The screening incentive and increased interview incentive will be considered effective if the results show differences between experimental conditions that are both meaningful and statistically significant.


We project that an increase of approximately 5 percentage points in either the screening or interview response rate will be meaningful. Our study design will be able to detect differences in screening response rates of at least 4.7 percentage points and differences in interview response rates of at least 5.2 percentage points. Differences at or above these thresholds will likely be statistically significant, and we would interpret them as meaningfully different; as such, these values represent our indicators of incentive effectiveness.

In addition, we expect the screening incentive to affect the demographic composition of households screened. The screening incentive will be considered to have reduced nonresponse bias if, for one or more demographic characteristics, (1) the estimate differs significantly between the $0 and $5 screening incentive conditions and (2) the estimate from the $5 incentive condition is closer to American Community Survey (ACS) estimates. For age groups, marginal mean differences ranging from 1.8 to 4.0 percentage points would be detectable as statistically significant; for gender, a marginal mean difference of 3.3 percentage points would be detectable; and for race/ethnicity, marginal mean differences ranging from 2.8 to 5.0 percentage points would be detectable. Because nonresponse bias is a property of each survey estimate, differences in response rates could have no practical impact on nonresponse bias for some estimates and a meaningful impact for others.
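For intuition about where detectable differences of roughly 4.7 and 5.2 percentage points come from, the sketch below applies a standard two-proportion formula under simple random sampling. The baseline rate and per-condition sample size are placeholders, and the actual calculations, which reflect NSDUH's complex design, are presented in the Power Analysis Report (Attachment IE-1):

```python
from math import sqrt
from scipy.stats import norm

def minimum_detectable_difference(p_baseline: float, n_per_group: int,
                                  alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate minimum detectable difference between two proportions
    under simple random sampling (two-sided test, equal group sizes)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    # Conservative variance approximation using the baseline rate for both groups.
    se = sqrt(2 * p_baseline * (1 - p_baseline) / n_per_group)
    return (z_alpha + z_power) * se

# Hypothetical inputs: a 40 percent baseline screening response rate and
# 1,500 sampled DUs per condition (placeholders, not NSDUH parameters).
# With these values the detectable difference is roughly 5 percentage points.
print(round(minimum_detectable_difference(0.40, 1500), 3))
```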


An important outcome of the incentive experiment will be a clear, informed decision about whether adding a screening incentive and/or increasing the interview incentive is effective in increasing response rates and potentially limiting nonresponse bias. The experimental design will determine whether the screening incentive has a significant effect on response rates independent of the increased interview incentive, whether the increased interview incentive has the primary impact on screening and interview response rates, or whether a specific combination of the screening and interview incentive amounts has the greatest impact on response rates. The incentive experiment provides a foundation for an evidence-based decision on whether adding a screening incentive, increasing the interview incentive, or combining the two may improve response rates and mitigate nonresponse bias in future NSDUHs.


Attachments

IE-1. Power Analysis Report

References

Armstrong, J. S. (1975). Monetary incentives in mail surveys. Public Opinion Quarterly, 39, 111-116. https://doi.org/10.1086/268203


Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57, 62-79. https://doi.org/10.1086/269355


Gelman, A., Stevens, M., & Chan, V. (2002). Regression modeling and meta-analysis for decision making: A cost-benefit analysis of incentives in telephone surveys. Journal of Business & Economic Statistics, 21, 213-225. https://doi.org/10.1198/073500103288618909


Groves, R. M., & Couper, M. P. (1998). Nonresponse in household interview surveys. New York, NY: John Wiley & Sons.

Groves, R. M., Couper, M. P., Presser, S., Singer, E., Tourangeau, R., Acosta, G. P., & Nelson, L. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, 70(5), 720-736.


Groves, R. M., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68(1), 2-31.


Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation - Description and an illustration. Public Opinion Quarterly, 64(3), 299-308.


Kulka, R. (1994). The use of incentives to survey "hard-to-reach" respondents: A brief review of empirical research and current practice. Paper presented at the Seminar on New Directions in Statistical Methodology, sponsored by the Council of Professional Associations on Federal Statistics, Bethesda, MD.


Singer, E. (2002). The use of incentives to reduce nonresponse in household surveys. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 163-177). New York, NY: Wiley.

Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The Annals of the American Academy of Political and Social Science, 645, 112–141.


Williams, D., & Brick, M. (2018). Trends in U.S. face-to-face household survey nonresponse and level of effort. Journal of Survey Statistics and Methodology, 6, 186-211.



