Research to support the National Crime Victimization Survey (NCVS)

OMB: 1121-0325

Attachment 1. Incentives



Our intent in Attachment 1 is not to imply comparability between the SCV and other federal surveys that incorporate incentives. Rather, the purpose of Attachment 1 is to demonstrate (1) the breadth and depth of research related to the use of incentives in federal surveys, and (2) the full range of research that was considered and evaluated during the developmental stages of an SCV incentive strategy. We view this as an effort toward due diligence in developing and proposing a strategy for the current study, and we do not arrive at our proposed incentive amount by comparison to other surveys or to the incentive amounts used in those surveys.


This section describes our plan for testing a $10 promised incentive against a $0 incentive condition in the SCV field test. First, we provide background information about the NCVS and the benefit of using incentives with self-administered survey modes. Next, a brief summary of the literature on the use of incentives is provided, highlighting a number of federal surveys, including several conducted by the Census Bureau, that offer incentives to respondents. Prepaid and promised incentives are then discussed, followed by the rationale for selecting the $10 promised incentive amount for the SCV data collection.


Background


Historically, the NCVS has relied on a combination of face-to-face and telephone interviews during data collection. Once an interviewer has established rapport with a household during an initial face-to-face interview, subsequent NCVS interviews are conducted by telephone unless the respondent requests an in-person interview. Declining response rates in the U.S., coupled with rising costs of data collection, are posing challenges to probability surveys regardless of mode. These challenges have led the Bureau of Justice Statistics (BJS) to explore less expensive modes of data collection to reduce survey costs for the NCVS, namely the self-administered modes of web and inbound CATI.1


Self-administered modes of data collection have historically achieved lower response rates than traditional interviewer-administered modes. This is believed to be largely due to the absence of an interviewer to gain initial cooperation from a sample member who may be reluctant to participate. The use of incentives is one of the common remedies for low response rates in self-administered surveys (Armstrong 1975; Church 1993; Cox 1976; Fox, Crask and Kim 1978; Dillman 2007; Heberlein and Baumgartner 1978; Kanuk and Berenson 1975; Levine and Gordon 1958; Linsky 1975; Yu and Cooper 1983). An additional benefit of using incentives is the potential to decrease nonresponse bias by bringing in sample persons with low topic interest (e.g., Baumgartner and Rathbun, 1997; Groves, Singer and Corning, 2000). Incentives have never been used in NCVS data collection; however, their utility, and the need to explore their use as part of this research, arise from the characteristics of self-administered survey approaches. This section provides the rationale for introducing promised incentives into this experimental study design.


The proposed research evaluates two self-administered modes of data collection for the NCVS (web and inbound CATI) and the impact of incentive use on response rates. Self-administered modes are less expensive, but they also yield lower response rates than interviewer-administered modes. Lower response rates are also likely for interviewer-administered surveys in which the interviewer does not play an active role in gaining cooperation, as in inbound CATI. An additional consideration for this mode is that respondents may incur expenses, as would be the case for the cell-phone-only population.2 The desire to achieve response rates and standard errors comparable to the current design necessitates the use of incentives in experimental conditions that require respondent action. This is true not only for the initial contact in the first wave, but also in subsequent waves, when the mode of data collection changes to self-administration (web and inbound CATI) and rapport with the interviewer from the previous wave is no longer a design feature that can boost cooperation.


Use of Incentives


The mechanisms that evoke higher participation when incentives are used are unclear. Two competing theories suggest that incentives may be construed either as a token of appreciation (social exchange theory) or as compensation for one's time and effort (economic exchange theory). Which mechanism is dominant may not make a difference in cross-sectional surveys, but it would likely affect cooperation in panel surveys, where the decision to participate in the first wave is, to a certain extent, a commitment to take part in subsequent waves, and the experience in the first wave is likely to be the most influential factor in future decisions to participate (Singer et al., 1998).


Longitudinal surveys often use incentives to build initial rapport with the panel respondents as participation in the baseline wave usually sets the retention rate for the life of the panel (Singer et al., 1998). That is why sizable incentives in the first wave of data collection are often recommended (Singer et al., 1998). For example, in an incentive experiment on Wave 1 of the 1996 Survey of Income and Program Participation (SIPP, U.S. Census Bureau), James (1997) found that the $20 prepaid incentive significantly lowered nonresponse rates in Waves 1-3 compared to both the $10 prepaid and the $0 conditions. Mack et al. (1998), examining cumulative response through Wave 6, found that an incentive of $20 reduced household, person, and item (gross wages) nonresponse rates in the initial interview and that cumulative household nonresponse rates remained significantly lower at Wave 6 (24.8 percent in the $20 group vs. 27.6 percent in the $0 incentive group, and 26.7 percent in the $10 group), even though no further incentive payments were made.


In addition, there seems to be no evidence that paying an incentive in one wave creates an expectation of incentives in subsequent waves of data collection. For example, research on the Health and Retirement Survey (HRS) suggests that respondents who are paid a refusal conversion incentive during one wave do not refuse at a higher rate than other converted refusers when reinterviewed during the next wave (Lengacher et al., 1995). Similarly, Singer et al. (1998) found that respondents in the Survey of Consumer Attitudes who received a monetary incentive in the past were more likely to participate in a subsequent survey, despite receiving no further payments.


This research seeks to test two experimental conditions that represent different combinations of interviewer- and self-administered modes. The most efficient design would offer incentives only to respondents who receive the web or inbound CATI modes, which lack an interviewer to motivate participation. However, mixed-mode designs employ combinations of modes, and respondents in the same household are often interviewed in different modes. In order to treat respondents in the same household equally, and to provide comparisons across modes that are not confounded by the offer of an incentive, we need to offer incentives to everyone in the household, regardless of mode.
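

To make the household-level assignment concrete, the sketch below (in Python) shows one way sampled households could be randomly assigned to the incentive conditions so that every member of a household receives the same treatment regardless of mode. The household identifiers, function name, and seed are illustrative assumptions, not part of the SCV design.

    import random

    def assign_incentive_condition(household_ids, amounts=(0, 10), seed=2012):
        """Randomly assign each sampled household to a promised-incentive amount.

        Assignment is made at the household level so that every eligible adult in a
        household falls in the same incentive condition, regardless of the mode
        (face-to-face, outbound CATI, inbound CATI, or web) in which he or she
        eventually responds.
        """
        rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
        # Simple (unbalanced) randomization; a balanced design would shuffle the
        # list of households and split it into equal-sized groups instead.
        return {hh_id: rng.choice(amounts) for hh_id in household_ids}

    # Hypothetical example with five sampled addresses
    print(assign_incentive_condition(["HH001", "HH002", "HH003", "HH004", "HH005"]))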


A common argument against the use of incentives is the cost associated with them. Yet incentives can reduce the cost per case by reducing the amount of interviewer follow-up needed with sample members who do not respond. Such evidence is provided by the incentive experiments conducted for the National Survey on Drug Use and Health (NSDUH, Substance Abuse and Mental Health Services Administration). Cost per interview in the $20 group was 5 percent lower than in the control (no incentive) group, and in the $40 group costs were 4 percent lower than in the control group. The cost savings resulted from interviewers spending less time trying to obtain cooperation from respondents (Kennet et al., 2005). These savings were realized through reduced interviewer labor as well as reduced travel costs (mileage, tolls, parking, etc.). Similar results were obtained in an incentive experiment conducted for the National Survey of Family Growth (NSFG, National Center for Health Statistics) Cycle 5 Pretest, which examined $0, $20, and $40 incentive amounts. As in the NSDUH experiments, the additional incentive costs were more than offset by savings in interviewer labor and travel costs (Duffer et al., 1994). Currently, the NSDUH offers $30 for an interview that averages 60 minutes, while the NSFG offers $40 for interviews that last about 60 minutes for males and 80 minutes for females.
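

As a purely illustrative back-of-the-envelope calculation, the sketch below shows how a promised incentive can lower the cost per completed case when it reduces the follow-up effort needed to obtain cooperation. All dollar figures, workload assumptions, and response rates in the sketch are hypothetical; they are not drawn from NSDUH, NSFG, or SCV budgets.

    def cost_per_complete(sample_size, response_rate, followup_hours_per_case,
                          hourly_cost, travel_cost_per_case, incentive):
        """Rough cost per completed interview under hypothetical workload assumptions."""
        completes = sample_size * response_rate
        field_cost = sample_size * (followup_hours_per_case * hourly_cost
                                    + travel_cost_per_case)
        incentive_cost = completes * incentive  # promised incentive paid only to completers
        return (field_cost + incentive_cost) / completes

    # Hypothetical comparison: a $20 promised incentive that raises cooperation and
    # trims the average follow-up effort per sampled case can lower the cost per
    # completed interview even after the incentive payments are counted.
    print(round(cost_per_complete(1000, 0.60, 2.0, 25.0, 15.0, 0), 2))   # no incentive
    print(round(cost_per_complete(1000, 0.66, 1.5, 25.0, 15.0, 20), 2))  # $20 promised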


In addition to NSDUH and NSFG, many other federally sponsored surveys offer incentives to gain cooperation. For example, incentives for the National Health and Nutrition Examination Survey (NHANES, National Center for Health Statistics) range from $20 to $100, depending on the number of survey sections and exams that are completed. The National Survey of Adoptive Parents of Children with Special Health Care Needs (Department of Health and Human Services) offers parents $25 for participation in a 35-minute telephone survey. In order to improve response rates, reduce the number of contacts required to gain cooperation, and address respondent concerns about interview burden, the National Survey of Child and Adolescent Well-Being (NSCAW, Administration for Children and Families) in 2002 doubled the incentive offered to respondents from $25 to $50. The Early Childhood Longitudinal Study-Birth Cohort (ECLS-B, U.S. Department of Education) offered parent participants $50 and a children's book for the first wave and $30 and a children's book for subsequent waves of data collection. Over rounds 1 through 10 of the National Longitudinal Survey of Youth 1997 (NLSY97, Bureau of Labor Statistics) cohort, incentives offered to respondents ranged from $10 to $50 in an attempt to minimize attrition across waves of data collection. The National Immunization Survey (NIS, National Center for Immunization and Respiratory Diseases) offers a combination of $5 prepaid and $10 promised incentives to encourage eligible nonrespondents to participate. Interviewers in the National Intimate Partner and Sexual Violence Survey (an RDD survey) offer a $10 promised incentive for an interview that lasts a maximum of 30 minutes. These federal studies, several of which also involve the collection of sensitive data, illustrate the wide range of incentive amounts offered to respondents given their survey mode options and the perceived response burden. This information, coupled with findings in the survey literature, is helpful in determining the optimal incentive amounts to test against a $0 incentive condition in the SCV.


As noted earlier, the U.S. Census Bureau has also experimented with and begun offering incentives for several of its longitudinal panel surveys, including SIPP and the Survey of Program Dynamics (SPD). SIPP has conducted several multi-wave incentive studies, most recently with its 2008 panel, comparing results of $10, $20, and $40 incentive amounts to those of a $0 control group. These studies have examined response rate outcomes in various subgroups of interest (e.g., the poverty stratum), the use of targeted incentives for non-interview cases, and the impact of base wave incentives on participation in later waves of data collection. Overall, the results suggest that $20 incentives increase response rates and also improve the conversion rate for non-interview cases. Incentives may also have an additional impact on response rates for households in the poverty stratum and may significantly reduce item nonresponse rates (see Creighton et al., 2007; Clark and Mack, 2009). Similarly, SPD has conducted four incentive studies, testing $20, $40, $50, and $100 amounts in an effort to increase cooperation among poverty households and nonrespondents and to minimize attrition in subsequent waves of the study. Incentives were found to have a positive impact on both response and attrition rates; most recently, the fourth incentive study found that the average interview rate greatly increased with the use of incentives (Creighton et al., 2007).


Prepaid vs. Promised Incentives


Studies in the survey literature provide mixed evidence on the use of prepaid vs. promised incentives. While prepaid monetary incentives have consistent, positive effects on response rates in mail surveys (see Church, 1993, for a meta-analysis), there is no evidence of a significant advantage of prepaid over promised incentives in interviewer-administered surveys (see Singer et al., 1999, for a meta-analysis). Research on the use of incentives in web surveys likewise shows no evidence of an advantage of prepaid over promised incentives. For example, Bosnjak and Tuten (2003) experimentally compared four incentive conditions (a $2 prepaid incentive vs. a $2 promised incentive vs. a promised prize draw vs. no incentive) and found no advantage of prepaid incentives over promised incentives in terms of willingness to participate, actual completion rates, or incomplete responses. Such findings demonstrate that the effectiveness of incentives is largely design driven and depends on a set of interacting factors: topic interest, burden of the request, timing of the incentive, mode of data collection, and other essential survey conditions. As such, there is value in a permanent randomized incentive component in every survey, rather than reliance on methodological studies that are informative for a particular estimate, from a particular survey, at a particular time (Groves, 2008).


Recent incentive experiments in the initial pilot for the National Household Education Survey (NHES, U.S. Department of Education) tested a promised $5 incentive to complete the topical survey on the phone (Tubman and Williams, 2010). This experiment found response rates for the $5 promised incentive to be higher by 6 to 8 percentage points; however, due to the small sample sizes (approximately 40 to 50 in each group), those differences did not reach statistical significance. Additional experiments have explored the use of a prepaid incentive included with both the mail screener and mail topical survey requests. Based on results of the 2011 field test, the NHES currently seeks to refine the optimal incentive strategy for the study by testing alternative incentive approaches for the topical survey component. The study will provide a $5 prepaid incentive with the mail screener, followed by a prepaid incentive with the topical survey request. The topical incentive amount will vary based on the response to the screener mailing: early screener respondents will receive $5, while an experiment will be launched for late screener respondents to receive either $5 or $15 with the topical survey. Similarly, the incentive strategy to be deployed for the National Survey of Early Care and Education (NSECE, Administration for Children and Families) is based on the outcomes of incentive experiments implemented during its 2011 field test. Like the NHES, the NSECE will offer a $2 incentive in the first mailing of the household screener, an amount that proved more successful than the $1 incentive tested in the NSECE field test. An additional prepaid incentive of $5 will be mailed with the eligibility notification letter that precedes the in-person interviews.
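

To illustrate why a 6 to 8 percentage point difference can fail to reach significance with only 40 to 50 cases per group, the following sketch applies a standard two-proportion z-test to hypothetical counts of roughly the size reported for the NHES pilot. The specific counts are assumptions chosen for illustration, not the actual NHES results.

    from math import erf, sqrt

    def normal_cdf(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def two_proportion_ztest(x1, n1, x2, n2):
        """Two-sided z-test for the difference between two independent proportions."""
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
        se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
        z = (p1 - p2) / se
        return z, 2.0 * (1.0 - normal_cdf(abs(z)))

    # Hypothetical groups of 45: a roughly 7-point gap (56% vs. 49%) yields a large p-value.
    z, p_value = two_proportion_ztest(25, 45, 22, 45)
    print(round(z, 2), round(p_value, 3))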


The NHES and NSECE incentive experiments yield important findings for the research community on the impact of prepaid incentives included with mailed survey requests. However, given the choice of survey modes across waves, the sampling frame, and the eligibility criteria for the SCV, several important factors argue for the use of a larger promised incentive rather than a prepaid incentive. First, the advantages of prepaid over promised incentives have been demonstrated consistently for mail surveys (see Church, 1993, for a meta-analysis). Our design requires sample members to answer questions in a face-to-face setting, on the phone (inbound or outbound CATI), or via the web. To the extent that the survey request and the incentive are separated from the survey instrument (available for self-administration online or over the phone), the effectiveness of prepaid incentives on participation rates remains unknown. Second, prepaid incentives are usually sent to a household in the expectation that an eligible member of the household will cooperate with the survey request. No research examines the effect of a single prepaid incentive on survey participation by all eligible members of the household, and a prepaid incentive that is not sent to each individual participant in the survey risks backfiring. In a design where the sampling frame is a list of addresses and the household composition is unknown, it is impossible to prepay incentives to all potential survey participants. In addition, the SCV Crime Screener is not a separate instrument from the Crime Incident Report, but rather a tool for identifying how many Crime Incident Reports need to be completed. Thus, gaining knowledge about the household composition before the initial mailing of survey materials will not be an option for this study.


An argument can be made for the use of prepaid incentives in the second wave of data collection, as the household composition will be known and the survey will target only those who responded to the Wave 1 survey request. However, given that the biggest advantage of prepaid incentives is in gaining initial cooperation and trust, we want to avoid adding cost and changing the survey design in the second wave of data collection, when rapport has already been established with the sampled households and respondents already know what to expect. Taking into consideration the SCV design features noted above (address-based sample, selection of all adults in the household, multiple survey modes), the survey literature, and the mode comparisons that are central to this research, we recommend a promised incentive strategy for the SCV.


In considering the benefits of promised incentives in the SCV, we examined additional studies that have demonstrated significant effects of promised incentives compared to a no-incentive condition. For example, Cantor et al. (2003) found an almost 10 percent increase in the response rate when promising $20 (vs. no incentive) in an RDD survey of caregivers to children ages 0-17. Similarly, a meta-analysis by Yu and Cooper (1983) found that promised incentives significantly improved response rates. A number of studies have reported gains in response rates from offering relatively large amounts of money ($25 or greater) at the end of the data collection period (e.g., Olson et al., 2004; Curtin et al., 2005).


Incentive Amount


In theory, incentives aid participation in two ways. First, they can be conceived of as a "token" that elicits social exchange between the sample member and the survey organization and sponsor, with each side doing something good for the other party without regard to economic value. Small prepaid incentives can be viewed as merely invoking good will under social exchange theory. Some argue that what is achieved is not social exchange but rather the establishment of the legitimacy of the survey request; even so, the amount is not central. Second, while larger incentive amounts can also be seen as invoking social exchange, they can directly motivate sample members to participate by providing a direct benefit to the respondent in exchange for the time and burden of answering the survey questions. Such incentives can therefore be promised, conditional on completion of the survey. It is possible that too small an amount for a promised incentive will not help increase participation, as it can be seen as placing too little value on the respondent's time. Although the cognitive mechanisms by which incentives influence survey participation are not well understood, other fields have proposed theories and empirical evidence showing that small monetary incentives intended to provide extrinsic motivation can "backfire" and that the incentive needs to be larger for it to work (Gneezy and Rustichini, 2000; Gneezy, 2003; Pouliakas, 2010). A critical risk in experiments is the dosage of the manipulation (in this case, the incentive amount); being too conservative can risk the success of the entire experiment. An additional concern here is that too small a promised incentive may fall below the threshold that many respondents expect in exchange for an approximately 15-20 minute survey.


The choice of an incentive amount largely depends on the survey burden (including the survey length and other tasks that may be required of the respondent), the survey topic, and whether the incentive is promised or prepaid. Promised incentives tend to be larger than prepaid incentives; Strouse and Hall (1997) recommend that, to be successful, promised incentives be in the range of $15-$35. As noted above, a number of federally funded surveys, including the NSDUH, SIPP, HRS, SCA, SPD, NHANES, and the NSFG, currently provide incentives. The amounts vary based on the interview length, the perceived sensitivity of the survey topics, and other study design factors. Like some of these studies, the SCV includes sensitive questions, but it is much shorter (estimated respondent burden is 7-8 minutes per Crime Screener, plus 8-9 minutes for each completed Crime Incident Report). However, unlike these interviewer-administered studies, the SCV focuses on self-administered modes of data collection, which necessitate the use of incentives (see Dillman, 2007).


Given the SCV design and survey length, we propose to test a $10 promised incentive against a $0 incentive condition. We will implement the same incentive design in Wave 2, when it is particularly important to maximize response rates for web and inbound CATI, the primary modes of data collection in that wave. The importance of maintaining a consistent incentive structure in both waves of data collection is twofold. On the one hand, the survey conditions presented in Wave 1 are likely treated by respondents as the conditions to which they agreed for the duration of their enrollment in the panel. More importantly, in Wave 2 we are changing the modes of data collection and want to be able to attribute potential differences in response rates across waves to the use of interviewer-administered vs. self-administered modes; changing the incentive structure would confound such analyses. The promised incentive level of $10 is based on prior studies that have found significant effects of promised incentives (compared to a no-incentive condition) where at least $5, and in most cases $15 or more, was offered to respondents upon completion of the survey (Yu and Cooper, 1983; Strouse and Hall, 1997; Singer et al., 1998; Singer et al., 2000; Cantor et al., 2003).


The optimal incentive amount is not the focal point of this study. Incentives are employed in order to study whether comparable response rates can be achieved in self-administered modes. While it is possible that a lower amount (say, $5) may suffice, without previous tests with the NCVS instruments or population the entire experiment could be jeopardized by null findings due to an insufficient incentive amount and low response rates. It is also possible that a $10 incentive is not sufficient to motivate respondents to self-administer the survey. Additional embedded experiments could be conducted to identify the optimal incentive amount, whether higher or lower. However, if embedded in the current design, such experiments would require increases in sample size to detect differences between mode and incentive conditions with adequate statistical power. The current budget does not allow this, and as a result the design is limited to a test of a single incentive amount across each of the two experimental conditions. However, if the use of incentives in the tested modes yields comparable response rates and standard errors of key estimates across conditions, the next step for research will be to identify the optimal incentive amount experimentally.
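

As an illustration of the sample size implication noted above, the sketch below uses the standard normal-approximation formula for comparing two proportions to show roughly how many completed cases per condition would be needed to detect a given difference in response rates. The response rates, significance level, and power are hypothetical planning values, not SCV design targets.

    from statistics import NormalDist

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        """Approximate per-group sample size to distinguish response rates p1 and p2
        with a two-sided test at significance level alpha and the stated power."""
        z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # about 1.96 for alpha = 0.05
        z_beta = NormalDist().inv_cdf(power)                # about 0.84 for 80 percent power
        variance = p1 * (1.0 - p1) + p2 * (1.0 - p2)
        return ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2

    # Hypothetical planning values: detecting a 5-point gain over a 30 percent
    # response rate requires on the order of 1,400 completed cases per condition.
    print(round(n_per_group(0.30, 0.35)))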


In addition to achieving response rates comparable to the current NCVS, we anticipate that the proposed $10 amount will yield comparable, if not lower, costs per case relative to the existing design. Moreover, the $10 promised incentive has not been tested as extensively as a $5 prepaid incentive, and we believe it has the most potential to contribute to our knowledge of how to increase response rates for the self-administered modes and to secure the cooperation of multiple household members over multiple study waves. We are particularly interested in whether the promised incentive works differently in eliciting response via inbound CATI at Wave 1 versus the interviewer-administered modes, and at Wave 2, when respondents are offered the flexibility of an inbound CATI or web survey mode.

1 Due to the complexity of the survey instrument, a mail survey mode will not be implemented in the SCV. Three rounds of cognitive testing of a draft mail survey demonstrated that it was not possible to adapt the current instrument to mail self-administration without inducing measurement error and, possibly, nonresponse. Creating an alternative instrument suitable for mail data collection is beyond the scope of this research study.

2 Currently, 20% of the adult U.S. population is cell-phone only (Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December 2008. National Center for Health Statistics. May 2009. Available from: http://www.cdc.gov/nchs/nhis.htm.)
