Attachment 17: Incentive Justification

Research to support the National Crime Victimization Survey (NCVS)

OMB: 1121-0325


Justification for Use of Incentives

As noted in the memorandum, the study proposes two types of incentives. The first is to provide all sampled households with $2 at the initial survey request. For an RDD survey, prior research has found that an incentive of this size increases response rates, on average, by 5.4 percentage points (Cantor et al., 2007: Table 22.2). A recent experiment testing a $2 incentive for a mail survey found an increase of approximately 10 percentage points (Cantor et al., 2008). Church (1993) reports an effect size of almost 20 percentage points, although with varying incentive amounts. An incentive of this type has also been found to reduce non-response error by bringing in populations that traditionally have low response rates to telephone and mail surveys (Dillman, 1997; Hicks et al., 2008).


If the IVR is to be used as a replacement for a local area survey or as a supplement to more expensive interviewer-based methods (e.g., CATI, CAPI, ACASI), it is important to get a realistic idea of the response rates that can be achieved. As noted above, a small incentive can significantly increase this rate. Within the context of a rotating panel design, for example, increasing the response rate by 10 to 20 percentage points significantly decreases the amount of in-person follow-up that would have to be done, and the resulting savings would more than cover the cost of the incentive (e.g., Link et al., 2001). Similarly, if the IVR is used to generate local area estimates in place of a paper or telephone survey, the savings relative to completing a mail or telephone survey would more than cover the cost of a token incentive used to maximize the response rate.


A second reason to include an incentive is that it will significantly increase the power of the analysis of victimization rates and of comparisons of the demographic distributions. Both analyses rely on the number of completed interviews. For example, if the response rate to the RDD survey is 15% rather than 20%, the number of completed interviews decreases by 25%. Given that it can increase response rates by as much as 20 percentage points, a $2 incentive is an efficient way to maximize statistical power for the analysis, as well as to reduce potential bias in the estimates.
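To make the arithmetic explicit (the notation here is ours, not the memorandum's): for a fixed number of sampled households $n$ and response rate $r$, the expected number of completed interviews is $C = nr$, so

\[
\frac{C_{0.20} - C_{0.15}}{C_{0.20}} \;=\; \frac{0.20\,n - 0.15\,n}{0.20\,n} \;=\; \frac{0.05}{0.20} \;=\; 0.25,
\]

that is, a 25% loss of completed interviews for the same fielded sample.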


The second type of proposed incentive is to promise $10 to those receiving the IVR request in the mail if they complete the survey. The evidence on the effectiveness of this type of incentive for mail surveys is decidedly mixed (Church, 1993), and its extension to a request to complete an IVR survey has not been tested. A perceived barrier to the use of an IVR is getting respondents to call the 800 number. This differs from a mail or RDD survey, where the respondent faces the response task without taking any further action; for a mail survey, for example, the questionnaire is readily available as soon as the package is opened. A promised incentive might give the respondent additional motivation to make the call to take the survey. The proposed experiment would test this hypothesis. If successful, it could prove to be an efficient methodology for getting respondents to use the IVR.



Justification for Amount of Incentive

The research on pre-paid incentives generally finds that small pre-paid incentives of approximately $1 or $2 have significant effects on mail (Church, 1993) and interviewer-administered surveys (Cantor et al., 2008; Singer et al., 1999). The same research finds smaller marginal gains in response rate for pre-paid incentives above $2. For example, Trussell and Lavrakas (2004) found a 13 percentage point increase in response rate between $0 and $2, but only a 6 percentage point increase between $2 and $5. Similar differences between $2 and $5 for telephone surveys were found by Brick et al. (2005) and Cantor et al. (1998). Given the effectiveness of this type of incentive, we are proposing to use a $2 pre-paid incentive for all households included in the sample. This will increase the efficiency of the study by increasing the response rate.
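The diminishing returns are clear from the per-dollar gains implied by the Trussell and Lavrakas (2004) figures (the per-dollar framing is ours):

\[
\frac{13 \text{ points}}{\$2 - \$0} = 6.5 \text{ points per dollar}, \qquad \frac{6 \text{ points}}{\$5 - \$2} = 2.0 \text{ points per dollar},
\]

so the first two dollars buy more than three times the response rate gain per dollar of the next three.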



The research on promised incentives is not as definitive. In a meta-analysis of mail surveys, Church (1993) did not find a significant effect of small ($1, $2) promised incentives. Inconsistent effects have been found for interviewer-mediated surveys. In a meta-analysis covering both telephone and personal interviewing, Singer et al. (1999) found significant effects of promised incentives. In contrast, a review of more recent random digit dial (RDD) surveys did not find consistent effects (Cantor et al., 2008). For example, Singer et al. (2000) found that a $5 promised incentive increased response rates at the screening stage by 7.4%, but a number of other studies have found no effect of a promised incentive for a screening interview (Cantor et al., 2008: Table 22.3). One reason may be that a promise of money at the screener, which is the initial contact with the household, sounds more like a sales pitch than a serious survey.



There has been more success when significantly more money is promised at the request to complete an extended interview with RDD respondents. For example, Strouse and Hall (1997) did not find a significant effect for amounts in the $0 - $10 range, but did find a significant effect for $35.[i] Cantor et al. (2003) report an effect of 9.1 percentage points when offering $20. Other studies have found that amounts of $25 or more are effective at the point of refusal conversion (Fesco, 2001; Olson et al., 2004; Currivan, 2005; Curtin et al., 2005). The greater success at the extended stage might be attributed to the fact that the offer is made once someone in the household has already cooperated by completing the screening interview; consequently, the respondent has developed more trust in the interviewer.



The proposed experiment seeks to add to the research literature above by testing a promised incentive for an IVR survey, as implemented in the NCVS mode study. A promised incentive may work differently for the IVR than in a telephone interview, since the appeal will be made by mail. The letter will be on official Department of Justice stationery and should be credible in the respondent's eyes. We propose that the experiment have two conditions, $0 and $10. The $10 level was selected in light of prior studies, in which the promised incentives that produced significant effects were at least $5, with most being $15 or more.



References

Brick, J. Michael, Jill Montaquila, Mary Collins Hagedorn, Shelley Brock Roth, and Christopher Chapman. 2005. “Implications for RDD Design from an Incentive Experiment”. Journal of Official Statistics, forthcoming.


Cantor, D., H. Schiffrin, I. Parke, and B. Hesse. 2006. “An Experiment Testing a Promised Incentive for a Random Digit Dial Survey”. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, May 16-18, Montreal, Canada.


Cantor, D., B. O’Hare, and K. O’Connor. 2007. “The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys”. Pp. 471-498 in J. M. Lepkowski, C. Tucker, J. M. Brick, E. de Leeuw, L. Japec, P. J. Lavrakas, M. W. Link, and R. L. Sangster (eds.), Advances in Telephone Survey Methodology. New York: John Wiley and Sons.


Cantor, David, Kevin Wang, and Natalie Abi-Habib. 2003. “Comparing Promised and Pre-Paid Incentives for an Extended Interview on a Random Digit Dial Survey”. Proceedings of the American Statistical Association, Survey Research Section.


Church, Allan H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis”. Public Opinion Quarterly 57:62-79.


Currivan, Doug. 2005. “The Impact of Providing Incentives to Initial Telephone Survey Refusers on Sample Composition and Data Quality”. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Miami Beach, FL.


Olson, Lorayn, Martin Frankel, Kathleen S. O'Connor, Stephen J. Blumberg, Michael Kogan, and Sergei Rodkin. 2004. “A Promise or a Partial Payment: The Successful Use of Incentives in an RDD Survey”. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Phoenix, AZ.


Singer, Eleanor, John Van Hoewyk, Nancy Gebler, Trivellore Raghunathan, and Katherine McGonagle. 1999. “The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys”. Journal of Official Statistics 15:217-230.


Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. 2000. “Experiments with Incentives in Telephone Surveys”. Public Opinion Quarterly 64:171-188.


Strouse, Richard C., and John W. Hall. 1997. “Incentives in Population Based Health Surveys”. Proceedings of the American Statistical Association, Survey Research Section: 952-957.



Trussell, N. and P. Lavrakas. 2004. “The Influence of Incremental Increases in Token Cash Incentives on Mail Survey Response: Is There an Optimal Amount?” Public Opinion Quarterly 68(3):349-367.

[i] See also Cantor et al. (2006), who did not find a significant effect for a promised incentive of $15 for an RDD health survey.
