Evaluation of the Food and Drug Administration’s Point-of-Sale Campaign

OMB: 0910-0851



Attachment 11: Justification of Incentives


Incentives for the Point of Sale Intervention for Tobacco Evaluation (POSITEv)


This section describes our plan for respondent incentives. First, we provide background information about the Point of Sale Intervention for Tobacco Evaluation (POSITEv) and the benefits of using incentives. Next, we briefly summarize the literature on the use of incentives, highlighting a number of federal surveys that offer incentives to respondents. We then discuss the prepaid ($2) and promised incentives for this study, followed by the rationale for selecting each amount. The promised incentives include $25 for each of the Wave 1, 2, 3, and 4 outcome evaluation questionnaires, an additional $5 for completing the Wave 2, 3, and 4 questionnaires online by a specified date, and $5 for each of the three smartphone app-based questionnaires.
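
For reference, and assuming for illustration only that a participant is eligible for and completes every instrument and meets every early bird deadline, the maximum incentive a single participant could receive over the course of the study would be $2 (prepaid, mail screener) + 4 × $25 (Wave 1–4 questionnaires) + 3 × $5 (early bird bonuses, Waves 2–4) + 3 × $5 (app-based questionnaires) = $2 + $100 + $15 + $15 = $132.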


Background


POSITEv will rely on a mail screener, field screener, in-person and online outcome evaluation questionnaires, and smartphone app-based questionnaires during data collection. Regardless of mode of administration, survey response rates are declining in the U.S. This challenge leads us to explore incentives to increase respondent participation.


Use of Incentives


The mechanisms that evoke higher participation when incentives are used are unclear. Two competing theories suggest that incentives may be construed as either a token of appreciation (social exchange theory) or compensation for one’s time and effort (economic exchange theory). Which mechanism is dominant would likely affect cooperation in panel surveys because the decision to participate in the first wave of the survey is, to a certain extent, a commitment to take part in following waves.


Longitudinal surveys often use incentives to build initial rapport with panel respondents because participation in the first wave usually sets the retention rate for the life of the panel, and the first-wave experience is likely to be the most influential factor in future decisions to participate; for these reasons, sizable incentives in the first wave of data collection are often recommended (Singer et al., 1998). For example, in an incentive experiment on Wave 1 of the 1996 Survey of Income and Program Participation (SIPP, U.S. Census Bureau), James (1997) found that a $20 prepaid incentive resulted in significantly higher response rates in Waves 1-3 compared to both the $10 prepaid and the $0 conditions. Mack et al. (1998), examining cumulative response through Wave 6 of the SIPP, found that a $20 incentive improved household, person, and item (gross wages) response rates in the initial interview and that cumulative household response rates remained significantly higher at Wave 6 (75.2 percent in the $20 group vs. 73.3 percent in the $10 group and 72.4 percent in the $0 group), even though no further incentive payments were made.


In addition, there seems to be no evidence that incentives create expectations in subsequent waves of data collection. For example, research on the Health and Retirement Survey (HRS) suggests that respondents who are paid a refusal conversion incentive during one wave do not refuse at a higher rate than other converted refusers when re-interviewed during the next wave (Lengacher et al., 1995). Similarly, Singer et al. (1998) found that respondents in the Survey of Consumer Attitudes who received a monetary incentive in the past were more likely to participate in a subsequent survey, despite receiving no further payments.


A common argument against the use of incentives is their cost. Yet incentives can reduce the cost per case by reducing the interviewer follow-up needed for sample members who do not respond. Such evidence is provided by the incentive experiments conducted for the National Survey on Drug Use and Health (NSDUH, Substance Abuse and Mental Health Services Administration): cost per interview in the $20 group was 5 percent lower than in the control (no incentive) group, and cost per interview in the $40 group was 4 percent lower than in the control group. The cost savings resulted from interviewers spending less time trying to obtain cooperation from respondents (Kennet et al., 2005), which reduced interviewer labor and travel costs (mileage, tolls, parking, etc.). Similar results were found in an incentive experiment conducted for the National Survey of Family Growth (NSFG, National Center for Health Statistics) Cycle 5 Pretest, which examined $0, $20, and $40 incentive amounts. As in the NSDUH experiments, the additional incentive costs were more than offset by savings in interviewer labor and travel costs (Duffer et al., 1994).


In addition to NSDUH and NSFG, many other federally sponsored surveys offer incentives to gain cooperation. For example, the National Health and Nutrition Examination Survey (NHANES, National Center for Health Statistics) offers respondents up to $125, depending on the number of survey sections and exams that are completed. The National Survey of Adoptive Parents of Children with Special Health Care Needs (Department of Health and Human Services) offers parents $25 for participation in a 35-minute telephone survey. To improve response rates, reduce the number of contacts required to gain cooperation, and address respondent concerns about interview burden, the National Survey of Child and Adolescent Well-Being (NSCAW, Administration for Children and Families) in 2002 doubled the incentive offered to respondents from $25 to $50. The Early Childhood Longitudinal Study-Birth Cohort (ECLS-B, U.S. Department of Education) offered parent participants $50 and a children’s book for the first wave and $30 and a children’s book for subsequent waves of data collection. Over rounds 1 through 10 of the National Longitudinal Survey of Youth 1997 (NLSY97, Bureau of Labor Statistics), incentives offered to respondents ranged from $10 to $50 to minimize attrition across waves of data collection. The National Immunization Survey (NIS, National Center for Immunization and Respiratory Diseases) offers a combination of $5 prepaid and $10 promised incentives to encourage eligible nonrespondents to participate.


As noted earlier, the U.S. Census Bureau has also experimented with and begun offering incentives for several of its longitudinal panel surveys, including SIPP and the Survey of Program Dynamics (SPD). SIPP has conducted several multi-wave incentive studies, most recently with its 2008 panel, comparing results for $10, $20, and $40 incentive amounts to those of a $0 control group. These studies have examined response rate outcomes in various subgroups of interest (e.g., by poverty stratum), the use of targeted incentives for non-interview cases, and the impact of base wave incentives on participation in later waves of data collection. Overall, the results suggest that $20 incentives increase response rates and also improve the conversion rate for non-interview cases. Incentives may also have an additional impact on response rates for households in the poverty stratum and significantly reduce item nonresponse rates (Creighton et al., 2007). Similarly, SPD has conducted four incentive studies in an effort to increase cooperation among poverty households and nonrespondents and to minimize attrition in subsequent waves of the study. Incentives had a positive impact on both response and attrition rates; the average interview rate increased substantially with the use of incentives (Creighton et al., 2007).


Incentives are also important for studies that employ smartphone apps because of the burden to the participant of installing an additional app, which takes up storage space, uses cellular data when Wi-Fi is not available, and may affect battery life.


Prepaid vs. Promised Incentives


Studies in the survey literature predominantly find prepaid incentives to be more effective than promised incentives (e.g., Linsky, 1975, and Armstrong, 1975, for overviews; Church, 1993). This study will offer a prepaid incentive ($2) with the mail screener to encourage participants to return the questionnaire. Completion of the initial screener by mail will reduce interviewer burden because it reduces the number of field screenings that interviewers need to complete.


The Wave 1 outcome evaluation questionnaire (which occurs in person) and the Wave 2, 3, and 4 evaluation questionnaires (which occur online or in person) will employ a promised incentive ($25) at completion of each questionnaire. Participants who complete the Wave 2, 3, and 4 questionnaires online by a specified date will receive a slightly larger promised incentive at completion ($30) than those who do not complete the questionnaire by this date. The three smartphone app-based questionnaires will also employ a promised incentive ($5) at completion of each questionnaire. Consistent with other studies, Yu and Cooper (1983) found that promised incentives significantly improved response rates. Promised incentives are fairly common at the refusal conversion stage, and a number of studies have reported gains in response rates from offering relatively large amounts of money ($25 or greater) at the end of the data collection period (Curtin et al., 2005).


The decision to use prepaid or promised incentives was determined by the mode of data collection. The prepaid incentive is intended to facilitate return of the mail screener, while the promised incentives for the Wave 1, 2, 3, and 4 evaluation questionnaires are designed to encourage cooperation with, and completion of, these longer questionnaires. The additional promised “early bird” incentive for the Wave 2, 3, and 4 evaluation questionnaires is intended to reduce the cost of in-person data collection and to speed data collection. The promised incentives for the three app-based questionnaires are likewise designed to encourage cooperation and completion.


Incentive Amount


In theory, incentives aid participation in two ways. First, they can be conceived of as a “token” that elicits social exchange between the sample member and the surveyor, each side doing something good for the other party without regard to economic value. Small prepaid incentives can be viewed as merely invoking good will under social exchange theory. Some argue that what is achieved is not social exchange but rather establishing the legitimacy of the survey request; even so, it is not the amount that is central. Second, while larger incentive amounts can also be seen as invoking social exchange, they can directly motivate sample members to participate by providing a direct benefit to the respondent in exchange for the time and burden of answering the survey questions. It is possible that too small an amount for a promised incentive will not help to increase participation, as it can be seen as placing too little value on the respondent’s time. Although the cognitive mechanisms by which incentives influence survey participation are not well understood, other fields have proposed theories and empirical evidence showing that small monetary incentives intended to provide extrinsic motivation can “backfire” and that the incentive needs to be larger for it to work (e.g., Gneezy and Rustichini, 2000; Gneezy, 2003; Pouliakas, 2010). A critical risk in experiments is the dosage of the manipulation (in this case, the incentive amount); being too conservative can jeopardize the success of the entire experiment. Regarding the Wave 1 through 4 questionnaires, the concern is that too small a promised incentive may fall below the threshold that many respondents expect in exchange for an approximately 30- to 40-minute evaluation questionnaire.


The choice of an incentive amount largely depends on the survey burden (including the questionnaire length and any other tasks required of the respondent), the survey topic, and whether the incentive is promised or prepaid. Promised incentives tend to be larger than prepaid incentives; Strouse and Hall (1997) recommend that, to be successful, promised incentives should be in the $15-$35 range. As noted above, a number of federally funded surveys, including the NSDUH and the NSFG, currently provide incentives. For example, interviewers in the NSDUH currently offer $30 (for an interview that averages 60 minutes), and interviewers in the NSFG offer $40 (for interviews that last about 60 minutes for males and 80 minutes for females). Incentives on the NHANES range from $20 to $100 depending on the survey and physical exam components in which respondents choose to participate.


In addition to payment method (prepaid or promised), careful consideration has been given to the incentive amount to be tested in this research.


$2 for mail screener:

The estimated respondent burden for the screener is 10 minutes. The use of a $2 prepaid incentive for the mail questionnaire is based on both an examination of the survey literature and results from several recent studies that analyzed the effectiveness of small prepaid and promised incentives in increasing participation in screening and topical surveys. The National Household Education Survey (NHES, U.S. Department of Education) tested $2 and $5 prepaid incentives for its mail screener, which was expected to take 2-8 minutes to complete depending on the questionnaire version used. The study did not find significantly higher response rates for the larger prepaid incentive amount ($5 vs. $2) (Tubman and Williams, 2010). Given these findings and the small estimated burden of the mail screener, we believe the $2 prepaid incentive is appropriate. The National Survey of Early Care and Education (NSECE, Administration for Children and Families) found that a $2 prepaid incentive outperformed a $1 prepaid incentive in its field test. Although plans for the main study are still being finalized, ACF staff have indicated that it will seek to offer a $2 prepaid incentive with the mailed screener. In addition, FDA’s Rural Smokeless Tobacco Education Campaign (RuSTEC) sent a $2 prepaid incentive with the mail screener and obtained a very high response rate of 26%.


$25 each for the Wave 1, 2, 3, and 4 outcome evaluation questionnaires:

Many federally sponsored surveys offer incentives to gain cooperation, and these incentives range from $15 to $125, depending on respondent burden. The National Survey of Adoptive Parents of Children with Special Health Care Needs (Department of Health and Human Services) offers parents $25 for participation in a 35-minute telephone survey, approximately the same length of time required for the POSITEv interviews. Likewise, the Evaluation of the Fresh Empire Campaign on Tobacco (EFECT, Food and Drug Administration) employed a $25 promised incentive for 30- and 45-minute in-person or web surveys. RESPECT, a tobacco control education campaign aimed at discouraging tobacco use among lesbian, gay, bisexual, and transgender tobacco users, found that $25 was an effective amount for in-person interviews. In a household-based survey of adults that focused on travel behavior, Giaimo et al. (2010) found that a $25 incentive was effective for reaching difficult-to-access households, such as low-income households.


$5 each “early bird” incentive for online completion of the Wave 2, 3, and 4 questionnaires:

ExPECTT, the evaluation of FDA’s public education campaign on tobacco use among youth (The Real Cost), promised an additional $5 incentive to participants who completed the questionnaire online before the specified “early bird” date. Study staff found that this was an extremely effective method of facilitating timely data collection and promoting online completion of the questionnaires, which significantly reduced data collection costs. By removing the need for an interviewer to visit the household, this practice avoided the costs of field staff labor and travel to the households. It also reduced participant burden by not requiring participants to complete the questionnaire on a specific day and time scheduled with an interviewer; in other words, online completion of the questionnaire allows the participant greater flexibility. This method has also been used for the ExPECTT, RESPECT, and RuSTEC campaigns. For follow-up 1 of RuSTEC, 79% of completes occurred during the early bird period. Although data collection for RuSTEC follow-up 2 is still underway, 67% of its web responses to date have occurred during the early bird period. For the ExPECTT campaign, 85% of follow-up 1 participants, 81% of follow-up 2 participants, 79% of follow-up 3 participants, and 80% of follow-up 4 participants completed the questionnaire during the early bird period.


Several studies have shown that early bird incentives can improve response rates. In one study, individuals who received an early bird incentive were 1.8 times more likely to complete the survey within the first 7 days of data collection and 1.69 times more likely to ever complete the survey (LeClere, Plummer, Vanicek, Amaya & Carris, 2012). Another study showed that an early bird incentive significantly increased the response rate in the first two weeks of data collection (29.7% vs. 20.1%) (Coopersmith, Vogel, Bruursema & Feeney, 2016). Biemer et al. (2017) used an experimental design to test the effectiveness of incentivizing mode choice: the researchers offered web and mail modes at the same time, individuals in the experimental group were offered an additional $10 if they completed the survey via web, and the control group was offered the standard incentives. Incentivizing mode choice increased the overall response rate by 3.98 percentage points (42.78% vs. 38.80% for the experimental and control groups, respectively) and increased the proportion of respondents who completed via web (from 28% to 64%).


$5 each for 3 app-based questionnaires:

The estimated response burden for each of the app-based questionnaires is 5 minutes. According to Erica Dent, Marketing Manager for Kinesis Survey Technologies (Dent, 2013), although short online surveys may not necessitate an incentive, smartphone apps require a greater commitment from the participant to maintain the app long term, such as by installing updates. As a result, incentives may be required to encourage compliance. Naughton et al. (2016) found that common participant barriers to using a smartphone app as part of a research study included not having the phone on hand, not wanting to appear rude around others, and not feeling motivated to engage with the app. Although poor acceptability and compliance have been identified as potential barriers to mobile phone-based data collection methods, participant compensation has been identified as one way to increase participation and compliance (Wray et al., 2014). The incentive amounts used in such studies have varied widely, from no compensation to as much as $120. We estimate that a $5 incentive (provided as electronic points that can be redeemed from online vendors) per 5-minute questionnaire is a reasonable amount to compensate participants for their time. The total amount, $15 over the course of data collection, is also designed to compensate participants for their time and for any potential data usage fees incurred by the smartphone app. However, we have instructed participants to confirm that they have a data plan before downloading the app; standard data plans should be sufficient to cover the app’s monthly data use, which is estimated at 150MB. The separate consent process for the app-based portion of the study outlines this potential cost of study participation (as well as the potential effect on battery life) and the fact that participants can choose to delete the app at any time.
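
As a point of reference, and purely as an illustrative calculation based on the figures above, the $5 incentive for an estimated 5-minute questionnaire corresponds to roughly $60 per hour of respondent time, and the three app-based questionnaires together represent about 15 minutes of respondent time compensated at 3 × $5 = $15.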

References


Armstrong, J.S. (1975). Monetary incentives in mail surveys. The Public Opinion Quarterly, 39(1), 111-116.


Church, A.H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57(1), 62-79.


Creighton, K. P., King, K. E., & Martin, E. A. (2007). The use of monetary incentives in Census Bureau longitudinal surveys. Survey Methodology, 2.


Curtin, R., Presser, S., Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69(1): 87-98.


Dent, E. (2013). Smart incentives for smartphones. http://www.kinesissurvey.com/smart-incentives-for-smartphones/ (accessed 6 Dec 2016).


Duffer, A., Lessler, J., Weeks, M., & Mosher, W. (1994). Effects of incentive payments on response rates and field costs in a pretest of a national CAPI survey. Paper presented at the 49th annual conference of the American Association for Public Opinion Research, Danvers, MA.


Giaimo, G., Anderson, R., Wargelin, L., Stopher, P. (2010). Will it work? Pilot results from first large-scale global positioning system-based household travel survey in the United States. Journal of the Transportation Research Board, 2176, 26-34.


Gneezy, U., and Rustichini, A. (2000). Pay enough or don’t pay at all. Quarterly Journal of Economics, August, 791-810.


Gneezy, U. (2003). The W effect of incentives. The University of Chicago Graduate School of Business, October, 1-40.


James, D. E. (1997). Environmental Incentives: Australian Experience with Economic Instruments for Environmental Management: Consultancy Report. Community Information Unit, Department of the Environment, Sport and Territories.


Kennet, J., Gfroerer, J., Bowman, K. R., Martin, P. C., & Cunningham, D. B. (2005). Introduction of an incentive and its effects on response rates and costs in NSDUH. In Kennet, J., & Gfroerer, J. (Eds.), Evaluating and improving methods used in the National Survey on Drug Abuse (DHHS Publication No. SMA 05-4044, Methodology Series M-5). Rockville MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.


Lengacher, J.E., Sullivan, C.M., Couper, M.P., & Groves, R.M. (1995). Once reluctant, always reluctant? Effects of differential incentives on later survey participation in a longitudinal study. Survey Research Center, University of Michigan.


Linsky, A.S. (1975). Stimulating responses to mailed questionnaires: A review. Public Opinion Quarterly, 39(1), 82-101.


Mack, S., Huggins, V., Keathley, D., & Sundukchi, M. (1998). Do monetary incentives improve response rates in the Survey of Income and Program Participation? Proceedings of the American Statistical Association, Survey Research Methods Section, pp. 529-534.


Naughton, F., Hopewell, S., Lathia, N., Schalbroeck, R., Brown, C., Mascolo, C., McEwen, A., Sutton, S. (2016). A context-sensing mobile phone app (Q Sense) for smoking cessation: A mixed-methods study. JMIR Mhealth Uhealth, 4(3): e106.


Pouliakas, K. (2010). Pay enough, don’t pay too much or don’t pay at all? The impact of bonus intensity on job satisfaction. Kyklos, 63(4): 597-626.


Singer, E., Van Hoewyk, J., & Maher, P. (1998). Does the payment of incentives create expectation effects? Public Opinion Quarterly, 62(2): 152-164.


Strouse, R., and Hall, J. (1997). Incentives in population based health surveys. In Handouts from paper presented at the Annual meeting of the American Association for Public Opinion Research, Norfolk, VA.


Wray, T.B., Merrill, J.E., & Monti, P.M. (2014). Using ecological momentary assessment (EMA) to assess situation-level predictors of alcohol use and alcohol-related consequences. Alcohol Research, 36(1): 19-27.


Yu, J., and Cooper, H. (1983). A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research, 20(1): 36-44.
