Opportunity Youth Evaluation Bundling Study

OMB: 3045-0174

Additional Justification for Incentives: Opportunity Youth Evaluation Bundling

7.6.15



This document responds to the following comment on the proposed incentives:

“Incentives being offered (pre and post of 10/20 in gift cards) have not been found to be effective in increasing rates substantially. Unless there is better justification, I am recommending taking these out.”


The opportunity youth targeted by this study are a hard-to-reach, and consequently hard-to-survey, population. Hard-to-reach populations are defined as individuals who are difficult to reach because of their physical, geographic, social, or economic isolation (Shaghaghi, Bhopal, & Sheikh, 2011). They are often marginalized, disengaged, or disconnected from mainstream services (Flanagan & Hancock, 2010); examples include racial/ethnic minorities, the homeless, immigrants, and those from rural or isolated communities. Our study population includes both AmeriCorps members and unserved comparison youth who are low-income, minority, and typically disengaged from both school and work. English may be their second language, and they typically do not have a high school diploma. Based on these characteristics, and on conversations with organizations that work with these youth, we anticipate that reaching a representative sample of this population will be challenging, as will encouraging respondents to remain in the study over the 18 months of data collection.


The literature has shown that incentives are an effective means of increasing response rates (Brick, Montaquila, Hagedorn, Roth, & Chapman, 2005; Church, 1993; Edwards et al., 2002; James & Bolstein, 1990; Singer, Van Hoewyk, & Maher, 1998, 2000; Singer & Ye, 2013; Yammarino, Skinner, & Childers, 1991). Shettle and Mooney (1999) specifically tested the effectiveness of incentives in U.S. government-sponsored surveys and concluded that “incentives can provide a cost-effective survey tool for use in government surveys when moderately high response rates are needed.” Though, as noted, response rate increases may not be substantial for all populations, the literature does show that incentives increase sample size and, when targeted to sample members who would otherwise fail to respond, improve the precision of the estimates without changing the response distribution, thereby reducing nonresponse bias.


Several studies support the use of small monetary incentives to increase response rates among minority and low-income respondents who would otherwise be difficult to reach, such as those in this study. One study aimed at reducing disparities in survey response to the Ohio Pregnancy Risk Assessment Monitoring System (PRAMS) among African-American women found that inclusion of a $10 gift card significantly increased participation in this population (Liu & Geidenberger, 2011). A randomized controlled trial of incentive effects on survey response among a similar population of African-American mothers in the Wisconsin PRAMS found a response rate of 42.5% among respondents who received a $5 cash incentive, compared to 30.1% among respondents who received no incentive (Dykema et al., 2012). Dykema and colleagues also tested the cost-effectiveness of a cash incentive: while total variable costs increased with an incentive, the cost per completed survey was lower for the group that received the cash incentive than for the groups that did not. The authors further reported that the incentive was effective in drawing in members of underrepresented groups, thus reducing nonresponse bias. Monetary incentives can be effective in recruiting low-income and minority respondents in panel studies (Singer & Ye, 2013), and incentives may increase the likelihood of response among those least likely to respond, particularly by bringing in a larger percentage of less educated respondents (Petrolia & Bhattacharjee, 2009). These characteristics are particularly descriptive of the study population we are trying to engage.


In a systematic review of strategies to improve health research with socially disadvantaged groups, Bonevski et al. (2014) note that incentives consistently appear as a key component of community-based recruitment of study participants, helping to increase buy-in from community members whose participation in the study is crucial (e.g., comparison group members). The review also found that cash incentives were more effective than non-cash incentives, and that vouchers for widely accessible retailers were preferred; both of these strategies will be employed in our use of incentives.

CNCS does not have a direct relationship with the comparison group members. These individuals will, at most, have had limited contact with the AmeriCorps program during the recruitment process and will have no connection with it after that. They receive no other contact or benefit from AmeriCorps, have no reason to share information with the study, and have limited resources on which to draw in order to participate. They are not likely to be inclined to respond to the survey, and the possibility of receiving an incentive may increase the probability that they will participate in the study. A higher participation rate will increase the sample size and allow for statistically reliable comparison of the two groups.


CNCS Research and Evaluation and JBS International (the external evaluator for the project) asked the Opportunity Youth grantees enrolled in the study about their key concerns in making the study work and their ideas for fostering success. This group worked closely with CNCS and JBS in designing the study and tailoring the data collection to their members. Their chief concerns were recruiting and retaining comparison group members and keeping respondents engaged at follow-up. Representatives from each of the nine participating programs voiced strong concern throughout the project about successfully recruiting comparison group members if no incentive is offered. They also indicated that some acknowledgement of the time these youth spend completing the survey would be fair, since comparison group members and, at follow-up, AmeriCorps alumni receive no benefit from the program. If comparison group members cannot be recruited through over-recruitment by the AmeriCorps program, they may also need to be recruited directly from other community organizations. During our study planning calls, representatives of the programs being evaluated, who work closely with these organizations, noted that these community organizations may be disinclined to assist with recruiting disengaged youth from their own service population if these youth are not offered some form of compensation.


Members who successfully complete their term of service with their AmeriCorps program can be challenging to locate and survey six months after exit. Representatives of the nine programs participating in this study, and four additional AmeriCorps Opportunity Youth program leads in a pilot planning group convened from 2012 to 2013, commented on the difficulty of following up with opportunity youth in general, including their own alumni. This is due both to mobility and to changing life circumstances (e.g., moving, going to school, getting a job) following program exit. The longitudinal nature of this study means that both treatment and comparison group members will be disconnected from CNCS and the program in which they served at some point in the study’s follow-up cycle. Incentives may help increase the response rate of those least connected and therefore least likely to respond (Singer & Ye, 2013). Although these individuals can be tracked down with persistence and multiple follow-up attempts by both the study team and program staff, follow-up costs would likely be reduced if incentives encourage participants to reply to survey follow-up phone calls and other communications with fewer attempts.


In longitudinal data collection, the use of incentives has been shown to be cost-effective because of the savings realized by reducing the costs of following up with non-respondents across waves of data collection (Rodgers, 2011). Although the effect of incentives varies with their type, amount, and timing, the overall recommendation is that an incentive is an effective means of increasing response rates and reducing nonresponse bias.


The experience of CNCS and of individual CNCS programs in attempting to maintain contact with this group of respondents, both in day-to-day programming and in earlier evaluations, is that these individuals are not motivated to respond, particularly once their engagement with AmeriCorps has ended. The incentive of $10 for each pre- and post-survey and $20 for each follow-up survey would offer these respondents some motivation, beyond goodwill, to maintain contact with the study and respond to multiple rounds of surveys.



References:


Brick, J. M., Montaquila, J., Hagedorn, M. C., Roth, S. B., & Chapman, C. (2005). Implications for RDD design from an incentive experiment. Journal of Official Statistics, 21(4), 571-589.


Bonevski, B., Randell, M., Paul, C., Chapman, K., Twyman, L., Bryant, J., & Hughes, C. (2014). Reaching the hard-to-reach: A systematic review of strategies for improving health and medical research with socially disadvantaged groups. BMC Medical Research Methodology, 14(1), 42.


Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57(1), 62-79.


Dykema, J., Stevenson, J., Kniss, C., Kvale, K., González, K., & Cautley, E. (2012). Use of monetary and nonmonetary incentives to increase response rates among African Americans in the Wisconsin pregnancy risk assessment monitoring system. Maternal and Child Health Journal, 16(4), 785-791.


Edwards, P., Roberts, I., Clarke, M., DiGuiseppi, C., Pratap, S., Wentz, R., & Kwan, I. (2002). Increasing response rates to postal questionnaires: systematic review. BMJ, 324(7347), 1183–91.


Flanagan, S. M., & Hancock, B. (2010). 'Reaching the hard to reach': Lessons learned from the VCS (voluntary and community sector). A qualitative study. BMC Health Services Research, 10(1), 92.


James, J. M., & Bolstein, R. (1990). The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly, 54(3), 346-361.


Liu, S. T., & Geidenberger, C. (2011). Comparing incentives to increase response rates among African Americans in the Ohio pregnancy risk assessment monitoring system. Maternal and Child Health Journal, 15(4), 527-533.


Petrolia, D. R., & Bhattacharjee, S. (2009). Revisiting incentive effects: Evidence from a random sample mail survey on consumer preferences for fuel ethanol. Public Opinion Quarterly, 73(3), 537-550.


Rodgers, W. (2011). Effects of increasing the incentive size in a longitudinal survey. Journal of Official Statistics, 27(2), 279-299.


Shaghaghi, A., Bhopal, R. S., & Sheikh, A. (2011). Approaches to recruiting ‘hard-to-reach’ populations into research: A review of the literature. Health Promotion Perspectives, 1(2), 86.


Shettle, C., & Mooney, G. (1999). Monetary incentives in US government surveys. Journal of Official Statistics, 15(2), 231–50.


Singer, E., Van Hoewyk, J., & Maher, M. P. (1998). Does the payment of incentives create expectation effects? Public Opinion Quarterly, 62(2), 152-164.


Singer, E., Van Hoewyk, J., & Maher, M. P. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly, 64(2), 171-188.


Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 112-141.


Yammarino, F. J., Skinner, S. J., & Childers, T. L. (1991). Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly, 55(4), 613-639.
