MEMORANDUM
TO: Lynn Murray
Clearance Officer
Justice Management Division
THROUGH: James P. Lynch
Director
FROM: Shannan Catalano
Statistician, Project Manager
DATE: September 28, 2011
SUBJECT: BJS Request for OMB Clearance for Field Testing under the National Crime Victimization Survey (NCVS) Redesign Generic Clearance, OMB Number 1121-0325.
The Bureau of Justice Statistics requests clearance for field test tasks under the OMB generic clearance agreement (OMB Number 1121-0325) for activities related to the National Crime Victimization Survey Redesign Research program. BJS, in consultation with Westat under cooperative agreement (Award 2008-BJ-CX-K066, National Crime Victimization Survey Mode Research), has planned a field test of an Interactive Voice Response (IVR) system as a low-cost alternative mode of data collection.
Purpose of the Research
This field test, to be carried out by Westat, supports the NCVS program by exploring survey methods that increase survey participation while keeping costs affordable. The objective is to examine the use of IVR as a complementary mode of data collection to the interviewer-based methods currently used in the NCVS. IVR has the potential to collect better information on the more sensitive items and to offer a less expensive mode of collection that might be applied to the core NCVS or to a companion survey designed to obtain city- or state-level estimates of victimization. If the methodology proves feasible, this could have a significant effect on the resources available for other components of the NCVS.
The following questions will be addressed by this research:
1. Can the NCVS questionnaire be adapted for IVR administration?
2. What are the response rates with IVR and how do these rates vary by the mode used to initially contact sample respondents— mail or telephone?
3. Is it possible to effectively encourage sampled households to complete the interview when the initial contact is by mail?
4. Are there differences in respondent acceptance between speech IVR and touchtone data entry (TDE) IVR?
5. Does IVR lead to different victimization rates from a telephone interview?
6. Is there a difference in victimization rates for the speech and TDE modes of entry?
Burden Hours for Field Test
Burden for this study consists of completing the NCVS interview. There will be two types of interviews. The CATI contact will first involve asking a representative of the household to identify the person with the next birthday. We estimate this will take approximately 3 minutes, which includes screening the eligibility of the household and of the person answering the phone (e.g., confirming it is not a business; identifying a person 18 or older who lives in the household; selecting the person with the next birthday). We estimate that the actual interview will take, on average, 20 minutes; this figure is based on timings from the NCVS. This results in a total of 23 minutes for households contacted by phone. Those contacted by letter who call into the NCVS will experience only the NCVS interview (20 minutes). We project 600 completed interviews from the telephone contact and 3,000 from the mail contact. This results in 230 hours for the former [(600 x 23)/60 = 230] and 1,000 hours for the latter [(3,000 x 20)/60 = 1,000], yielding a total of 1,230 hours of burden. The NCVS generic clearance allocated a predetermined combination of sample cases and burden hours that could be used for NCVS redesign efforts. The current sample size and burden hours for the field test fall within the remaining allocation.
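For reference, the burden arithmetic above can be reproduced with a short calculation. The sketch below (illustrative Python, not part of the study materials) uses only the completion counts and per-interview timings stated in this section.

```python
# Illustrative check of the burden-hour arithmetic described above.
# Figures (completed interviews and minutes per interview) come from this section.

def burden_hours(completes, minutes_per_interview):
    """Total respondent burden in hours for a group of completed interviews."""
    return completes * minutes_per_interview / 60

phone_hours = burden_hours(600, 23)    # 3-minute screener plus 20-minute interview
mail_hours = burden_hours(3000, 20)    # interview only for mail-contact call-ins

print(phone_hours, mail_hours, phone_hours + mail_hours)  # 230.0 1000.0 1230.0
```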
Usability and Cognitive Testing
Usability and cognitive testing was conducted to refine the IVR instruments prior to the field test. A summary of the main findings is presented below; the full report is presented in Attachment 16. A total of 20 respondents went through the usability tests for the screener: 10 used the TDE response mode and 10 used the voice response mode. All respondents were recruited by advertising on Craigslist and in the local paper for people who had experienced a victimization in the last 12 months. Respondents were asked to complete the IVR twice. The first time, they went through the instrument as they would when responding to the survey. The second time, respondents were asked, midway through, to use the ‘help’ function. Respondents were debriefed immediately after each pass through the instrument. Major findings from the testing include—
1. The average time to complete the screener (first time through) was generally between 8 and 12 minutes. This covered all parts of the instrument, including the demographic questions that are asked prior to the NCVS crime screener questions.
2. The crime screener generally worked well. The overall response was positive. Respondents thought it was understandable and could get through it without major issues.
3. Respondents did listen to all parts of the screener questions. There was some fear that respondents would get impatient when the longer crime screener questions were administered and prematurely answer ‘no’ in the middle of the cues. Interrupting the question would prevent respondents from hearing all of the cues. This did not happen during this round of usability testing. All respondents listened to the entire set of cues for each crime screener item.
4. Once a respondent reports a ‘yes’ to a crime screener item, the subsequent items are prefaced by a phrase to “exclude incidents already reported.” However, respondents generally ignored this instruction. They interpreted the cues at each screener item broadly and reported the crime whenever the screener seemed to touch on their incident. This led to over-reporting of incidents.
Recommended solution: Modify the introduction to address perceived redundancy, explaining it as something that will improve the information collected on the survey. This modification would also add a statement to report victimizations only once.
5. Respondents tended to time out when asked to provide a verbal description of the incident (which is recorded when there is a ‘yes’ response to a screener item). The major reason was that the time allotted in the IVR program to record the description was too short (10 seconds). A second reason is that respondents were not initially prepared to describe the incident. At that point in the interview they had become accustomed to answering with set responses (‘yes’, ‘no’). The sudden request to actually speak did not allow them time to gather their thoughts on how they would describe the incident.
Recommended solution: Increase the current time limit to 30 seconds to allow more time for respondents to offer descriptions of their victimization. In addition, add in text for this question to provide respondents some time to prepare how they want to describe the incident:
“Please briefly describe the incident in your own words. Please give a brief description of what happened, including any details such as where it happened, when it happened, who was involved or any other details that might be important to you.”
6. Respondents did not back up during the interview or ask for help. Since respondents did not generally encounter problems, there were very few situations where they needed help. Nonetheless, when asked to find the help function (during the second time through the instrument), only one respondent knew how.
Recommended solution: We do not recommend making any changes to the placement or wording of the help function at this time. We will monitor its use during Phase 2 to collect more data on whether the instruction needs to be changed or provided more frequently to users.
Use of Incentives in the Field Test
This project incorporates a test of the potential benefit of incentives. Historically, self-administered modes of data collection have yielded lower response rates than interviewer-administered modes. The IVR research tests whether nominal incentives increase respondents’ willingness to use the IVR in a self-administered interview. The inclusion of incentives is important for this project because they are expected to increase participation and therefore the power of any statistical analyses of the results.
The use of incentives represents a significant departure from current administration of the NCVS. As such, BJS held discussions with OMB regarding the study and the rationale for including nominal payments to respondents. In 2009 OMB indicated that including incentives in self-administered modes was acceptable provided that experimental conditions were clearly defined and that sample sizes would support evaluation of responses across modes. The Westat IVR research meets these criteria. Attachment 17 presents a detailed justification for the use and amount of the proposed incentives in this research.
Our intent in Attachment 17 of the submission is to address our understanding of the issues raised by the informal guidance OMB provided prior to submission. We elaborate here by providing additional background on the rationale behind the recommended design. Our understanding is that OMB will approve a $2 pre-paid incentive for everyone in the sample (e-mail from Martinez to Rand dated August 5, 2010). For an RDD survey, prior research has found that an incentive of this size increased response rates, on average, by 5.4 percentage points (Cantor et al., 2007: Table 22.2). A recent experiment testing a $2 incentive for a mail survey found the increase to be approximately 10 percentage points (Cantor et al., 2008). Church (1993) reports an effect size of almost 20 percentage points, although with varying incentive amounts. An incentive of this type has also been found to reduce non-response error by bringing in populations that traditionally have low response rates to telephone and mail surveys (Hicks et al., 2008).
The proposed design also requests clearance for an experiment with a promised incentive of $10 among those receiving the IVR request by mail. As OMB has pointed out, the evidence on promised incentives is mixed (e.g., compare Church, 1993 and Singer et al., 1999). The most recent evidence that we are aware of is an experiment conducted as part of the initial pilot for the National Household Education Survey (NHES), which tested a $5 promised incentive for a telephone survey (Tubman and Williams, 2010). This experiment found response rates for the $5 incentive that were consistently higher by 6 to 8 percentage points. The experiment had very small sample sizes (approximately 40 to 50 in each group), however, and none of the observed differences were statistically significant.
It might be preferable to test a larger pre-paid incentive, such as $2 vs. $10, given the success of the pre-paid methodology. For example, Trussell and Lavrakas (2004) found a steady increase in response rate between a $2 and a $10 pre-paid incentive: the difference between $2 and $5 was 6 percentage points, and between $5 and $10 it was 4 percentage points. In contrast, a recent experiment for the National Household Education Survey found that a $5 pre-paid incentive produced a response rate that was significantly greater than no money, but not significantly greater than $10. However, the current project does not have the funds to test a $10 pre-paid incentive. It is also questionable whether the NCVS could implement a survey on a large scale with such a large pre-paid incentive. Consequently, if a pre-paid incentive were to be tested, it would compare $2 to $5. The design of this experiment would split the sample into two groups, with half of the 6,500 addresses receiving $2 and the other half receiving $5.
When the study was initially designed, this idea was rejected because prior research had already shown that a $5 incentive would boost response rates above a $2 incentive. Testing a promised incentive of $10 was recommended because it has not been tested as extensively as a pre-paid incentive of $5. Our hypothesis is that a promised incentive may work differently for the IVR than for a telephone survey. One reason a promised incentive for a telephone survey may not consistently work is that it lowers the credibility of the call. A promise of money on the telephone from a stranger may raise suspicions that the intent of the call is to sell the respondent something or solicit money. This is a different context than the proposed project, which will make the request to participate by mail. The letter will be on official Department of Justice stationery and should be credible in the respondent’s eyes. There will not be an interviewer pressing the respondent to agree to do the survey.
The promised incentive experiment proposes two conditions: $0 and $10. The $10 level was selected in light of prior studies in which the promised incentives that produced significant effects were at least $5, with most being $15 or more (Strouse and Hall, 1997; Singer et al., 2000; Cantor et al., 2003).
The proposed incentive experiment is not a primary focus of the project. It is one way to increase the response rate to the IVR. While we believe testing the promised incentive has the most potential to contribute to our knowledge about how to increase the response rate for the IVR, we also agree that testing a $2 vs. $5 incentive would provide useful information. If OMB does not agree with the decision to test a promised incentive, we recommend that the experiment be changed to compare pre-paid incentives of $2 and $5.
Design of the Field Test
The experimental design reflecting the combinations of different treatments is provided in Table 1 below (the sample sizes are discussed later). Overall assessment of response rates and victimization rates will be tested by comparing results obtained from IVR applications using a mail contact (cells A – H in Table 1) vs. telephone contact (cells I – L in Table 1) vs. a telephone interview (cell M in Table 1). All households will be sent a $2 pre-paid incentive. Tests of methods to enhance response rates and variations in victimization rates will be examined by experimenting with two factors within the mail mode of contact: 1) a promised incentive of $10 (cells B, F, J, D, H, L) and 2) the use of an insert encouraging response (cells A - D). Finally, the design will experiment with the use of either speech (cells A, B, E, F, I, J) or touchtone data entry (cells C, D, G, H, K, L) as the method of input.
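Because Table 1 is not reproduced in this memorandum, the sketch below (illustrative Python) enumerates one plausible mapping of the treatment combinations to cells A through M, inferred from the cell lists in the paragraph above. The letter-to-cell assignments are an assumption for illustration, not the official table.

```python
# Illustrative reconstruction of the Table 1 design cells from the factor
# lists given above; the letter-to-cell mapping is inferred, not definitive.
from itertools import product

cells = {}
labels = iter("ABCDEFGHIJKL")

# Mail-contact IVR cells (A-H): insert x input mode x promised incentive
# (all households also receive the $2 pre-paid incentive).
for insert, mode, incentive in product(["insert", "no insert"],
                                       ["speech", "TDE"],
                                       ["$0 promised", "$10 promised"]):
    cells[next(labels)] = ("mail contact", insert, mode, incentive)

# Telephone-contact IVR cells (I-L): input mode x promised incentive
for mode, incentive in product(["speech", "TDE"], ["$0 promised", "$10 promised"]):
    cells[next(labels)] = ("telephone contact", "n/a", mode, incentive)

# CATI comparison interview
cells["M"] = ("telephone contact", "n/a", "CATI interview", "$0 promised")

for label, treatment in cells.items():
    print(label, treatment)
```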
The usability of the IVR will be assessed by examining several different forms of data. First, a debriefing interview will be administered as part of the IVR. This will ask respondents about their experience. This interview will be administered to everyone who completes the IVR interview. A second source of information will be paradata that is available from the interview itself. Data to be collected include 1) the time to complete the interview, 2) incomplete interviews, 3) use of the “help” function during the interview, and 4) the number of times respondents had to back up when moving through the interview.
Sample Design
The experiment will be conducted in Houston and St. Louis. The two cities were chosen because they both have open-record statutes and their police departments have both agreed to provide the addresses of persons who have reported crimes to the police. Attachment 11 provides the background materials sent to each police department. Attachments 12 and 13 provide correspondence documenting each department’s agreement under the conditions stated in the material. The police records will be used to identify an address, not the particular individual that reported the crime to the police. Records of sexual assaults, domestic violence and any case currently under investigation will be excluded from the sample. The name of the individual that originally reported the crime to the police will not be contained on the sample frame and will not be used. The information that will be included on the record for the study will be –
Address of the individual reporting the incident
Age and gender of the individual
Description of the incident
Date and time of the incident
In Houston and St. Louis, the information that is being requested is available upon request by the public through an “open record” statute. The St. Louis Police department has agreed to provide the records under this statute. In Houston, the records are being obtained via an inter-agency agreement between the police department and the Bureau of Justice Statistics, rather than as part of a public information request. This was done to expedite a request that required a high volume of records delivered in an electronic format.
Respondents will be told that they could have been selected for the study either randomly or because someone at the address had reported that a crime had occurred. A telephone number will be provided that will allow respondents to drop out of the study if they do not wish to be contacted again (see Attachments 4 and 6). Within each city, two sample frames will be used. One will be the police frame of the addresses referred to above. The other will be the Delivery Sequence File (DSF) which contains residential addresses to which the US Postal Service delivers mail. The DSF represents the population universe of the NCVS. The reaction of this group to the IVR will approximate the strengths and limitations of the methodology as it applies to the ongoing NCVS. The sample will include a total of 13,000 eligible addresses. These 13,000 will be allocated across the different contact methods (mail and telephone), cities and sample frames (DSF and police). Table 2 provides this allocation across these three different dimensions.
In order to compare the survey results across the various experimental conditions, it is necessary to restrict eligibility for all conditions to addresses where a telephone number can be found. The telephone method of contact requires this criterion for a household to be reachable. This criterion will be met by drawing a sample of addresses and finding the telephone number using a reverse directory service. Prior experience has found that approximately 60% of addresses can be matched in this way. The interview will select one adult in each sampled household using the “Next Birthday Method.”
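For clarity, the “Next Birthday Method” selects the adult household member whose birthday falls soonest after the contact date. The sketch below is a minimal illustration of that selection logic in Python; it is not the production instrument, and the household data shown are hypothetical.

```python
# Minimal sketch of "Next Birthday Method" respondent selection (illustrative only;
# the household data below are hypothetical).
from datetime import date, timedelta

def days_until_next_birthday(birthday: date, today: date) -> int:
    """Days from the contact date until the person's next birthday (0 if today)."""
    for offset in range(367):
        d = today + timedelta(days=offset)
        if (d.month, d.day) == (birthday.month, birthday.day):
            return offset
    return 367  # only reached for Feb 29 birthdays when no Feb 29 falls in the window

def select_respondent(adult_birthdays: dict, today: date) -> str:
    """Return the adult whose birthday comes up next after the contact date."""
    return min(adult_birthdays,
               key=lambda name: days_until_next_birthday(adult_birthdays[name], today))

household = {"Adult 1": date(1970, 11, 3), "Adult 2": date(1985, 2, 14)}
print(select_respondent(household, date(2011, 9, 28)))  # -> Adult 1
```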
Data Collection Procedures
The data collection procedures fall into three groups: CATI interview; interviewer-assisted telephone contact with transfer to the IVR interview; and IVR interview after a request to call an 800 line. The survey instruments included in this clearance submission are –
Demographic and Crime Screening interview to be administered to all households (Attachment 1);
Detailed crime incident report, to be administered when a crime is reported within the last 12 months in the screener (Attachment 2);
A short questionnaire on satisfaction with the police and a debriefing interview that asks about respondents’ reactions to the IVR. Both will be administered at the end of the survey (Attachment 3).
The materials a household will receive will depend upon the mode to which the household is assigned and whether additional nonresponse follow-up efforts are required. All survey materials will highlight BJS sponsorship and the BJS logo: letters will use BJS letterhead, and other items, such as the mail insert or postcard, will include the BJS logo and the agency name. Since the survey materials will differ by mode of initial contact, they are described below for each contact mode (telephone or mail).
Telephone Materials
Advance letter: this letter is mailed to all households where telephone will be the initial contact. The letter will be mailed before any call attempts and will describe the purpose of the study, how the household was selected, the confidentiality of the information, and that participation is voluntary. (Attachment 4)
Refusal conversion letter: this letter is mailed to households where telephone will be the initial contact and an initial refusal has been recorded. Households with a refusal status will be held for a cooling-off period and then mailed a refusal conversion letter. The letter will restate the purpose of the survey, why the household was selected, the importance of participation, and that participation is voluntary. (Attachment 5)
Mail Materials
Invitation letter: this letter is similar to the advance letter used for households assigned to initial telephone contact. However, no telephone contact attempts will be made. The letter will describe the purpose of the study, how the household was selected, the confidentiality of the information and that participation is voluntary. The letter will include instructions and a unique ID for accessing the IVR system. The letter will also include instructions for randomly selecting a household respondent to call the IVR system (Attachment 6).
Insert with invitation letter: as described in the experiments above, a random selection of households assigned to mail-only contact will receive an insert with their invitation letter. The insert will be different for the households that are being offered $10 and those that are not offered an incentive (Attachments 7 and 8).
Thank-you / reminder postcard: all households assigned to mail-only contact (IVR administration) will be mailed a thank-you / reminder postcard. This postcard serves two purposes: to thank respondents who have already completed the IVR survey, and to prompt non-responding households to complete the survey (Attachment 9).
Follow-up letter: households assigned to mail-only contact who have not completed the IVR survey after about three weeks will be mailed a first follow-up letter. This letter is similar to the initial letter, with slightly stronger wording to motivate participation. This letter will be delivered using priority mail (Attachment 10).
Data Collection Process
As described earlier, the data collection process will vary by assigned modes of contact and data collection (Attachments 14 and 15). There are three distinct processes:
CATI Data Collection – sampled households assigned to this mode of contact and data collection will first be mailed a pre-notification letter before any calls are placed to the sampled household.
Telephone Interviewer to IVR – sampled households assigned to this approach will first be mailed a pre-notification letter. Once on the phone, demographic information will be collected by the interviewer. At this point, the IVR system will administer the screener questionnaire.
Mail Invitation to IVR – The first contact is an invitation letter that describes the survey and includes instructions for selecting a respondent. The letter includes a toll-free telephone number to call the IVR system and a unique ID for that household. All sampled households are then mailed a thank-you / reminder postcard one week after their initial mail invitation.
Analysis Plan
To assess whether the NCVS can be adapted for a practical application of the IVR, we will examine a number of indicators based on paradata that reveal issues respondents might have when moving through the system. The indicators that will be analyzed include the following (a sketch of how such indicators might be tabulated appears after the list) –
Timings – Total time, time per section, extent users are “timing out” for particular items;1
Number of break-offs during the interview;
The number of times respondents asked for help; number of times they backed up to repeat text;
Whether respondents interrupted the voice to answer the question. Analysis will be specifically targeted to the screener when the list of cues is being read.
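To make the indicator list concrete, the sketch below (illustrative Python) shows one way such indicators could be tabulated from interview-level paradata records. The field names and example records are hypothetical placeholders, not those of the actual IVR system.

```python
# Illustrative tabulation of the paradata indicators listed above; field names
# and records are hypothetical, not taken from the actual IVR system.
from statistics import mean

interviews = [
    {"total_seconds": 780, "timeouts": 1, "broke_off": False,
     "help_requests": 0, "backups": 2, "interrupted_screener": False},
    {"total_seconds": 655, "timeouts": 0, "broke_off": True,
     "help_requests": 1, "backups": 0, "interrupted_screener": True},
]

summary = {
    "mean_minutes": mean(r["total_seconds"] for r in interviews) / 60,
    "pct_with_timeout": 100 * mean(r["timeouts"] > 0 for r in interviews),
    "breakoff_rate_pct": 100 * mean(r["broke_off"] for r in interviews),
    "mean_help_requests": mean(r["help_requests"] for r in interviews),
    "mean_backups": mean(r["backups"] for r in interviews),
    "pct_interrupting_screener": 100 * mean(r["interrupted_screener"] for r in interviews),
}
print(summary)
```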
As noted in the data collection section, respondents will be administered a short user satisfaction survey at the end of the interview. The survey will ask about overall satisfaction and about specific aspects of taking the survey (entering data, understanding the questions). The survey will also ask respondents to provide verbatim responses to describe problems they encountered when going through the questionnaire. Tabulating the levels of satisfaction for particular components of the interview will provide an indication of what the strengths and weaknesses are of the IVR application.
All analyses will center on questions about response rates, the usability of the IVR, and the effects on the measurement of victimization. The discussion of the analysis is organized around the research questions posed above.
Tables 3 and 4 summarize the power for the comparisons between $0 and $10 incentive amounts, with and without a mailed insert condition.
Table 3. Power for testing a $10 promised incentive, ignoring insert condition+
Group 1 | Group 2 | Detectable difference with 80% power
No incentive (n=5,000) | $10 incentive (n=5,000) | 2.4%

Table 4. Power for testing a $10 incentive within each insert condition+
Group 1 | Group 2 | Detectable difference with 80% power
No incentive, no insert (n=2,500) | $10 incentive, no insert (n=2,500) | 3.5%
No incentive, with insert (n=2,500) | $10 incentive, with insert (n=2,500) | 3.5%
+ Minimum detectable difference with 80% power, using a two-tailed test at the 95% confidence level.
Can the NCVS questionnaire be adapted for the IVR?
Analysis focuses on comparing the response rates for the different methods of interviewing. Table 5 summarizes the power for these comparisons. One comparison will be between the mail IVR and the telephone interview. Assuming a telephone interview response rate of 20%, the analysis will detect around a 4 percentage point difference with 80% power (assuming a two-tailed test at p < 0.05) for the three comparisons involving the mail IVR, the mail IVR with no incentive, and the telephone IVR.
A question related to the response rate is whether the demographics of the respondents differ by these conditions. Tests of differences in demographic characteristics will not be as powerful as those for response rates because they rely on completed surveys. For purposes of calculating power, the demographics examined were age (18-44 vs. 45+), gender, and race (nonwhite vs. other). Each of these groups is expected to be approximately 50% of the populations in the two cities. For the comparison of the IVR with mail contact to the telephone interview, the detectable difference is approximately 10 to 11 percentage points (see the second panel of Table 5). It is slightly higher when comparing the IVR telephone mode of contact, with a detectable difference of 12 percentage points.
Additional analysis will assess whether contacting respondents by mail for the IVR application yields a different response rate than contacting them by telephone (Table 6). The mail IVR conditions will have sample sizes of 10,000 and 5,000, depending on the incentive condition. The telephone-contact IVR will be based on a sample size of 2,000. Assuming a response rate of 25% for one of the conditions, this analysis will detect a 3 percentage point difference with 80% power, regardless of the mail incentive condition. Attachment 18 presents additional analyses to be conducted and their accompanying power and precision analyses.
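The detectable differences quoted in Tables 3 and 4 and in the discussion of Tables 5 and 6 follow from a standard two-proportion power approximation. The sketch below is an illustrative Python check, not part of the study materials; the baseline response rate used is our assumption, since the tables do not state it explicitly.

```python
# Approximate minimum detectable difference between two independent proportions
# (two-sided test, normal approximation). The baseline rate below is an assumed
# value for illustration; the tables do not report the baseline used.
from scipy.stats import norm

def detectable_difference(p0, n1, n2, alpha=0.05, power=0.80):
    """Smallest difference in proportions detectable with the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    se = (p0 * (1 - p0) * (1 / n1 + 1 / n2)) ** 0.5
    return (z_alpha + z_beta) * se

print(detectable_difference(0.25, 5000, 5000))    # ~0.024 -> Table 3's 2.4%
print(detectable_difference(0.25, 2500, 2500))    # ~0.034 -> close to Table 4's 3.5%
print(detectable_difference(0.25, 10000, 2000))   # ~0.030 -> the ~3-point figure above
```

This approximation uses a single pooled baseline rate for both groups; the study's own power calculations may differ slightly in their assumptions.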
Informed Consent
The contact letters and the script read to respondents once on the telephone provide the elements of informed consent. The initial letters (see Attachment 4 for telephone and Attachment 6 for mail IVR) provide the purpose of the survey, the voluntary nature of the study, how the address was included, a procedure to refuse to participate, and a number to call with questions about the study. The script read to respondents on the telephone repeats much of this information and additionally provides the length of the survey, as well as informing respondents of the availability of toll-free hotline numbers (see introductions in Attachment 1). A summary of these elements is also depicted in two flowcharts showing the data collection process for each mode of contact (Attachments 14 and 15).
Using records to either seed or exclusively draw a sample for a survey has been done in a recent survey studying firearm ownership (Smith, 2003). That study provided a general description of the sample, and our design uses the same approach: sampled members are informed that their address was included in the survey from one of two sources, a list of addresses available from the post office or police records. Keeping the source less specific will reduce the chances of cueing respondents about the incident in the police record.2 The Westat IRB is currently reviewing this application. OMB will be provided the final letter of approval when it is received.
References
Cantor, D., Han, D. and R. Sigman (2008) “Pilot of a Mail Survey for the Health Information National Trends Survey.” Paper presented at the 2008 annual meeting of the American Association for Public Opinion Research, May 15-18, New Orleans, LA.
Cantor, D., O’Hare, B. and O’Connor, K. (2007) “The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys,” pp. 471-498 in J. M. Lepkowski, C. Tucker, J. M. Brick, E. de Leeuw, L. Japec, P. J. Lavrakas, M. W. Link, and R. L. Sangster (Eds.), Advances in Telephone Survey Methodology, New York: John Wiley & Sons, Inc.
Cantor, David, Kevin Wang, and Natalie Abi-Habib. 2003 “Comparing Promised and Pre-Paid Incentives for an Extended Interview on a Random Digit Dial Survey”. Proceedings of the American Statistical Association, Survey Research Section.
Church, Allan H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: a Meta-Analysis”. Public Opinion Quarterly 57:62-79.
Currivan, Doug. 2005. “The Impact of Providing Incentives to Initial Telephone Survey Refusers on Sample Composition and Data Quality”. Paper presented at the Annual Meeting of the American Association of Public Opinion Research, Miami Beach, FL.
De Leeuw, E. (2005) “To Mix or Not to Mix Data Collection Modes in Surveys” Journal of Official Statistics 21: 233-255.
Groves, R.M., Singer, E. and A. Corning (2000) “Leverage-Saliency Theory of Survey Participation: Description and an Illustration.” Public Opinion Quarterly 64: 299-308.
Groves, R.M., Presser, S. and S. Dipko (2004) “The role of topic interest in survey participation decisions.” Public Opinion Quarterly 68: 2-31.
Hicks, W., Cantor, D., St. Claire, A., Fee, R. and P. Rhode (2008) “A Tale of Two Methods: Comparing mail and RDD data collection for the Minnesota Adult Tobacco Survey III” Paper presented at the International Total Survey Error Workshop, June 2 – June 4, Research Triangle Park, NC.
Singer, Eleanor, John Van Hoewyk, Nancy Gebler, Trivellore Raghunathan, and Katherine McGonagle. 1999. “The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys”. Journal of Official Statistics 15:217-230.
Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. 2000. “Experiments with Incentives in Telephone Surveys”. Public Opinion Quarterly 64:171-188.
Strouse, Richard C., and John W. Hall. 1997. “Incentives in Population Based Health Surveys”. Proceedings of the American Statistical Association, Survey Research Section: 952-957.
Tubman, Charlotte and Douglas Williams (2010) “The Effectiveness of Incentives Used in the Second Phase of a Two-Phase Survey.” Paper presented at the annual meeting of the American Association for Public Opinion Research, May 14, Chicago, IL.
Trussell, N. and P. Lavrakas (2004) “The influence of incremental increases in token cash incentives on mail survey responses. Is there an optimal amount?” Public Opinion Quarterly, 68(3): 349 – 367.
1 If there is no response after 30 seconds, the IVR repeats the question.
2 One can imagine that telling respondents the address was from police records might also lead to some confusion. Some household members may not be aware that a crime was reported to the police. Some respondents may not even make a direct connection between their prior report to the police and the survey. Finally, if a different household has moved into the unit, they may be totally unaware of the prior report.