Research to support the National Crime Victimization Survey (NCVS)

Interactive Voice Response Study

OMB: 1121-0325

National Crime Victimization Survey Mode Experiment


Request for OMB Clearance

April 1, 2011














April 2011

Prepared by:

Westat

1600 Research Boulevard

Rockville, Maryland 20850

(301) 251-1500


1. Background

The National Crime Victimization Survey (NCVS) is the Nation’s primary source of information on criminal victimization. Each year data are obtained from a nationally representative sample of the non-institutional population 12 years of age and older. The information collected includes the frequency, characteristics and consequences of criminal victimization in the United States. The survey enables the Bureau of Justice Statistics (BJS) to estimate the likelihood of victimization in the form of rape, sexual assault, robbery, assault, theft, household burglary, and motor vehicle theft for the population as a whole, as well as for various subgroups of the population such as women, the elderly, members of various racial groups, city dwellers, or other groups. The NCVS provides the largest national forum for victims to describe the impact of crime and characteristics of offenders.


Currently, the NCVS relies exclusively on interviewer-administered modes of data collection. Since the redesign of the early 1990s, these modes have included in-person and telephone interviewing. Until fairly recently, the in-person interviews were completed using paper-and-pencil instruments. Up until several years ago, some of the telephone interviews were completed within centralized data collection centers that conducted computer-assisted telephone interviews (CATI), while some proportion were completed by field interviewers using the paper-and-pencil instrument. Within the last few years, the survey has converted to field-based, computer-assisted personal interviews (CAPI). Currently no CATI interviews are conducted, although some telephone interviews are conducted from the interviewer's home using the CAPI instrument.


The objective of this project is to examine the use of Interactive Voice Response (IVR) as a complementary mode of data collection to the interviewer-based methods that have been used on the NCVS. IVR has the potential to collect better information on the more sensitive items, as well as offering a less expensive mode of collection that might be applied to a redesigned NCVS (e.g., follow-up within the rotating panel; screening for small area estimates). In the next section the potential uses of IVR are described, along with the advantages and disadvantages of the methodology. In sections 3 through 5 the field test is described in more detail. The remaining sections describe the proposed respondent incentives, the analysis plan, IRB review, confidentiality, burden hours, and the individuals consulted for the project.


2. IVR as a Possible Data Collection Mode for the NCVS

There are several possible applications of IVR for the NCVS. As part of the NCVS core methodology, an IVR mode could be incorporated as part of a multi-mode design within the rotating panel design. For instance, after the initial in-person interview, respondents could be asked to call into an 800 number to complete the survey in subsequent contacts. Cranford et al. (2010) describe a related example that used IVR to collect longitudinal data at frequent intervals. The assumption is that by establishing rapport at the first interview, the NCVS could efficiently collect data from some portion of the sample without incurring the expense of an interviewer-driven methodology. The Current Employment Statistics program, administered by the Bureau of Labor Statistics (BLS), uses IVR in a similar application. Establishments are initially contacted and interviewed over the telephone. For a large portion of the sample, respondents then use an IVR to complete subsequent contacts on a monthly basis (Rosen et al., 1993).


A second possible use of IVR is for a supplemental survey to generate local area estimates. Previously, agencies have relied on mail or telephone surveys to conduct local area victimization surveys. IVR could provide a way to increase the efficiency, and possibly the quality, of these surveys. In this context, there are several different applications that are possible. One could administer the entire NCVS interview, both the screener and the detailed incident form, by IVR to supplement the core NCVS interview (Westat, 2010). A second application would be to administer the NCVS screening interview by IVR as a way to stratify households that could be followed up on a selected basis. For example, those individuals who report a victimization on the screener would be followed up in-person or by telephone to collect more information on the incident. Those who do not report a victimization would not be followed up at all or on a very limited basis (e.g., Westat, 2010). BJS is currently examining both of these approaches on a separate project using a telephone and a paper mail survey. IVR could offer an additional mode that could be used in this type of design.



Advantages of an IVR for the NCVS

An important advantage of IVR is that it is self-administered. There are numerous findings in the literature that demonstrate that self-administration increases the reporting of sensitive information (Bloom, 2008; Kreuter et al., 2008; Turner et al., 1996; Villarroel et al., 2006). With respect to victimization, Mirrlees-Black (1999) found that a computerized, self-administered questionnaire increased reports of domestic violence. More recently, Beach et al. (2010) found that IVR significantly increased the reporting of elder abuse when compared to a CATI interview.


A second advantage is that an IVR is a computerized instrument. A paper mail survey has traditionally been used for local crime surveys. For the NCVS, a paper survey cannot accommodate the skip patterns required to administer both the screener and the collection of details on incidents. This limitation is overcome with a computerized instrument like IVR.


A third advantage is that IVR minimizes interviewer effects. There is some indication that interviewer variance for the NCVS is high (Bailey et al., 1978). In part, this may be because interviewers vary in how much time they spend administering the screener, which affects respondent comprehension and recall. An automated system administers all of the questions to all respondents in a consistent manner. While this may also have some disadvantages, it does ensure that all respondents will be provided the same set of recall cues and instructions when being asked about victimization.


Underlying all of the above advantages is the fact that IVR is one of the least expensive modes of data collection. If the methodology proves feasible, this could have a significant effect on the resources available for other aspects of the NCVS. The web is one of the few methods that is less expensive than an IVR. The one advantage the IVR has over a web survey is that most individuals in the general population have access to the technology required to respond. Only about 1% of the population does not have access to a telephone (Blumberg and Luke, 2009). This is in contrast to access to the web, where only about 80% of the population 18 years and older has access (Pew Internet & American Life Project, 2010).



Issues with using an IVR

Probably of most concern for the use of IVR is the tendency for respondents to break off during the interview. Break-off rates and item nonresponse are usually higher in IVR surveys than in other data collection modes (Mingay, 2000; Schneider, et al., 2005; Tourangeau, et al., 2002; Dillman, et al., 2009). There is the added concern that respondents tend to become frustrated much more quickly with a computerized system than with human interviewers. To address this issue, it is necessary to reduce the complexity and/or scope of the information collected in IVR surveys (Brick and Williams, 2009).


The absence of interviewers to navigate the survey also has specific ramifications for the current design of the NCVS. For example, NCVS interviewers assist in un-duplicating crimes across respondents in the same household and assist in determining whether an incident is part of a “series” crime. Interviewers assist respondents with ambiguities in the scope of the survey, as well as with retrospective recall of incidents over the reference period. This type of assistance may be difficult for IVR systems to mimic or duplicate.


As with a telephone survey, the IVR is restricted to a single channel of communication: speech. Dillman et al. (2009) make this point by noting that respondents have no visual presentation of the question or response categories. Unlike a mail, web, or in-person interview, respondents rely on short-term memory to comprehend and answer questions. For example, when response scales are administered over the telephone, respondents tend to pick the extreme points of the scale (Tarnai and Dillman, 1992; Srinivasan and Hanway, 1999). Issues with this type of mode effect, however, are not as critical for the NCVS because it does not contain a significant number of questions that use detailed response scales. Prior studies have not found large differences between field-administered telephone and personal interviews for the NCVS (Bushery et al., 1978).


Perhaps a more important issue is the use of open-ended questions. When collecting data on specific incidents, the NCVS interviewer asks respondents open-ended questions on where the incident occurred, who was involved, and when it occurred. The interviewer codes the responses using a pre-specified list. For the IVR, these questions will have to be converted to a closed-ended format that provides the respondent with a list of alternatives from which to choose.



3. Design of the Field Test

The purpose of the present study is to assess the feasibility of using IVR for the NCVS. This will involve examining six research questions.


  1. Can the NCVS questionnaire be adapted for IVR administration?

The NCVS interview was not designed with IVR as a mode of administration. As noted above, for this mode to work effectively, it has to keep the respondent's interest and has to adapt particular types of questions (e.g., open-ended). This project will implement an abbreviated IVR version of the NCVS questionnaire that nonetheless gathers the information needed to classify a reported victimization event into the type of crime classification used by BJS when publishing estimates. As a precursor to the present project, we are conducting a series of iterative cognitive interviews and usability lab sessions to develop the IVR instrument. The current OMB package is requesting clearance for a field test to address this question under survey field conditions.


During this field test, data will be collected on the use of the IVR including: 1) the extent to which respondents break off the interview; 2) the extent to which they can navigate through the instrument; and, 3) an assessment of respondent satisfaction with the instruments.


  2. What are the response rates with IVR and how do these rates vary by the mode used to contact sampled respondents – mail or telephone?

The application of IVR for the NCVS depends partly on the response rates that can be obtained. As noted above, the application to the NCVS might take one of two forms. An outbound model might be used with a telephone interviewer. The initial contact and items on household characteristics would be administered by the interviewer, and the IVR would be used to administer all or some of the victimization items. Alternatively, it is possible to use an inbound model in response to a request sent by mail to call an 800 number. This might occur as an initial contact or as part of a follow-up to a prior interview. This project will compare these two methods along several different dimensions, including response rate, satisfaction, and the demographic distribution of respondents.


  3. Is it possible to effectively encourage sampled households to complete the interview when the initial contact is by mail?

As survey response rates have declined, there has been an increasing focus on methods to increase response rates. Much of this research has been for telephone, mail and web surveys. Very little research has tested how these methods apply for an IVR administration. This research question will investigate whether it is possible to significantly increase response to the IVR within the mail-mode of contact. The project concentrates the treatments on this mode because of its potential for cost savings relative to other methods that require the use of an interviewer to make an initial contact.


Within the self-administered modes we will test several ways to improve response rates. One is the use of an insert that contains a short message to motivate the respondent to participate. This approach was found to be very successful in a recent mail survey of Veterans (Han, et al., 2010). The second method will use an incentive. Incentives have been found to be very effective at increasing response rates for all types of surveys, including telephone (Cantor, et al., 2007), mail (Church, 1993) and in-person surveys (Singer, 2002). There is not as much literature on the effects of an incentive for an inbound IVR.


  4. Are there differences in respondent acceptance between speech IVR and touchtone IVR?

The project will compare the use of speech and touchtone data entry (TDE) IVR applications. It is not clear from the usability literature which of these methods of data entry is best for a survey like the NCVS. Bloom (2008) maintains that respondents prefer to speak “yes” or “no” rather than use a keypad when given an option between touchtone and speech. This is consistent with Suhm et al. (2002), who found that natural language call routing systems outperformed touch-tone menu systems. Similarly, in an experiment that evaluated user preference for an input modality (natural language or TDE) for a message retrieval system, Lee and Lai (2005) found that users expressed a strong preference for natural language systems. Speech may also be preferred when the keypad is on the handset, since touchtone entry then requires constant movement between listening and key entry (Dillman et al., 2009). On the other hand, users might find TDE to be more reliable, especially for tasks requiring entry of large numbers. TDE may also be preferred for topics that are particularly sensitive, where the respondent may be reluctant to speak the answer aloud.


The goal of this project will be to assess the advantages and disadvantages of these two approaches with respect to outcomes such as the response rate and user satisfaction.


  5. Does IVR lead to different victimization rates from a telephone interview?

The project will compare the victimization rates for telephone and IVR modes of interviewing. Based simply on the mode of communication, the IVR offers more privacy and anonymity than the telephone. This has been found to lead to higher reports of domestic violence (Mirrlees-Black, 1999). In addition, the IVR is not subject to interviewer effects which may inhibit at least certain types of reports. Alternatively, a telephone interviewer may provide useful prompts, definitions and clarifications that assist the respondent when retrieving information from memory and formulating responses. The project will compare victimization rates for different types of crimes.


  6. Is there a difference in victimization rates for speech and TDE modes of entry?

As noted above, there may be user preferences for the mode of entry. Speech may be easier to use, but TDE may be perceived as more private. This project will assess whether this has an effect on reporting victimization. The sample sizes will be limited with respect to testing this hypothesis. However, it will be possible to detect a large effect for the more common types of crimes.


The study is designed to address the above research questions through a field experiment. The experimental design reflecting the combinations of different treatments is provided in Table 1 below (the sample sizes are discussed later). Overall assessment of response rates and victimization rates will be tested by comparing results obtained from IVR applications using a mail contact (cells A – H in Table 1) vs. telephone contact (cells I – L in Table 1) vs. a telephone interview (cell M in Table 1). Tests of methods to enhance response rates and variations in victimization rates will be examined by experimenting with two factors within the mail mode of contact: 1) a promised incentive of $20 (cells B, D, F, H, J, L) and 2) the use of an insert encouraging response (cells A – D). Finally, the design will experiment with the use of either speech (cells A, B, E, F, I, J) or touchtone data entry (cells C, D, G, H, K, L) as the method of input.


The usability of the IVR will be assessed by examining several different forms of data. First, a debriefing interview will be administered as part of the IVR. This will ask respondents about their experience and will be administered to everyone who completes the IVR interview. A second source of information will be paradata available from the interview itself. Data to be collected include: 1) the time to complete the interview; 2) incomplete interviews; 3) the use of the “help” function during the interview; and 4) the number of times respondents had to back up when moving through the interview.


Table 1. Experimental Design




4. Sample Design

The experiment will be conducted in Houston and St. Louis. The two cities were chosen because they both have open-record statutes and their police departments have both agreed to provide the addresses of persons who have reported crimes to the police. Attachment 11 provides the background materials sent to each police department. Attachments 12 and 13 provide the correspondence documenting each department’s agreement under the conditions stated in the materials.


The police records will be used to identify an address, not the particular individual who reported the crime to the police. Records of sexual assaults, domestic violence, and any case currently under investigation will be excluded from the sample. The name of the individual who originally reported the crime to the police will not be contained on the sample frame and will not be used. The information that will be included on the record for the study will be:


  1. Address of the individual reporting the incident

  2. Age and gender of the individual

  3. Description of the incident

  4. Date and time of the incident

In Houston and St. Louis, the information that is being requested is available upon request by the public through an “open record” statute. The St. Louis Police Department has agreed to provide the records under this statute. In Houston, the records are being obtained via an inter-agency agreement between the police department and the Bureau of Justice Statistics, rather than as part of a public information request. This was done to expedite a request that required a high volume of records delivered in an electronic format.


In the advance letters, respondents will be told that they could have been selected for the study because someone at the address had reported that a crime had occurred. A telephone number will be provided that will allow respondents to drop out of the study if they do not wish to be contacted again (see Attachments 4 and 6).


Salting the sample with addresses of recent victims serves two purposes. First, it will provide information on the acceptability of IVR collection for respondents with incidents to report, where responding to the IVR may be more challenging. It will thus allow a more thorough test of both the NCVS screener (NCVS-1) and the detailed incident form (NCVS-2). Second, it will increase the power of statistical tests of differences between incidence rates under the various experimental conditions.


Prior studies have used police records as part of record check studies (Murphy and Dodge, 1981; Miller and Groves, 1985; Czaja and Blair, 1990). These studies matched survey respondents with reports to the police. As noted above, our research goal is not to match survey responses to the reports to the police. On the other hand, a secondary advantage of including police records is that it should be possible to do some validity checks for at least household crimes, such as burglary, household theft, and motor vehicle theft.


Within each city, two sample frames will be used. One will be the police frame of the addresses referred to above. The other will be the Delivery Sequence File (DSF) which contains residential addresses to which the US Postal Service delivers mail. The DSF represents the population universe of the NCVS. The reaction of this group to the IVR will approximate the strengths and limitations of the methodology as it applies to the ongoing NCVS.


The sample will include a total of 13,000 eligible addresses. These 13,000 will be allocated across the different contact methods (mail and telephone), cities and sample frames (DSF and police). Table 2 provides this allocation across these three different dimensions.


Table 2. Sample Allocation by Frame, City and Method of Contact



In order to be able to compare the survey results across the various experimental conditions, it is necessary to restrict eligibility for all conditions to addresses where a telephone number can be found. The telephone method of contact requires this criterion in order for a household to be reachable. This criterion will be met by drawing a sample of addresses and finding the telephone number using a reverse directory service. Prior experience has found that approximately 60% of the addresses will match in this way, implying that roughly 21,700 addresses must be drawn to yield the 13,000 matched, eligible addresses.


The interview will select one adult in each sampled household using the “Next Birthday Method.”
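To make the selection procedure concrete, the sketch below illustrates the next-birthday logic in Python (our illustration, not the production instrument; the household roster and dates are hypothetical):

    # Illustrative sketch of the "Next Birthday Method" (hypothetical data;
    # leap-day birthdays are ignored for simplicity).
    from datetime import date

    def days_until_birthday(birthday, today):
        """Days from `today` until the next occurrence of `birthday` (month/day)."""
        next_bd = birthday.replace(year=today.year)
        if next_bd < today:
            next_bd = birthday.replace(year=today.year + 1)
        return (next_bd - today).days

    def select_respondent(adults, today):
        """Return the adult whose birthday comes soonest after `today`."""
        return min(adults, key=lambda name: days_until_birthday(adults[name], today))

    household = {"Adult 1": date(1970, 5, 2),
                 "Adult 2": date(1985, 12, 30),
                 "Adult 3": date(1990, 4, 20)}
    print(select_respondent(household, today=date(2011, 4, 15)))  # -> "Adult 3" (April 20)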


The 6,500 addresses to be selected from the police records will be split evenly between personal and household crimes (n = 3,250 for each type of crime). Among the household crimes, we propose to sample 2,000 burglaries (30.7% of all addresses sampled from police records), 625 motor vehicle thefts (9.6%) and 625 larcenies (9.6%).


The sample will be randomly assigned to the different interviewing conditions described in the previous section. This allocation is provided in Table 3. There will be approximately 1,250 addresses allocated to each of the eight different conditions involving the mail method of contacting households. There will be 1,000 allocated to each of the IVR methods using telephone contacts and there will be 1,000 for the telephone interview (no experimental incentive is used with the telephone contacts).
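The sketch below illustrates one way the random assignment might be carried out (our assumption, not the actual sampling system; the address IDs are placeholders, and the telephone IVR sample is collapsed to the speech/touchtone contrast discussed later):

    # Sketch of random assignment to experimental conditions.
    import random

    random.seed(20110401)  # fixed seed so the assignment is reproducible

    mail_addresses = [f"mail-{i:05d}" for i in range(10000)]
    phone_addresses = [f"phone-{i:04d}" for i in range(3000)]
    random.shuffle(mail_addresses)
    random.shuffle(phone_addresses)

    # Eight equal mail cells (labeled A-H as in Table 1), n = 1,250 each.
    mail_cells = {chr(ord("A") + k): mail_addresses[1250 * k:1250 * (k + 1)]
                  for k in range(8)}

    # Telephone sample: two IVR entry modes plus the CATI interview, n = 1,000 each.
    phone_cells = {"telephone IVR, speech": phone_addresses[:1000],
                   "telephone IVR, touchtone": phone_addresses[1000:2000],
                   "telephone interview": phone_addresses[2000:]}

    print({cell: len(ids) for cell, ids in mail_cells.items()})
    print({cell: len(ids) for cell, ids in phone_cells.items()})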


Table 3. Allocation of Sample by Experimental Conditions



Table 4 provides the projected numbers of completed interviews by the experimental treatments described above. These numbers were calculated assuming a 20% response rate for the telephone contacts and a 30% response rate for the mail contacts, based on recent experiences conducting telephone and mail surveys (Westat, 2009). The projections assume no effects of the different enhancements related to the experimental design (insert; promised incentive).
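The arithmetic behind these projections is simply the allocated sample multiplied by the assumed response rate, as in the sketch below:

    # Projected completes = allocated addresses x assumed response rate
    # (30% for mail contact, 20% for telephone contact, as stated above).
    allocation = {
        "mail IVR (8 cells x 1,250)": (10000, 0.30),
        "telephone IVR (2 cells x 1,000)": (2000, 0.20),
        "telephone interview (CATI)": (1000, 0.20),
    }
    for condition, (n, rate) in allocation.items():
        print(f"{condition}: {n * rate:,.0f} projected completes")
    # 3,000 mail completes and 600 telephone completes, consistent with Section 10.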


Table 4. Projected Number of Completed Interviews by Experimental Condition





5. Data Collection Procedures

This section describes the data collection procedures. As noted previously, there are a number of experiments embedded into the data collection process. The data collection procedures generally fall into one of three groups: CATI interview; interviewer assisted telephone contact with transfer to IVR interview; and IVR interview after a request to call an 800 line. The survey instruments included in this clearance submission are:


  • Demographic and Crime Screening interview, to be administered to all households (Attachment 1)

  • Detailed crime incident report, to be administered when a crime within the last 12 months is reported in the screener (Attachment 2)

  • A short questionnaire on satisfaction with the police and a debriefing interview that asks about respondents’ reactions to the IVR; both will be administered at the end of the survey (Attachment 3)


Description of Survey Materials

The materials a household will receive will depend upon the mode to which the household is assigned and whether additional nonresponse follow-up efforts are required. All survey materials will highlight BJS sponsorship: letters will be printed on BJS letterhead, and other items, such as the mail insert or postcard, will include the BJS logo and the agency name. Since the survey materials will differ by mode of initial contact, they are described below for each contact mode (telephone or mail).


Telephone Materials

  • Advance letter: this letter is mailed to all households where telephone will be the initial contact. The letter will be mailed before any call attempts and will describe the purpose of the study, how the household was selected, the confidentiality of the information, and that participation is voluntary. (Attachment 4)

  • Refusal conversion letter: this letter is mailed to households where telephone will be the initial contact and an initial refusal has been recorded. Households with a refusal status will be held for a cooling-off period and then mailed a refusal conversion letter. The letter will restate the purpose of the survey, why the household was selected, the importance of participation, and that participation is voluntary. (Attachment 5)

Mail Materials

  • Invitation letter: this letter is similar to the advance letter used for households assigned to initial telephone contact. However, no telephone contact attempts will be made. The letter will describe the purpose of the study, how the household was selected, the confidentiality of the information and that participation is voluntary. The letter will include instructions and a unique ID for accessing the IVR system. The letter will also include instructions for randomly selecting a household respondent to call the IVR system. (Attachment 6)

  • Insert with invitation letter: as described in the experiments above, a random selection of households assigned to mail-only contact will receive an insert with their invitation letter. The insert will be different for the households that are being offered $20 and those that are not offered an incentive. (Attachments 7 and 8)

  • Thank-you / reminder postcard: all households assigned to mail-only contact (IVR administration) will be mailed a thank-you / reminder postcard. This postcard serves two purposes: to thank respondents who have already completed the IVR survey, and to prompt non-responding households to complete the survey. (Attachment 9)

  • Follow-up letter: households assigned to mail-only contact who have not completed the IVR survey after about three weeks will be mailed a first follow-up letter. This letter is similar to the initial letter, with slightly stronger wording to motivate participation. This letter will be delivered using priority mail. (Attachment 10)

Data Collection Process

As described earlier, the data collection process will vary by the assigned modes of contact and data collection (see Attachments 14 and 15). There are three distinct processes:


CATI Data Collection – sampled households assigned to this mode of contact and data collection will first be mailed a pre-notification letter before any calls are placed to the sampled household. Telephone interviewers will be used for all contact attempts to each sampled household in this condition. During the initial contact a respondent will be selected using the next birthday method. Once a respondent has been selected, the screener will be administered by a telephone interviewer. If any victimizations are reported during the screener the detailed incident interview will be administered.


Telephone Interviewer to IVR – sampled households assigned to this approach will first be mailed a pre-notification letter. Telephone interviewers will be used for all contact attempts to each sampled household in this condition, but the screener and detailed incident report form will be administered by the IVR system. During the initial contact the interviewer will select a respondent using the next birthday method. Once on the phone, demographic information will be collected by the interviewer. At this point, the IVR system will administer the screener questionnaire. If any victimizations are reported during the screener, the detailed incident interview will be administered for each incident by the IVR system.


Mail Invitation to IVR – with this approach all sampled households are contacted by mail. The first contact is an invitation that describes the survey and includes instructions for selecting a respondent. The letter includes a toll-free telephone number to call the IVR system and a unique ID for that household. All sampled households are then mailed a thank-you / reminder postcard one week after their initial mail invitation. This thanks respondents who have completed the interview and prompts nonrespondents to complete the survey. Two weeks after the postcard mailing, nonrespondents are mailed a follow-up letter. This is the final study contact and will be delivered by USPS priority mail.


When a respondent contacts the IVR system, there is an introduction to the survey. After this, the system collects demographics. The crime screener is then administered. If any victimizations are reported, the detailed incident interview will be administered for each incident reported on the screener.
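The sketch below outlines this inbound flow in simplified form (our illustration, not the production IVR script; the stub functions and their return values are invented for this example):

    # Simplified inbound IVR call flow; all functions are illustrative stubs.
    def play_introduction():
        print("Introduction: purpose, voluntary participation, expected length")

    def ask_demographics():
        return {"age": 34, "sex": "F"}            # invented placeholder answers

    def ask_crime_screener():
        return ["burglary"]                       # screener reports one incident

    def ask_incident_report(incident):
        return {"type": incident, "when": "2011-02", "where": "at home"}

    def ask_satisfaction_debriefing():
        return {"overall_satisfaction": "satisfied"}

    def run_ivr_session(household_id):
        session = {"id": household_id}
        play_introduction()
        session["demographics"] = ask_demographics()
        incidents = ask_crime_screener()
        # One detailed incident report per incident reported on the screener.
        session["incident_reports"] = [ask_incident_report(i) for i in incidents]
        session["debriefing"] = ask_satisfaction_debriefing()
        return session

    print(run_ivr_session("mail-00001"))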


Methods to Maximize Response Rates

Use of Pre-notification Letters. For the telephone contacts, pre-notification letters will be mailed that engage respondent interest and cooperation by focusing on the legitimacy and importance of the study. The letters will provide advance notice of the survey contact and inform households about the purpose of the survey.


Two-dollar pre-paid incentive. All households will be sent $2 at the initial contact by mail (see discussion below).


Flexibility in Scheduling Interviews. In situations where a telephone respondent is unavailable, an appointment will be entered into the CATI management system with notations on the best time to reach the respondent.


Follow-up telephone contacts. Households that initially do not respond will be followed up. On the telephone, this will take the form of making multiple attempts to reach households that do not answer the phone. These follow-up attempts will be made on different days and at different times to maximize the chances of finding a person at home. Those who refuse during an initial telephone contact attempt will be held for a minimum of 13 days before contact is attempted by an interviewer again. During this hold period the refusing household will be mailed a letter in an attempt to convert the refusal before the next interviewer contact (Attachment 5). The content of the letter will focus on the legitimacy and importance of the study. The letter will also address issues related to privacy and confidentiality of data. Households assigned to the telephone-interviewer-to-IVR condition will be asked their reasons for refusal or break-off after transfer to the IVR system. Interviewers will be trained to address common issues and motivate participation.


Follow-up contacts for IVR. For the mail contacts, multiple mailings will be completed. The first mailing will be a request to complete the survey. The second contact, mailed one week later, will be a reminder postcard. The third contact, mailed two weeks after the postcard, will be a second request to complete the survey. This request will be mailed using priority mail.




6. Payments to Respondents

As noted above, the study is proposing two types of incentives. The first type is to provide all sampled households with $2 at the initial survey request. For an RDD survey, prior research has found that an incentive of this size increased response rates, on average, by 5.4 percentage points (Cantor et al., 2007: Table 22.2). A recent experiment testing a $2 incentive for a mail survey found the increase to be approximately 10 percentage points (Cantor et al., 2008). Church (1993) reports an effect size of almost 20 percentage points, although with varying incentive amounts. An incentive of this type has also been found to reduce nonresponse error by bringing in populations that traditionally have low response rates to telephone and mail surveys (Dillman, 1997; Hicks et al., 2008).


If the IVR is to be used as a replacement for a local area survey or as a supplement to more expensive interviewer-based methods (e.g., CATI; CAPI; ACASI), it is important to get a realistic idea of the response rates that can be achieved. As noted above, a small incentive can significantly increase this rate. Within the context of a rotating panel design, for example, increasing the response rate by 10 to 20 percentage points significantly decreases the amount of in-person follow-up that would have to be done and would more than offset the additional cost of the incentive (e.g., Link et al., 2001). Similarly, if the IVR is used as a way to generate local area estimates, rather than a paper or telephone survey, a token incentive that maximizes the response rate will more than pay for itself relative to the cost of completing a mail or telephone survey.


A second reason to include an incentive is that it will significantly increase the power of the analysis of victimization rates and comparisons of the demographic distributions. Each of these relies on the number of completed interviews. For example, if the response rate to the RDD survey is 15%, rather than 20%, the number of completed interviews would decrease by 25%. With increases in response rates of as much as 20 percentage points, a $2 incentive is an efficient way to maximize statistical power for the analysis, as well as reduce potential bias in the estimates.


The second type of proposed incentive is to promise $20 to those receiving the IVR request in the mail if they complete the survey. The evidence on the effectiveness of this type of incentive for mail surveys is decidedly mixed (Church, 1993), and its extension to an IVR request has not been tested. A perceived barrier to the use of an IVR is getting respondents to call the 800 number. This is different from a mail or RDD survey, where the respondent can begin the response task without any further action; for a mail survey, for example, the questionnaire is readily available as soon as the package is opened. A promised incentive might provide the respondent with additional motivation to make the call to take the survey. The proposed experiment will test this hypothesis. If successful, a promised incentive could prove to be an efficient way to get respondents to use the IVR.


7. Analysis Plan

The analysis will be centered on addressing questions about response rates, the usability of the IVR, and the effects on the measurement of victimization. The discussion of the analysis is organized around the research questions posed in Section 3 above.


  1. Can the NCVS questionnaire be adapted for the IVR?

Preliminary answers to this question will be gathered as part of the usability testing that is being completed in developing the IVR instrument. One problem with this type of testing is that it is conducted in a relatively artificial environment. It relies on volunteers who are paid for their participation. They are observed as they go through the interview. For the field test covered under the present OMB clearance request, the study will assess how the IVR instrument works under field conditions where respondents are not as motivated to cooperate as they are in the usability lab.


To assess whether the NCVS can be adapted for a practical application of the IVR, we will examine a number of indicators based on different forms of paradata that are indicative of issues respondents might have when moving through the system. The indicators that will be analyzed include the following (a brief tabulation sketch follows the list):


  1. Timings – total time, time per section, extent users are “timing out” for particular items¹

  2. Number of breakoffs during the interview

  3. The number of times respondents asked for help; the number of times they backed up to repeat text

  4. Whether respondents interrupted the voice to answer the question. Analysis will be specifically targeted to the screener when the list of cues is being read.
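As a sketch, the indicators might be tabulated from session-level paradata as follows (the records and field names below are hypothetical, for illustration only):

    # Hypothetical session-level paradata used to illustrate the tabulation.
    sessions = [
        {"minutes": 18.5, "broke_off": False, "help_uses": 0, "backups": 1},
        {"minutes": 6.2, "broke_off": True, "help_uses": 2, "backups": 0},
        {"minutes": 21.0, "broke_off": False, "help_uses": 1, "backups": 3},
    ]

    n = len(sessions)
    completes = [s for s in sessions if not s["broke_off"]]
    print(f"breakoff rate: {(n - len(completes)) / n:.1%}")
    print(f"mean minutes among completes: "
          f"{sum(s['minutes'] for s in completes) / len(completes):.1f}")
    print(f"share using help at least once: "
          f"{sum(s['help_uses'] > 0 for s in sessions) / n:.1%}")
    print(f"mean backups per session: {sum(s['backups'] for s in sessions) / n:.2f}")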

As noted in the data collection section, respondents will be administered a short user satisfaction survey at the end of the interview. The survey will ask about overall satisfaction and about specific aspects of taking the survey (entering data, understanding the questions). The survey will also ask respondents to provide verbatim responses to describe problems they encountered when going through the questionnaire. Tabulating the levels of satisfaction for particular components of the interview will provide an indication of what the strengths and weaknesses are of the IVR application.


The remaining research questions involve examination of the survey results and the effects of different experimentally manipulated characteristics. In the remainder of this section, we discuss the analyses that address the remaining research questions from Section 3.


  2. What are the response rates for the IVR and how do these rates vary by the mode used to contact sampled respondents – mail or telephone? What mode of interview yields the highest response rate?

This question addresses what the response rate is for an IVR survey, how it compares to a telephone interview, and whether there are differences by mode of contact. With respect to comparing the mail and telephone modes of contact, the analysis will collapse all of the mail IVR cases into a single group (n = 10,000) and the telephone IVR cases into a single group (n = 2,000). The point estimates for the response rates for these two groups have half-width 95% confidence intervals of 0.9% for the mail IVR (assuming a 30% response rate) and 1.7% for the telephone IVR (assuming a 20% response rate). The response rate for the telephone interview sample has a half-width 95% confidence interval of 2.4%. These response rate estimates will thus provide reasonably reliable estimates of the level of response for each of these methods.
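These half-widths follow from the usual normal approximation for a proportion, 1.96 * sqrt(p(1-p)/n); the sketch below reproduces them under a simple-random-sampling assumption (no design effect):

    # Half-width of a 95% confidence interval for a proportion, assuming
    # simple random sampling: 1.96 * sqrt(p * (1 - p) / n).
    import math

    def half_width(p, n):
        return 1.96 * math.sqrt(p * (1 - p) / n)

    print(f"mail IVR       (p=0.30, n=10,000): {half_width(0.30, 10000):.2%}")
    print(f"telephone IVR  (p=0.20, n=2,000):  {half_width(0.20, 2000):.2%}")
    print(f"telephone int. (p=0.20, n=1,000):  {half_width(0.20, 1000):.2%}")
    # Prints 0.90%, 1.75% and 2.48%, matching the quoted 0.9%, 1.7% and 2.4%
    # to rounding.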


Analysis will then focus on comparing the response rates for the different methods of interviewing. Table 5 summarizes the power for these comparisons. One comparison will be between the mail IVR and the telephone interview. Assuming a telephone interview response rate of 20%, these comparisons will detect around a 4 percentage point difference with 80% power (assuming a two-tailed test at p < 0.05) for the three comparisons involving the mail IVR, the mail IVR with no incentive, and the telephone IVR.
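A sketch of the underlying calculation, using the standard normal-approximation formula for the minimum detectable difference between two proportions (the base rate and sample sizes in the example are taken from the text):

    # Minimum detectable difference between two proportions, two-tailed test:
    #   MDD ~= (z_{1-alpha/2} + z_{power}) * sqrt(p(1-p)/n1 + p(1-p)/n2)
    import math
    from scipy.stats import norm

    def mdd(p, n1, n2, power=0.80, alpha=0.05):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z * math.sqrt(p * (1 - p) / n1 + p * (1 - p) / n2)

    # Telephone interview (n=1,000, base rate ~20%) vs. mail IVR (n=10,000):
    print(f"{mdd(0.20, 1000, 10000):.1%}")  # ~3.7%, i.e., around 4 percentage points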


Table 5. Power of Testing for Differences in Response Rates and Demographic Distributions by Interview Mode



A related question to the response rate is whether the demographics of the respondents differ by these conditions. Tests of differences in demographic characteristics will not be as powerful as those for response rates because they rely on completed surveys. For purposes of calculating power, the demographics examined were age (18-44 vs. 45+), gender, and race (nonwhite vs. other). Each of these groups is expected to be approximately 50% of the populations in the two cities. For the comparison of the IVR with the mail modes of contact to the telephone, the detectable difference is approximately 10 to 11 percentage points (see second panel of Table 5). The detectable difference is slightly higher, 12 percentage points, when comparing the telephone IVR mode of contact.


A third type of analysis will assess whether contacting respondents by mail for the IVR application yields a different response rate than contacting them by telephone (Table 6). The mail IVR will have a sample size of 10,000 overall and 5,000 within each incentive condition. The telephone contact IVR will be based on a sample size of 2,000. Assuming a response rate of 25% for one of the conditions, this analysis will detect a 3 percentage point difference with 80% power, regardless of the mail incentive condition.


Table 6. Power of Testing for Differences in Response Rates and Demographic Distributions by Contact Mode



Comparison of the demographic distributions across modes of contact for the IVR has slightly higher power than the same comparison across modes of interview. A difference of between 7.5 and 8 percentage points can be detected with 80% power.


  3. Is it possible to effectively encourage sampled households to complete the interview when the initial contact is by mail?

This analysis will focus on the effectiveness of the different methods intended to increase the response rate to the mail IVR. The experimental design has four conditions representing a crossing of two incentive conditions ($0, $20) and two insert conditions (with insert, without insert). Each of these cells has 2500 sample cases. Ignoring the insert treatment yields 5000 sample cases for each incentive condition, providing 80% power to detect a 2.4 percentage point difference. The power for the insert is identical to this. If one tests within conditions (e.g., incentive effect for the “no insert” condition), the design can detect a 3.5 percentage point difference in response rate with 80% power.


  4. Are there differences in respondent acceptance between speech IVR and touchtone IVR?

The first part of this analysis will analyze the response rate across the speech and TDE IVR treatments. Those contacted by mail have 5,000 in each of the two conditions (speech vs. touchtone), which allows detection of a 2.6 percentage point difference with 80% power. The telephone contacts have 1,000 in each condition. This analysis can detect a 5.1 percentage point difference with 80% power.


User acceptance will also be measured from the debriefing interview, which asks about satisfaction, and from the paradata (e.g., backing up, asking for help, breakoffs). One question will be whether there is a difference in satisfaction between the speech and TDE methods of entry. These analyses are based on completed interviews and will be split between those contacted by telephone (n = 200 per group) and those contacted by mail (n = 1,500 per group). When computing the power for these analyses, a 10% and a 40% level of satisfaction (or dissatisfaction) were used. For the 10% level, the telephone sample will not be able to detect anything but a very large effect of 8.5 percentage points. The mail sample, however, will be able to detect an effect of 3.1 percentage points. For the 40% level of satisfaction, the detectable differences are 13.9 and 5.1 percentage points for the telephone and mail, respectively.


  5. Does IVR lead to different victimization rates from a telephone interview?

This analysis will evaluate whether victimization rates differ between the IVR and the telephone. To calculate the precision and power, it was assumed that the sample would yield, on average, a victimization rate of 30%, 20% and 10% for total, household and personal crimes, respectively. These rates were estimated using reporting rates observed in prior victim surveys using police records (Murphy and Dodge, 1981).


Table 7 provides the precision of the three different crime estimates for each of the three IVR conditions. The half-width confidence interval is between 1.5% and 4.9%, depending on the type of crime and condition.


Victimization rates for each of these IVR conditions will be compared to the rates from telephone interviews. These comparisons will detect differences of around 10 percentage points for total crime, 8 percentage points for household crime, and 6 percentage points for personal crime with 80% power (Table 8). While these are not highly sensitive tests, they will detect large differences by mode. For example, with a 30% total rate, a 10 percentage point difference is an effect of approximately 33%. Differences of this size were observed in other NCVS experiments, including tests of the new screener and the implementation of CATI during the redesign of the 1990s. For example, the redesigned screener boosted the personal crime rate by 44% and the household crime rate by 23% (Kindermann et al., 1997). Slightly larger increases were observed when comparing centralized telephone interviewing to decentralized telephone interviewing (Hubble and Wilder, 1988). Of course, if one hypothesizes a one-tailed test (e.g., that the IVR will yield higher rates), the power increases significantly.



  6. Is there a difference in victimization rates for speech and TDE modes of entry?

The total number of interviews, by mode of entry, is 1,700 across the telephone and mail methods of contact. Testing the difference between modes of entry (speech vs. TDE) will detect differences of 4.4, 3.9 and 2.9 percentage points for total, household and personal crimes, respectively. The power when restricting the analysis to the mail mode of contact is similar. The detectable differences increase substantially, to 13.0, 11.3 and 8.5 percentage points, when analyzing the telephone mode of contact alone.


Table 7. Half-Width Confidence Intervals for Victimization Rates for IVR by Contact Methods



Table 8. Power of Comparisons of Victimization Rates by Mode of Contact




8. Informed Consent and IRB Review

The contact letters and the script read to respondents on the telephone provide the elements of informed consent. The initial letters (see Attachment 4 for telephone and Attachment 6 for mail IVR) provide the purpose of the survey, the voluntary nature of the study, how the respondent’s address was included, a procedure for refusing to participate, and a number to call with questions about the study. The script read to respondents on the telephone repeats much of this information and additionally provides the length of the survey, as well as informing respondents of the availability of toll-free hotline numbers (see introductions in Attachment 1). A summary of these elements also appears in two flowcharts depicting the data collection process for each mode of contact (Attachments 14 and 15).


Using records to either seed or exclusively draw the sample for a survey has been done on recent surveys. One example is the Early Childhood Longitudinal Study: Birth Cohort (ECLS-B), conducted by the National Center for Education Statistics, which did not inform sample members how they were included in the study. A second study, on firearm ownership (Smith, 2003), provided a general description of the sample. Our design uses this latter approach, informing sampled members that their address was included in the survey from one of two sources: a list of addresses available from the post office or police records. Keeping the source less specific will reduce the chances of cueing respondents about the incident in the police record.²


The Westat IRB is currently reviewing this application. OMB will be provided the final letter of approval when it is received.



9. Data Confidentiality and Data Security

The data collected for this project are protected under the Bureau of Justice Statistics statutory protection. This protects the data from potential subpoena (42 USC 3789g).


Access to Westat’s secure computer systems is password protected. All server and network data storage areas are protected by access privileges, which are assigned by the appropriate system administrator. All systems are backed up on a regular basis and are kept in a secure storage facility.


To protect the identity of NCVS respondents, no identifying information will be kept on the final survey file. Identifying information includes the address of the sampled unit and the telephone number. The survey will not be collecting the name of any of the respondents. The identifying information will be deleted once the analysis file has been created and the link is no longer needed. We estimate this to be 3 months after data collection has ended.


The final data sets, without the above identifiers, will be delivered to BJS at the end of the project. Once these data are delivered, all copies at Westat will be destroyed.


With respect to personnel, all Westat employees are required to sign a pledge of confidentiality. This pledge requires employees to maintain confidentiality of project data and to follow the above procedures when handling confidential information.



10. Estimate of Burden Hours

Burden on this study consists of completing the NCVS interview. There will be two types of interviews. The CATI contact will first involve asking a representative of the household to identify the person with the next birthday. We estimate this to take approximately 3 minutes, which includes screening the eligibility of the household and of the person answering the phone (e.g., not a business; identifying a person 18+ who lives in the household; selecting the person with the next birthday). We estimate that the actual interview will take, on average, 20 minutes. This figure is taken from timings from the NCVS. This results in a total of 23 minutes for those households contacted by phone. Those contacted by letter who call in will only experience the NCVS interview (20 minutes).


As shown in Table 4, we are projecting 600 completed interviews from the telephone contact and 3,000 from the mail contact. This results in 230 hours for the former [(600 x 23)/60 = 230] and 1,000 hours for the latter [(3,000 x 20)/60 = 1,000], for a total of 1,230 hours of burden.



11. Persons Consulted for the Study


David Cantor

Principal Investigator

Westat

1600 Research Blvd

Rockville, MD 20850


Pat Dean Brick

Project Manager

Westat

1600 Research Blvd

Rockville, MD 20850


Doug Williams

Survey Methodologist

Westat

1600 Research Blvd

Rockville, MD 20850


Graham Kalton

Chief Statistician

Westat

1600 Research Blvd

Rockville, MD 20850


Richard Sigman

Senior Statistician

Westat

1600 Research Blvd

Rockville, MD 20850


Roger Tourangeau

Director and Research Professor

Joint Program in Survey Methodology

University of Maryland

1218 LeFrak Hall

College Park, MD 20742


Fred Conrad

Research Scientist

PO Box 1248

Institute for Social Research

University of Michigan

Ann Arbor, MI 48106



Michael Rand

Chief, Victimization Statistics Branch

Bureau of Justice Statistics

810 Seventh Street, NW

Washington, DC 20531


Shannan Catalano

Statistician

Bureau of Justice Statistics

810 Seventh Street, NW

Washington, DC 20531



References


Bailey, L., Moore, R.F. and B.A. Bailar (1978) “An Interviewer Variance Study for the Eight Impact Cities of the National Crime Survey Cities Sample.” Journal of the American Statistical Association, 73: 16-23.


Beach, S.R., Schulz, R., Degenholtz, H.B., Castle, N.G., Rosen, J., Fox, A.R. and R.K. Morycz (2010) “Using Audio Computer-Assisted Self-Interviewing and Interactive Voice Response to Measure Elder Mistreatment in Older Adults: Feasibility and Effects on Prevalence Estimates” Journal of Official Statistics, Vol. 26: 507–533


Bloom, J. (2008). The Speech IVR as a Survey Interviewing Methodology. In F. G. Conrad & M. F. Schober (Eds.), Envisioning the survey interview of the future (pp.119-136). Hoboken, NJ: Wiley and Sons, Inc.


Blumberg, S.J. and J.V. Luke (2009) “Wireless Substitution: Early Release of Estimates from the National Health Interview Survey, January–June 2009.” National Center for Health Statistics.


Brick, J.M. and D. Williams (2009) “Reasons for Increasing Nonresponse in U.S. Household Surveys.” Paper presented at the Workshop of the Committee on National Statistics, Washington, DC, December 14.


Bushery, J., Cowan, C. and L. Murphy (1978) “Experiments in Telephone-Personal Visit Surveys.” Proceedings of the Survey Research Methods Section of the American Statistical Association. Accessible at http://www.amstat.org/sections/srms/Proceedings/, last accessed March 4, 2011.


Cantor, D., R. Sigman and D. Han (2008) “Pilot of a Mail Survey for the Health Information National Trends Survey” Paper presented at the 2008 meeting of the American Association for Public Opinion Research, May 15-18, New Orleans, LA.


Cantor, D., O’Hare, B. and K. O’Connor (2007) “The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys” pp. 471-498 in J.M. Lepkowski, C. Tucker, J.M. Brick, E. de Leeuw, L. Japec, P.J. Lavrakas, M.W. Link and R.L. Sangster (Eds.), Advances in Telephone Survey Methodology. New York: John Wiley and Sons, Inc.


Church, A. (1993) “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly 57: 62-79.


Cranford, J.A., Tennen, H. and R.A. Zucker (2010) “Feasibility of Using Interactive Voice Response to Monitor Daily Drinking, Moods, and Relationship Processes on a Daily Basis in Alcoholic Couples.” Alcoholism: Clinical and Experimental Research 34: 499-508.


Currivan, D.B., Nyman, A.L., Turner, C.F. and L. Biener (2004) “Does Telephone Audio Computer-Assisted Self-Interviewing Improve the Accuracy of Prevalence Estimates of Youth Smoking? Evidence from the UMass Tobacco Study.” Public Opinion Quarterly 68: 542-564.




Czaja, R. and Blair, J. (1990). Using Network Sampling in Crime Victimization Surveys. Journal of Quantitative Criminology, vol. 6, pp. 185-206.


Dillman, D.A. (1997) “Token Financial Incentives and the Reduction of Nonresponse Error in Mail Surveys.” Proceedings of the Section on Government Statistics, pp. 200-205. Alexandria, VA: American Statistical Association.


Dillman, D. A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., Berck, J., & Messer, B. L. (2009) Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the Internet. Social Science Research, 38, 1-19.


Dillman, D. and J. Tarnai (1991) “Mode Effects of Cognitively Designed Recall Questions: A Comparison of Answers to Telephone and Mail Surveys.” In Biemer, P., et al. (Eds.), Measurement Errors in Surveys. New York: John Wiley, pp. 73-93.


Groves, R. and D. Cork (2008) Surveying Victims: Options for Conducting the National Crime Victimization Survey. Washington, DC: National Academies Press.


Han, D., D. Cantor and P. Dean Brick (2010) “Findings From a Two Phase Mail Survey For a Study of Veterans” Paper presented at the American Association for Public Opinion Research, Chicago, IL, May.


Hicks, W., Cantor, D., St. Clair, A., Fee, R. and P. Rhode (2008) “A Tale of Two Methods: Comparing Mail and RDD Data Collection for the Minnesota Adult Tobacco Survey III.” Paper presented at the International Total Survey Error Workshop, Raleigh, North Carolina.


Hubble, D., & Wilder, B. E. (1988). Preliminary results from the National Crime Survey CATI Experiment. Proceedings of the American Statistical Association: Survey Methods Section, New Orleans, LA.


Kindermann, C., Lynch, J.P. and D. Cantor (1997) “Effects of the Redesign on Victimization Estimates.” Bureau of Justice Statistics, National Crime Victimization Survey, NCJ-164381.


Kreuter F., Presser S., and Tourangeau, R. (2008). Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity. Public Opinion Quarterly, 72, 847-865.


Lee, K.M. and J. Lai (2005) “Speech Versus Touch: A Comparative Study of the Use of Speech and DTMF Keypad for Navigation.” International Journal of Human-Computer Interaction 19: 343-360.


Link, M.W., Malizio, A.G. and T.R. Curtin (2001) “Use of Targeted Monetary Incentives to Reduce Nonresponse in Longitudinal Surveys.” Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Montreal, Quebec, Canada.


Miller, P. and Groves, R. (1985). Matching Survey Responses to Official Records: An Exploration of Validity in Victimization Reporting. Public Opinion Quarterly, vol. 49, pp. 366-380.

Mingay, D. J. (2000). Is telephone audio computer-assisted self interviewing (T-ACASI) a method whose time has come? Paper presented at the 55th Annual Conference of the American Association for Public Opinion Research & World Association for Public Opinion Research, Portland, Oregon.


Mirrlees-Black, C. (1999). Domestic Violence: Findings from a New British Crime Survey Self-Completion Questionnaire. Home Office Research Study 191. London: Home Office.


Montaquila, J.M., Brick, J.M., Hagedorn, M.C. and D. Williams (2010) “Maximizing Response in a Two-Phase Survey with Mail as the Primary Mode.” Paper presented at the American Association for Public Opinion Research, Chicago, IL, May.


Murphy, L. and R. Dodge (1981) “The Baltimore Recall Study” in Lehnen, R.G. and W. Skogan (eds) The National Crime Survey: Working Papers, Volume I, Current and Historical Perspectives, pp. 16 – 21, NCJ 75374, December.


Pew Internet & American Life Project (2010) “Change in Internet Access by Age Group, 2000–2010.” Infographic last accessed on March 29, 2011 at http://www.pewinternet.org/Infographics/2010/Internet-acess-by-age-group-over-time-Update.aspx#


Rosen, R., Clayton, R., and Wolf, L. (1993). Long-term Retention of Sample Members under Automated Self-Response Data Collection. Proceedings of the Section on Survey Research Methods, American Statistical Association, pp. 748-752.


Schneider, S.J., Cantor, D. and T. Hagerty-Heller (2005) “Interactive Voice Response” pp. 349-356 in Polling America: An Encyclopedia of Public Opinion (Eds. Radcliff, B., Best, S.), Greenwood Press.


Singer, E. (2002) “The Use of Incentives to Reduce Nonresponse in Household Surveys.” pp. 163-178 in Groves, R.M., Dillman, D.A., Eltinge, J.L. and R.J.A. Little (Eds.), Survey Nonresponse. New York: John Wiley and Sons, Inc.


Smith, T. (2003) “A seeded sample of concealed-carry permit holders.” Journal of Quantitative Criminology 19(4): 441 – 445.


Srinivasan, R., & Hanway, S. (1999, May). A new kind of survey mode difference: experimental results from a test of inbound voice recognition and mail surveys. Paper presented at the 54th Annual Conference of the American Association for Public Opinion Research, St. Pete Beach, FL.


Suhm, B., Bers, J., McCarthy, D., Freeman, B., Getty, D., Godfrey, K., and Peterson, P. 2002. “A Comparative Study of Speech in the Call Center: Natural Language Call Routing vs. Touch-Tone Menus,” TOCHI, 4(1).


Tarnai, J. and D. A. Dillman (1992) “Questionnaire context as a source of response differences in mail vs. telephone surveys.” Pp. 115-129, in Schwarz, Norbert and Seymour Sudman (eds) Context effects in Social and Psychological Research. New York, NY: Springer-Verlag.


Tourangeau, R., Steiger, D. M., and Wilson, D. (2002). Self-administered questions by telephone. Public Opinion Quarterly, 66, 265-278.


Turner, C., Miller, H., Smith, T., Cooley, P. and M. Rogers (1996) “Telephone Audio Computer-Assisted Self-Interviewing (T-ACASI) and Survey Measurements of Sensitive Behaviors: Preliminary Results.” In R. Banks, J. Fairgrieve, L. Gerrard et al. (Eds.), Survey and Statistical Computing. Chesham, Bucks, U.K.: Association for Survey Computing.


Villarroel, M.A., Turner, C.F., Eggleston, E., Al-Tayyib, A., Rogers, S.M., Roman, A.M., Cooley, P.C. and H. Gordek (2006) “Same-Gender Sex in the United States: Impact of T-ACASI on Prevalence Estimates.” Public Opinion Quarterly 70: 166-196.


Westat (2009) Health Information National Trends Survey (HINTS) 2007 Final Report. Submitted to the National Cancer Institute, February 2009. Available at http://hints.cancer.gov/hints/instrument.jsp, last accessed February 9, 2011.


Westat (2010) “NCVS Task 4 Report: Summary of Options Related to Local Area Estimates.” Report submitted to the Bureau of Justice Statistics, May 19, 2010.


¹ If there is no response after 30 seconds, the IVR repeats the question.

² One can also imagine that telling respondents the address was from police records might also lead to some confusion. Some household members may not be aware that a crime was reported to the police. Some respondents may not even make a direct connection between their prior report to the police and the survey. Finally, if a different household has moved into the unit, they may be totally unaware of the prior report.
