MEMORANDUM
TO: Lynn Murray
Clearance Officer
Justice Management Division
THROUGH: James P. Lynch
Director
FROM: Shannan Catalano
Statistician, Project Manager
DATE: December 16, 2011
SUBJECT: BJS Request for OMB Clearance for Field Testing under the National Crime Victimization Survey (NCVS) Redesign Generic Clearance, OMB Number 1121-0325.
The Bureau of Justice Statistics (BJS) requests clearance for field test tasks under the OMB generic clearance agreement (OMB Number 1121-0325) for activities related to the National Crime Victimization Survey (NCVS) Redesign Research program. BJS, in consultation with Research Triangle Institute (RTI) under cooperative agreement (Award 2008-BJ-CX-K063 National Crime Victimization Survey Mode Research), has planned a field test of self-administered survey modes to test lower cost complements to current data collection techniques in the NCVS.
Purpose of the Research
This field test, to be carried out by RTI, supports the NCVS program by exploring self-administered survey methods that increase survey participation while maintaining affordable costs and data quality.
This includes providing respondents with more options for participation and testing whether nominal incentives increase subsequent survey participation when self-administered modes such as inbound CATI and Web are used.
The objective is to examine the use of inbound CATI and Web modes as complementary forms of data collection to the interviewer-based methods currently used in the NCVS. Inbound CATI and Web modes have the potential to increase survey participation by making participation easier, allowing respondents discretion as to when and where they respond to the survey. Self-administered modes also have the potential to collect better information on the more sensitive items, and they offer a less expensive mode of collection that might be applied to the core NCVS. If these methodologies prove feasible, they could have a significant effect on the resources available for other components of the NCVS.
BJS will use the findings from this research to decide whether Web self-administration is viable for the NCVS. The Web application is promising due to its automated format. If findings indicate that Web administration is well received by respondents, then BJS would consider the incorporation of this mode into the survey, perhaps in later interviewing cycles when rapport has been established with respondents during previous in-person interviews.
Of more promise to the NCVS program is the addition of inbound CATI as a mode of data collection. CATI historically relies on outbound phone calls to sampled households from centralized interviewing facilities; inbound CATI allows respondents to call the centralized facility to initiate the interview. BJS is particularly interested in the utility of inbound CATI as a method of increasing respondents' convenience and willingness to participate in the NCVS. The re-introduction of outbound CATI to the NCVS program is currently under consideration at BJS, and should respondents prove receptive to inbound CATI in the current research, BJS will decide whether inbound CATI should be introduced in conjunction with outbound CATI.
BJS considers the testing of nominal incentives as a secondary benefit of the mixed-mode research. Incentives have never been used in the NCVS. However, the mixed-mode research design is well-suited to answer the question concerning the utility of incentives in self-administered surveys, particularly those utilizing Web and inbound CATI. The Wave 2 interviews will provide a follow-up measure to test the effects of Wave 1 contacts, including the mode of interviewing and whether respondents and households received an incentive amount during the first interview.
The following questions will be addressed by this research:
How do alternative mixed-mode designs compare to the current design in terms of response rate and cost?
Does initial rapport between interviewer and respondent carry over into subsequent self-administered interviews?
What portion of the household respondents will respond to an initial interview by inbound CATI, and what cost savings might be realized?
How will key survey estimates change (if at all) if different mode mixes and incentives are used?
How does the use of incentives affect interview cost or response rates within alternative modes of administration?
Are incentives effective in boosting response rates and maintaining rapport in subsequent waves?
Additionally, the feasibility of using address-based sampling (ABS) in the collection of data will be examined. To avoid confusion with the ongoing NCVS, the survey in this research is titled the Survey of Crime Victimization (SCV).
Attachment 1 provides a detailed review and discussion of the use of incentives in federal surveys. Careful consideration has been given to the use of incentives in the SCV, and our intent in Attachment 1 is not to imply comparability between the SCV and other federal surveys incorporating incentives. Rather, the purpose of Attachment 1 is to demonstrate (1) the breadth and depth of research related to the use of incentives in federal surveys, and (2) the full range of research that was considered and evaluated during the developmental stages of an SCV incentive strategy. Of particular import in developing our strategy are the recent findings from the National Household Education Survey (NHES, U.S. Department of Education) and the National Survey of Early Care and Education (NSECE, Administration for Children and Families).
The NHES field test (OMB Control # 1850-0768) is designed to conduct an incentive experiment at the topical level to further refine an optimal strategy for the use of incentives in the NHES. An advance cash incentive of $5 will be included in the first screener mailing. For those households in which a child is selected as the subject of an Early Childhood Program Participation (ECPP) or Parent and Family Involvement in Education (PFI) questionnaire, cases that responded to the first or second mailing of the screener will receive a $5 cash incentive with the topical surveys. Evidence from the 2011 NHES field test indicated that topical response rates can benefit significantly from providing later screener respondents with a larger topical incentive. To confirm this finding, NHES will subsample late screener respondents (those responding to the third or fourth questionnaire mailing) to receive either a $5 or $15 cash incentive with their first topical survey mailing.
Similarly, the incentive strategy to be deployed for the NSECE (OMB Control # 0970-0391) is informed by outcomes of previous incentive experiments implemented during its 2011 field test. For the mailed Household Screener, NSECE will include a $2 bill in the first mailing. Results from the field test indicated that a $1 advance outperformed a $5 incentive in a follow-up mailing, and NHES has found even greater success with a $2 bill, the incentive proposed for the NSECE main study mail effort.
Additionally, NSECE will mail an additional prepaid incentive of $5 to households that return the mail Household Screener and are eligible for the Household Survey only or for both the Household and Home-based Provider Surveys. This incentive strategy is intended to build cooperation with eligible households and engage respondents with the study prior to the start of in-person data collection.
An anticipated 9,844 returned mail screeners will be eligible for the Household Survey. Prior to the start of Household Survey data collection, these households will receive an advance letter with an enclosed $5 bill thanking them for returning the screener and informing them that they will be asked to answer some follow-up questions. The letter will also provide a toll-free number so that eligible household members can call to make arrangements for participation in the interview. A significant fraction of these cases will be attempted first by computer-assisted telephone interview, going to the field only if needed. Cases with no available phone number will be visited in person as a follow-up to the incentive mailing.
In developing the SCV incentive strategy we considered that the use of prepaid incentives has been repeatedly endorsed in the literature (Singer, 2002).[1] However, the design of the SCV makes a prepaid incentive approach infeasible. Prepaid incentives are generally sent to a household in the expectation that a member of the household will cooperate with the survey request, and the person responding to the initial request can be any one person residing in the household. This is a key distinction for the SCV, which is designed to elicit survey responses from multiple, initially unknown household members at the first contact--in this case, all adults age 18 and older.
Based on the study design, estimated respondent burden, and the sampling methodology--which involves the selection of all age-eligible adults in each sampled household--we believe a $10 promised incentive is the optimal amount for this research. Hence, the proposed experiment will test two incentive conditions of $0 and $10, with the same households being offered the $10 incentive at Waves 1 and 2.
The $10 level was selected because prior studies have found significant effects for promised incentives (compared to a no-incentive condition) of at least $5, with most being $15 or more (Yu and Cooper, 1983; Strouse and Hall, 1997; Singer et al., 1998; Singer, 2000; Cantor et al., 2003). As this research has shown, offering a smaller amount may yield lower response rates than the proposed $10 amount, complicating the mode comparisons that are critical to this mixed-mode evaluation.
Additionally, the $10 promised incentive amount has not been tested as extensively as a $5 prepaid incentive and we believe it has the most potential to contribute to our knowledge of how to increase response rates for the self-administered modes and to secure the cooperation of multiple household members over multiple study waves. We are particularly interested in whether the promised incentive works differently in eliciting response via inbound CATI at Wave 1 versus the interviewer-assisted modes, and at Wave 2 when respondents are offered the flexibility of an inbound CATI or Web survey mode. Attachment 1 provides a more detailed justification and discussion of these issues.
Table 1 summarizes the burden for the field test, which consists of screening sampled addresses for eligibility and completing the NCVS interview. There will be three types of interviews: CAPI, CATI, and Web. Wave 1 will involve CAPI and CATI (inbound and outbound) interviews with household and individual respondents. Wave 2 will involve inbound CATI and Web interviews with Wave 1 participants. The total estimated burden is 1,786 hours.
Table 1. Data Collection Burden Estimates
| Condition – Respondent Type | Condition 1 Household Respondent (Wave 1) | Condition 1 Individual Respondent (Wave 1) | Condition 2 Household Respondent (Wave 1) | Condition 2 Individual Respondent (Wave 1) | All Conditions Household Respondents (Wave 2) | All Conditions Individual Respondents (Wave 2) | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Data Collection Period (months) | 5 | 5 | 5 | 5 | 5 | 5 | 10 |
| Total Sampled Addresses | 1,920 | - | 1,920 | - | - | - | 3,840 |
| Address Screening (min) | 2 | - | 2 | - | - | - | - |
| Total Eligible Sample | 1,594 | - | 1,594 | - | 2,661 | - | 5,849 |
| Total Completed Interviews | 1,402 | 691 | 1,211 | 597 | 1,597 | 627 | 6,125 |
| Field Interviews | 1,402 | 622 | 787 | 388 | - | - | 3,199 |
| Interview length (min) | 18 | 16 | 18 | 16 | - | - | - |
| Telephone Interviews | - | 69 | 424 | 209 | 1,277 | 503 | 2,482 |
| Interview length (min) | - | 16 | 18 | 16 | 16 | 16 | - |
| Web Interviews | - | - | - | - | 160 | 62 | 222 |
| Interview length (min) | - | - | - | - | 15 | 15 | - |
| Total Burden in Minutes | 29,076 | 11,056 | 25,638 | 9,552 | 22,832 | 8,978 | 107,132 |
| Total Burden in Hours1 | 485 | 184 | 427 | 159 | 381 | 150 | 1,786 |
1 Burden calculations assume 10% of completed interviews will contain 1 or more Crime Incident Reports.
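For transparency, the burden figures in Table 1 follow directly from the interview counts and per-interview lengths shown above (screening minutes for sampled addresses plus minutes for completed interviews). The short Python sketch below illustrates the arithmetic for two representative columns; it is provided only as a worked example, and the variable and function names are ours, not part of the study materials.

```python
# Illustrative reconstruction of the Table 1 burden arithmetic.
# Counts and per-interview minutes are taken from Table 1; the
# names below are ours, not from the study materials.

SCREENING_MIN = 2               # minutes per sampled address
ADDRESSES_PER_CONDITION = 1920  # sampled addresses per condition

def household_burden_minutes(field_n, field_min, phone_n, phone_min):
    """Screening burden plus completed field and telephone interviews."""
    screening = ADDRESSES_PER_CONDITION * SCREENING_MIN
    return screening + field_n * field_min + phone_n * phone_min

# Condition 1 household respondents: 1,402 CAPI interviews at 18 minutes
print(household_burden_minutes(1402, 18, 0, 18))    # 29,076 (matches Table 1)

# Condition 2 household respondents: 787 CAPI and 424 CATI, both 18 minutes
print(household_burden_minutes(787, 18, 424, 18))   # 25,638 (matches Table 1)

# Grand total: 107,132 minutes, or about 1,786 hours
print(round(107132 / 60))                           # 1786
```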
Cognitive testing of a mail survey instrument was conducted between January and June 2011 to assess the feasibility of this mode for the NCVS. Results of the testing suggested that considerable reworking of the survey instrument, including rewording and restructuring of items in the Screener and possibly the Crime Incident Report (CIR), is needed to reduce burden and arrive at a mail survey that can be effectively completed in a paper-and-pencil, self-administration format. Because these issues cannot be resolved without more extensive questionnaire redesign and testing, BJS eliminated the mail survey option from the SCV experimental design. Attachment 2 provides a full report of the cognitive test findings.
Usability testing of the Web instrument was completed between January and September 2011. As with the mail survey cognitive test, a preliminary assessment, followed by two additional rounds of testing, was conducted with a total of 23 respondents. Testing focused on the respondent’s ability to log in to the survey Web site, navigate through the survey questions, back up and change answers, and log off and resume the interview. Testing also examined the respondent’s understanding of key survey terms, concepts, and questions, and the effectiveness of on-screen cues in guiding the respondent through the survey. Problems identified during the usability test, and their resolutions, are summarized below:
Problem: Respondents did not fully understand the concept of crime “incident” and how to answer Screener questions when more than one type of crime happened in a single incident.
Resolution: The survey introduction and CIR transition text were revised to emphasize the reference period and improve respondent understanding of the term “incident.”
Problem: Respondents who experienced more than one crime, in separate incidents or during a single incident, over-reported them in the Screener, resulting in the wrong number of CIRs being generated.
Resolution: Items were added at the end of the Screener to display a summary of the reported crimes and allow respondents to confirm the number of unique crime incidents before proceeding to the first CIR.
Problem: Respondents failed to recognize the relationship between gate questions in the Screener, which determined whether particular types of crimes had been experienced during the reference period, and their associated count questions.
Resolution: The Screener count questions were reworded to more closely match their associated gate questions to emphasize the relationship between these items.
Problem: Respondents needed additional cueing about the crime incident being discussed in each CIR.
Resolution: An open-ended question that captures the respondent’s description of the crime incident was moved to the beginning of each CIR to cue respondents to the crime incident being discussed on each screen.
The SCV field test instruments are provided in Attachments 3, 4 and 5. Attachment 3 contains the CATI/CAPI Address Verification and Household Enumeration Questionnaire. This instrument has been modified slightly to remove questions aimed at emancipated minors 17 years of age because the target population for this study is persons age 18 or older. Attachment 4 contains the CATI/CAPI Screener and Crime Incident Report. Minor modifications have been made to these instruments to collect the respondent’s email address to facilitate Wave 2 contact and to confirm the number of unique crime incidents being reported prior to initiating a CIR. Attachment 5 contains the Web survey instrument reflecting the resolutions to problems identified during usability testing and the addition of the email address question.
Design development began with an evaluation of research in five areas of survey operations: address-based sampling; mixed-mode surveys; self-administered modes of data collection; use of incentives; and research related to NCVS design and measurement issues (see Attachment 6, Literature Review: Examination of Data Collection Methods for the NCVS). Once strengths and weaknesses of each mode were established, emphasis shifted to the combination of modes to be tested at initial contact in Wave 1 and follow-up contact in Wave 2. Table 2 summarizes the strengths and weaknesses of both interviewer- and self-administered modes considered for this project. Attachment 7 presents a detailed discussion on the development of the design, including the consideration given to modes of data collection.
Table 2. Strengths and Weaknesses of Data Collection Modes
CAPI
Strengths: Amenable to longer interviews; allows use of visual aids; yields higher response rates; efficient in that CAPI interviewers can be cross-trained as telephone interviewers; helps build rapport for future interviews.
Weaknesses: Expensive; longer data collection periods needed.

CATI
Strengths: Less expensive than CAPI.
Weaknesses: Precludes use of visual aids; more sensitive to interview length; more partially completed interviews; lower response rates.

Web Self-Administration
Strengths: Yields more honest reporting on sensitive topics; less costly because no interviewer labor is involved; routing can be as complex as in other computer-assisted modes; length of survey is less apparent to the respondent than in mail.
Weaknesses: Language and literacy problems can be difficult to overcome; limited control over who completes the survey; best suited in combination with other modes.
The experimental design, presented in Table 3, is a mixed-mode (CATI, CAPI, and Web), multi-wave design with two experimental conditions. Within each condition, two incentive amounts ($0 and $10) are tested, resulting in a 2x2 factorial design. The experiment will be conducted in four states (Pennsylvania, Ohio, Virginia, and North Carolina1) using shortened versions of the NCVS instruments. Two data collection waves are planned. A sample of 3,840 mailing addresses will be drawn and equally allocated to the four mode and incentive groups, approximately 960 addresses per group. The design will provide sufficient power and precision to examine key estimates and comparisons (see Analysis Plan discussion, page 9).
Table 3. SCV Mixed-Mode Experimental Design
| Condition | Type of Contact | Wave 1: Household Respondent | Wave 1: Individual Household Members | Wave 2: Household Respondent | Wave 2: Individual Household Members |
| --- | --- | --- | --- | --- | --- |
| 1 | Initial Contact | CAPI | CAPI | Web and Inbound CATI | Web and Inbound CATI |
| 1 | Follow-up | None | CATI | CATI | CATI |
| 2 | Initial Contact | Inbound and Outbound CATI | Inbound and Outbound CATI | Web and Inbound CATI | Web and Inbound CATI |
| 2 | Follow-up | CAPI/CATI (if appt) | CAPI/CATI (if appt) | CATI | CATI |
Condition 1 utilizes a combination of in-person and telephone interviews to build rapport with the households at Wave 1. Outbound CATI is used as the follow-up mode for individual respondents who do not respond to the initial in-person survey request, building on the rapport established by an interviewer with the household respondent. Condition 1 ($0 incentive) is considered a control2 group because the protocol closely resembles the current NCVS collection procedures. The control condition is needed to ensure comparability between the national panel survey and the experimental conditions.3
At Wave 2, the more expensive in-person mode is eliminated to evaluate whether Wave 1 survey experience encourages respondents to participate by less costly self-administered modes. Wave 2 provides all Wave 1 participants with a choice of Web or inbound CATI as their primary survey mode. Despite its promise to decrease cost, the Web mode may not be suited for initial contact because we cannot control who responds to the survey request. However, this mode is tested in Wave 2 (along with inbound CATI) to better understand the extent to which self-administered modes would be a plausible option for subsequent waves of data collection. Outbound CATI is then used as a less costly nonresponse follow-up mode to engage interviewers in securing participation from Wave 1 respondents who do not participate via the self-administered modes.
Condition 2 utilizes a combination of inbound and outbound CATI as the primary survey mode for household and individual respondents at Wave 1, with inbound CATI introduced as a lower-cost option for household participation. Initial CATI contact is a less costly option for establishing interviewer rapport with the household, particularly if a combination of inbound and outbound calling proves effective. The goal is to determine whether the CATI efforts yield the desired response rates and are viable options for the NCVS. Given the cost differential between CATI and CAPI interviews, the proportion of people who respond via inbound or outbound CATI may be sizeable enough to reduce costs in a non-negligible way. In-person follow-up is then attempted for household members who do not respond to the initial survey request, or when a telephone number is not available or is nonworking. Once the household has been reached in person, interview appointments can be handled via CATI to minimize costs.
As in Condition 1, Web and inbound CATI will be offered as the primary survey mode for all Condition 2 respondents at Wave 2. Outbound CATI will then be used as the nonresponse follow-up mode for both household and individual respondents.
Attachment 8 provides a full discussion of the survey modes and data collection flow diagrams by condition and wave.
The target population consists of English-speaking persons 18 years and older residing in households in four states: North Carolina, Ohio, Pennsylvania, and Virginia. Selection of states for the field test was based on a mix of criteria designed to maximize the number of interviews while containing costs. The four states were selected because of their 1) proximity to RTI’s central office in North Carolina, which will minimize travel costs for field staff training and production, 2) mix of urban and rural households; and 3) lower concentrations of Hispanic households because the SCV does not include bilingual interviews.
A sample of 3,840 mailing addresses drawn from an ABS frame will be selected and equally allocated to each of the four mode and incentive groups. Power calculations indicate that an initial sample of 960 residential mailing addresses is needed to detect Wave 1 response rate differences of approximately 4 percentage points between each of the four groups with 80 percent power at the 0.05 level of significance4. Not all addresses will yield eligible households (e.g., vacancies, small businesses, and non-English speaking household members), so the sample size in each cell has been slightly increased to account for ineligible addresses. We assume that 92% of addresses selected for the sample will be households5. Because the target population for the field test is English-speaking adults 18 years of age and over, we must also adjust the sample size to account for households with no English-speaking adults. Using the average national rate of 9.5% non-English speaking adults in the U.S.6, we can expect an overall eligibility rate of about 83% (92%*90.5%). This implies that an initial sample size of 960 will yield approximately 797 eligible households for each mode and incentive combination.
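As a worked example of the eligibility arithmetic above, the short sketch below applies the rates cited in the text (the variable names are ours):

```python
# Worked example of the eligibility arithmetic described above.

household_rate = 0.92        # share of sampled addresses that are households
english_rate = 1 - 0.095     # share of households with English-speaking adults

eligibility_rate = household_rate * english_rate
print(round(eligibility_rate, 3))        # ~0.833, the "about 83%" in the text

sampled_per_cell = 960
print(round(sampled_per_cell * 0.83))    # 797 eligible households per cell
```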
The assumed eligibility rates, response rates, and sample sizes for each condition are presented in Table 4. Because of its reliance on CAPI, Condition 1 is expected to attain the highest Wave 1 household and individual response rates. (The current NCVS response rate among new households is 89.7 percent.) The expected Wave 1 household response rate for Condition 1 is 86% without the incentive and 90% with the incentive. We have assumed more conservative response rates than the current NCVS because of differences in the study design and data collection protocol.
Because bounded interviews require data from Wave 1 to be collected, we can assume that the number of completed household interviews in Wave 1 will be the starting sample size for Wave 2. For Condition 1, we have assumed conservative conditional Wave 2 household and individual response rates of 60% and 64% respectively. Because the definition of a completed interview includes a completed household interview and completed individual interviews with all additional household members, without a household interview in Wave 2 we cannot pursue individual respondents from Wave 1. The overall Wave 2 household response rate for Condition 1 is expected to be 52% (86%*60%) without the incentive, and 58% with the incentive. The overall Wave 2 individual response rate is 44% (84%*52%) without the incentive and 49% with the incentive.
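For clarity, the worked arithmetic behind the 52% and 44% figures, following the memo's multiplicative composition of rates for the Condition 1 no-incentive group, is sketched below (variable names are ours):

```python
# Overall Wave 2 rates for Condition 1, no-incentive group,
# composed as products of the rates quoted above.

wave1_hh_rate = 0.86
conditional_wave2_hh_rate = 0.60
print(round(wave1_hh_rate * conditional_wave2_hh_rate, 2))  # 0.52 -> 52%

wave1_ind_rate = 0.84
overall_wave2_hh_rate = 0.52
print(round(wave1_ind_rate * overall_wave2_hh_rate, 2))     # 0.44 -> 44%
```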
Table 4. SCV Expected Sample Sizes
| Condition / Incentive Treatment | Wave 1 Sampled Addresses | Wave 1 Eligible HHs | Wave 1 HH Rs | Wave 1 Sampled Individuals1 | Wave 1 Interview Rs2 | Wave 2 Sampled Addresses | Wave 2 HH Rs | Wave 2 Sampled Individuals1 | Wave 2 Interview Rs2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 / $0 | 960 | 797 | 685 | 1,097 | 921 | 685 | 411 | 658 | 553 |
| 1 / $10 | 960 | 797 | 717 | 1,147 | 963 | 717 | 459 | 734 | 617 |
| 2 / $0 | 960 | 797 | 598 | 956 | 892 | 598 | 359 | 574 | 479 |
| 2 / $10 | 960 | 797 | 614 | 982 | 916 | 614 | 368 | 589 | 492 |
| Total | 3,840 | 3,187 | 2,614 | 4,182 | 3,693 | 2,614 | 1,597 | 2,555 | 2,140 |
HH = Household; R = Respondent.
1 Assumes that the average number of adults in a household is 1.6.
2 Includes household respondents who provided an interview.
Because we expect Condition 1 to yield the highest household and individual interview response rates, the minimum detectable differences shown in Table 5 assume a one-tailed test for comparisons between Conditions 1 and 2 with 80 percent power at the 0.05 level of significance7.
Table 5. Minimum Detectable Response Rate Differences between Conditions, with and without Incentives
|  | Wave 1 Response Rate: Household | Wave 1 Response Rate: Individual | Conditional Wave 2 Response Rate: Household | Conditional Wave 2 Response Rate: Individual | Overall Wave 2 Response Rate1: Household | Overall Wave 2 Response Rate1: Individual |
| --- | --- | --- | --- | --- | --- | --- |
| Without Incentive |  |  |  |  |  |  |
| Sample Size2 | 797 | 1,097 | 685 | 658 | 685 | 658 |
| Response Rate3 | 86% | 84% | 60% | 50% | 52% | 44% |
| Detectable Difference4 | 4.6% | 4.0% | 6.6% | 6.8% | 6.7% | 6.7% |
| With Incentive |  |  |  |  |  |  |
| Sample Size2 | 797 | 1,147 | 717 | 734 | 717 | 734 |
| Response Rate3 | 90% | 84% | 64% | 54% | 58% | 49% |
| Detectable Difference4 | 4.1% | 3.8% | 6.4% | 6.5% | 6.5% | 6.5% |
1 The overall Wave 2 response rate accounts for nonresponse in Wave 1.
2 Eligible sample size for Condition 1.
3 Response rate for Condition 1. The individual response rate assumes that all household reference persons and 84% of other eligible persons will complete the individual interview.
4 Differences in response rates between the conditions, with and without incentives, will be detected with 80% power at the .05 (one-tailed) level of significance.
Power calculations for detecting differences in the item response rate between two mode and incentive combinations are based on 921 Wave 1 and 553 Wave 2 individual interviews per cell (i.e., the expected number of interviews from Condition 1, no incentive group). At 80% power and at the 0.05 two-tailed level of significance (as we have no reason to assume one condition will produce higher or lower item response rates than another), detectable differences between Wave 1 item response rate comparisons will range from 4.6% to 5.4% for item response rates between 75% and 85%, and 5.9% to 6.0% for item response rates between 55% and 65%. Similarly, Wave 2 item response rate comparisons will range from 6.4% to 7.5% for item response rates between 75% and 85%, and 8.1% to 8.3% for item response rates between 55% and 65%.
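The detectable differences reported here and in Table 5 were computed in SAS (see footnote 7). For readers without SAS, the sketch below is a rough Python analogue using a normal-approximation test for two independent proportions; results may differ slightly from the SAS chi-square calculations, and the function name and search approach are ours for illustration.

```python
# Rough Python analogue of the SAS POWER calculations cited in footnote 7,
# using Cohen's h with a normal-approximation test for two proportions.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def detectable_difference(n_per_group, base_rate, alpha, alternative):
    """Smallest rate difference detectable with 80% power (grid search)."""
    analysis = NormalIndPower()
    diff = 0.001
    while diff < 0.5:
        es = proportion_effectsize(base_rate, base_rate - diff)
        power = analysis.power(effect_size=es, nobs1=n_per_group,
                               alpha=alpha, alternative=alternative)
        if power >= 0.80:
            return round(diff, 3)
        diff += 0.001
    return None

# Wave 1 household comparison, no incentive: n = 797 per condition,
# base rate 86%, one-tailed test (Table 5 reports 4.6%).
print(detectable_difference(797, 0.86, alpha=0.05, alternative="larger"))
```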
An important goal of this research is to provide an evaluation of ABS frames to enable interviews to be conducted in modes other than CAPI. This is one potential means of reducing data collection costs for the NCVS in the future. In considering the use of ABS frames, one objective is to determine whether names and telephone numbers can be obtained for a high percentage of the NCVS survey population, making contact by telephone a viable option. A second objective is to determine the implications of an ABS frame on the coverage of the NCVS survey population. Attachment 9 provides a detailed discussion on the development and utilization of an ABS sampling approach in this research.
The sample will be selected from a probability proportional to size (PPS) sample of 64 Primary Sampling Units (PSUs), defined as five-digit ZIP codes, from the frame of 3,737 eligible ZIP codes. Following previous research, a systematic PPS sample will be selected with the frame first sorted by ZIP code (Madow, 1949). This approach ensures a reasonable spread of PSUs across the four states. After the PSUs are selected, each will be randomly assigned to a mode and incentive combination such that each combination receives 16 PSUs.
Within each of the 64 selected PSUs, a sample of 90 addresses will be drawn from the frame by simple random sampling of all eligible addresses in the PSU. Combined with the PPS selection of PSUs, this yields an EPSEM (Equal Probability of Selection Method) design.
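A minimal sketch of the systematic PPS selection of ZIP-code PSUs described above is shown below, under the assumption that the frame is first sorted by ZIP code as the memo specifies; the data and function names are ours for illustration.

```python
# Minimal sketch of systematic PPS selection of ZIP-code PSUs,
# following the general Madow (1949) approach described above.

import random

def systematic_pps(zip_sizes, n_psus):
    """Select n_psus ZIP codes with probability proportional to size.

    zip_sizes: list of (zip_code, address_count) pairs, pre-sorted by
    ZIP code so the systematic pass spreads the sample geographically.
    """
    total = sum(size for _, size in zip_sizes)
    interval = total / n_psus
    start = random.uniform(0, interval)
    targets = [start + k * interval for k in range(n_psus)]

    selected, cumulative, i = [], 0.0, 0
    for zip_code, size in zip_sizes:
        cumulative += size
        while i < n_psus and targets[i] <= cumulative:
            selected.append(zip_code)
            i += 1
    return selected

# e.g., select 2 PSUs from a tiny hypothetical frame of eligible ZIP codes
frame = sorted({"27511": 12000, "27513": 9000, "43001": 4000}.items())
print(systematic_pps(frame, 2))
```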
The field test will involve contacting sampled addresses, gaining cooperation from eligible households, and conducting interviews with eligible household members. Field and telephone staff recruiting, training, and monitoring is described in Attachment 10. A discussion of contact procedures and copies of all materials used to gain cooperation such as lead letters, study brochures, and consent forms are presented in Attachment 11. RTI’s Institutional Review Board (IRB) has reviewed and approved the field test data collection and data security protocols. Attachment 12 presents the data security protocols to be followed during the field test.
The survey questions have the potential to make some respondents upset or distressed as they recall crime events experienced personally or by family members. While we expect this to be a rare event, all interviewers will be trained to handle respondents who become upset during the interview, or whose life or health is in imminent danger. The protocol in Attachment 13 provides interviewers with sample responses to use in the interview setting and contact information for crisis assistance organizations.
Analysis efforts will focus on the six questions to be addressed by this research.
How do alternative mixed-mode designs compare to the current design in terms of response rate and cost?
We will compare the Wave 1 household and individual interview rates for each of the four subgroups of interest (i.e., treatment/control crossed with incentive/no incentive). With 80 percent power, we expect to declare differences of between 4 and 5 percentage points or more statistically significant at the 0.05 (one-tailed) level of significance. In addition, we will use cost and effort data gleaned from the data collection to compare the costs associated with interviewing households with one person versus those with two or more persons.
Does initial rapport between interviewer and respondent carry over into subsequent self-administered interviews?
When considering less-costly modes of data collection for subsequent waves, it is important to know what mode of initial contact will yield high participation rates in a longitudinal design. The proposed research design would allow us to evaluate which combination of modes will produce high response rates not only in Wave 1, but will help build rapport with respondents to ensure participation in Wave 2, when respondent action is required. We will test this hypothesis by comparing the Wave 2 household and individual interview rates for each of the four subgroups. With 80 percent power, we expect to declare differences of between 6 and 7 percentage points or more statistically significant at the 0.05 (one-tailed) level of significance.
What portion of the household respondents will respond to an initial interview by inbound CATI, and what cost savings might be realized?
We will monitor inbound call data, examining both the proportion of sample members who contact RTI to participate by telephone and the demographic characteristics of the callers. We also will estimate the cost savings by comparing the level of effort associated with inbound CATI to outbound CATI. In addition, using telephone interviewing for the first contact with a sample household raises the following issues: Can telephone numbers be matched for a substantial proportion of the sample addresses, and how correct are these matches? We will evaluate the overall ability to append telephone numbers to the address sample, overall and by subgroups of the sample (i.e., urban versus rural).
How will key survey estimates change (if at all) if different mode mixes and incentives are used?
We expect the number of Wave 1 respondents to range between 892 and 963 in each of the four mode/incentive groups. This will enable 95 percent confidence intervals of at most +/- 3.3 percent for percentage estimates in each of the four subgroups8. In addition, estimates such as victimization rates can be derived from regression models which will have increased precision because demographic variables related to the outcomes can be included as covariates.
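The +/- 3.3 percent bound follows from the conservative normal-approximation confidence interval described in footnote 8, evaluated at the smallest expected cell size:

```python
import math

# Conservative 95% CI half-width at p = 0.5 (see footnote 8),
# using the smallest expected Wave 1 cell size of 892 respondents.
n, p, z = 892, 0.5, 1.96
half_width = z * math.sqrt(p * (1 - p) / n)
print(round(half_width, 3))   # ~0.033, i.e., +/- 3.3 percentage points
```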
How does the use of incentives affect interview cost or response rates within alternative modes of administration?
Direct comparisons can be made between household interview response rates and level of effort, by mode, when no incentive is provided versus when $10 is promised upon completion of the interview. We can also evaluate whether more complete household rosters are obtained when an incentive is offered to all adult household members. The latter is particularly important if gatekeepers, the individuals who provide the interviewer with an enumeration of the household, are less likely to omit household members when an incentive will be provided for each completed interview.
Conducting part of the household enumeration by an alternative mode can also lead to greater cost efficiency by minimizing the number of in-person contact attempts, especially because the majority of individual interviews are conducted in the first interview together with the initial enumeration. A potential drawback, however, is that fewer household members may be enumerated in CATI (inbound or outbound) at Wave 1 if household informants are more reluctant to provide information about household members via these alternative modes. We will evaluate whether screening households via alternative modes (CATI at Wave 1; Web and CATI at Wave 2) presents any limitations that are not typically observed with in-person screenings.
Are incentives effective in boosting response rates and maintaining rapport in subsequent waves?
The use of incentives is necessitated by the implementation of Web and inbound CATI modes of data collection that require respondent action. An evaluation of the effectiveness of incentives will compare the distribution of completed interviews by mode to determine whether an incentive achieves a greater proportion of interview completions by these less-costly modes.
Another direct comparison will evaluate the level of cooperation with the individual-level incident reports that is obtained in the incentive vs. no incentive groups in each condition. For example, a $10 incentive may be more effective in gaining cooperation at the first stage (enumeration and household-level questionnaire) than at the second stage. As a result, there may be demographic differences in the subject pools for the incentive and non-incentive groups. To protect against findings that may be affected by the effect of incentives through the household informant, we will use logistic regressions to control for respondent characteristics. Additional outcomes related to cost will inform the relative efficiency of the incentive protocol, by comparing the extent to which the incentive decreases the number of calls required to obtain interviews in the follow-up attempts, as well as the overall cost per case in each condition.
[1] Singer, Eleanor. 2002. “The Use of Incentives to Reduce Nonresponse in Household Surveys.” In Survey Nonresponse, ed. Robert M. Groves, et al., pp. 163-178. New York: Wiley.
1 Selection of states for the Phase 2 field test was based on a mix of criteria designed to maximize the number of interviews while containing costs. The four states (VA, NC, PA, and OH) were selected because of their (1) proximity to RTI’s central office in North Carolina, which will minimize travel costs for field staff training and production, (2) mix of urban and rural households; and (3) lower concentrations of Hispanic households (the SCV will not involve bilingual interviews).
2 For purposes of this research, the term “control” refers to the comparison group in the SCV experimental design that most closely resembles the national panel study.
3 Using the most current NCVS data instead of having Condition 1 would not provide comparable data as multiple survey factors impact the data collection process (e.g., response rates can be affected by the geographic area of the experiment, the interviewer pool, the recruitment procedures, coding of call outcomes, and other differences between survey organizations and sample design).
4 Because we expect Condition 1 to yield the highest household and individual interview response rates, the detectable differences assume a one-tailed test for comparisons between Conditions 1 and 2 with 80 percent power at the 0.05 level of significance.
5 In 2002, we selected a nationally representative sample of 12,000 city-style addresses and found 10,999 (91.7 percent) to be associated with HHs (Staab and Iannacchione 2003).
6 2008 ACS One-Year Estimates, Tables S1601 and S0101.
7 Detectable differences were calculated using the SAS POWER procedure with Pearson’s chi-square test for two independent proportions.
8 The confidence interval is conservative because it assumes percentage estimates will lie in the mid-range (i.e., between 40% and 60%) where variances are highest. Percentage estimates outside the mid-range will have smaller confidence intervals.