Supporting Statement
For OMB Information Collection Request
Part B
OMB# 0920-0822
October 14, 2014
The National Intimate Partner and Sexual Violence Survey (NISVS)
Supported by:
Department of Health and Human Services
Centers for Disease Control and Prevention
National Center for Injury Prevention and Control
Division of Violence Prevention
Project Officer:
Mikel L. Walters, PhD
Behavioral Scientist
Contact Information:
Centers for Disease Control and Prevention
National Center for Injury Prevention and Control
4770 Buford Highway NE MS F-64
Atlanta, GA 30341-3724
phone: 770-488-1361
fax: 770-488-4349
email: [email protected]
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
B.1. Respondent Universe and Sampling Methods
The target population for the civilian NISVS is English- or Spanish-speaking men and women aged 18 and older in U.S. households. Those under age 18 are excluded because they are legally considered minors and their participation would necessitate significant changes to the study protocol. Additional exclusions include adults who are: 1) residing in penal, mental, or other institutions; 2) living in other group quarters, such as dormitories, convents, or boarding houses (with ten or more unrelated residents), who do not have a cell phone; 3) living in a dwelling unit without a landline telephone used for voice purposes who also do not have a working cell phone; or 4) unable to speak English or Spanish well enough to be interviewed. Those who do not speak English or Spanish are excluded because the instrument and survey materials are currently limited to those two languages.
The targeted sample size is driven by the number of respondents needed to provide precise (+/- 1 to 2%) and unsuppressed (relative standard error < .30) national prevalence estimates each year and to provide stable lifetime state-level prevalence estimates within 2-3 years depending on state population size.
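The suppression rule above can be expressed directly: an estimate is suppressed when its relative standard error (RSE), the standard error divided by the estimate, reaches the cutoff. A minimal sketch in Python; the 0.30 cutoff comes from the text, while the prevalence and standard error values are hypothetical illustration inputs, not NISVS results:

```python
# Minimal sketch of the precision/suppression check described above.
# The prevalence and standard error are hypothetical; the 0.30 RSE
# cutoff comes from the text.

def relative_standard_error(estimate, std_error):
    """RSE = SE / estimate, for a prevalence expressed as a proportion."""
    return std_error / estimate

def is_suppressed(estimate, std_error, rse_cutoff=0.30):
    """An estimate is suppressed when its RSE reaches the cutoff."""
    return relative_standard_error(estimate, std_error) >= rse_cutoff

p, se = 0.05, 0.006  # a 5% prevalence estimate with a 0.6-point SE
print(f"RSE = {relative_standard_error(p, se):.2f}")  # 0.12 -> reportable
print(f"suppressed: {is_suppressed(p, se)}")          # False
```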
B.1.a)
According to the National Health Interview Survey (NHIS), 32.3% of adults in the United States have only a cell phone (no landline phone); this percentage has been increasing by about 2 percentage points per year (Blumberg and Luke, 2012). Adults with only a cell phone are two to three times more likely to be under 35 years old (Tucker, Brick, Meekins, and Morganstein, 2004). Furthermore, 56% of those with a landline phone also have a cell phone, and there is limited evidence that respondents with both types of service differ depending on whether they were reached and interviewed through the landline or the cell phone frame (Kennedy, 2007); they likely differ in which phone service they use most (Blumberg and Luke, 2012). This motivates selecting 1) adults with only cell phones and 2) adults who have both cell and landline phones but are selected through their cell phones. To maximize coverage of the target population, a dual-frame design that randomly samples landline and cell phone numbers is used. The use of the dual-frame design to reduce undercoverage is discussed in more detail in section B.3.d.
While the cell phone and landline frames overlap, the gains in coverage achieved by sampling both frames, and the efficiency from not excluding adults with both types of phone service, outweigh the resulting design effect due to the inclusion of adults with multiple selection probabilities, who can be selected from either frame.
Only 8.3% of adults were estimated to live in households with only landlines as of the end of 2011 (Blumberg and Luke, 2012), a percentage projected to fall below 2% by the end of 2013 based on the trends observed in the NHIS data. Consequently, a single-frame random digit dial (RDD) survey of cell phone numbers will soon have nearly complete coverage of the population. There are great efficiencies to be gained from a single-frame design, as well as new challenges. Such an alternative will be evaluated in terms of potential bias and variance reduction, to allow the survey design to be responsive to changes in the survey environment.
Table 5. Sampling Strata

Cell Phone Strata
Stratum No. | Stratum
1 | Alabama
2 | Alaska
… | …
51 | Wyoming

Landline Strata
Stratum No. | Stratum
52 | Alabama (Excluding AI/AN Zip Codes)
53 | Alaska (Excluding AI/AN Zip Codes)
… | …
102 | Wyoming (Excluding AI/AN Zip Codes)
103 | AI/AN Zip Codes
The goal of NISVS is to provide national and state-level prevalence rates. For most state-level estimates, data are pooled across years. To use existing resources most efficiently, the 34 smallest states are also oversampled by setting a minimum target of 225 interviews per state. There is a direct trade-off between optimizing the sample design for national versus state-level estimates. A comparison of different options showed that a target of at least 225 interviews per state allows for state-level lifetime estimates based on 2-3 years of cumulated data, while increasing the variance of national estimates by an estimated less than 25%.
Sampling Frames. For the landline strata, the recommended sampling frame is maintained by Marketing Systems Group (MSG) using Cell-WINS. A list-assisted RDD sample is selected and stratified by state; exchanges that are known to be limited to cell phones are excluded from the landline strata. For the cell phone strata, also defined by state, an RDD sample of cell phone numbers is selected within exchanges assigned to wireless providers.
B.1.b) Multi-year data collection plan
As a surveillance system, NISVS plans to collect data on an annual basis to track national and state-level estimates of IPV, SV, and stalking. Data collection is conducted continuously throughout every calendar year, producing annual estimates early in the following year. A new sample is selected every quarter, avoiding the problems associated with an aging sample of telephone numbers. The quarterly sample design also allows any needed changes to be implemented in the next quarter as the need arises, rather than waiting until the following year. The main features of the sampling and data collection design for each survey year are the same as in 2010 in order to preserve the trend in estimates, while allowing for an annual sample size as high as 35,000 interviews, depending on anticipated resources.
B.1.c)
Based on the study goal of producing national estimates, each sample is allocated to distribute the intended interviews across the national landline and cell phone frames. To address the study goal of achieving state-level estimates for selected prevalence rates by cumulating data across years, the samples are further stratified by state. Sample size within states is determined primarily by proportional allocation based on the relative size of individual state populations. However, to decrease the number of years required to generate stable state estimates among smaller states, a minimum target of 225 interviews per state per year was set. This over-allocation to smaller states was done within both the landline and cell phone frames, as sketched below.
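As an illustration only, the allocation rule just described (proportional allocation with a 225-interview floor) can be sketched as follows. The state populations and total sample size are hypothetical, and the actual NISVS allocation is carried out separately within each frame:

```python
# Sketch: proportional allocation by state with a minimum per-state target.
# Populations and total_n are hypothetical; the 225 floor comes from the text.

def allocate_by_state(populations, total_n, floor=225):
    pop_total = sum(populations.values())
    # Step 1: proportional allocation by relative state population.
    alloc = {s: round(total_n * p / pop_total) for s, p in populations.items()}
    # Step 2: raise the smallest states to the minimum target.
    return {s: max(n, floor) for s, n in alloc.items()}

states = {"California": 28_000_000, "Georgia": 7_300_000, "Wyoming": 430_000}
print(allocate_by_state(states, total_n=10_000))
# Wyoming's proportional share (~120 interviews) is raised to the 225 floor.
```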
The most recent NHIS results, based on data from June-December 2011 (Blumberg and Luke, 2012), estimate that 32% of adults live in households with only cell phones while only 8% live in households with only landlines (57% have both types of service and 2% have no phone service). We use this information, together with cost-of-interviewing data, to optimize the sample allocation to the cell and landline frames (Brick et al., 2011). Since the cell-phone-only rate continues to increase and the landline-only rate continues to decrease, NISVS uses projections to plan sample allocation and also evaluates the point at which a dual-frame design becomes suboptimal compared to a single-frame RDD design.
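The cost-informed side of this allocation can be illustrated with the classic optimum-allocation rule n_h proportional to W_h * S_h / sqrt(c_h), treating the two frames as strata under a simplifying no-overlap assumption; the actual dual-frame optimization (Brick et al., 2011) also accounts for the overlapping dual-service domain. All inputs below are hypothetical:

```python
import math

# Sketch: allocate interviews between frames given per-interview costs.
# shares, sds, and costs are hypothetical; frame overlap is ignored here.

def frame_allocation(total_n, shares, sds, costs):
    score = {h: shares[h] * sds[h] / math.sqrt(costs[h]) for h in shares}
    total = sum(score.values())
    return {h: round(total_n * s / total) for h, s in score.items()}

alloc = frame_allocation(
    total_n=12_000,
    shares={"cell": 0.60, "landline": 0.40},  # target population shares
    sds={"cell": 1.0, "landline": 1.0},       # assume equal unit variance
    costs={"cell": 90.0, "landline": 60.0},   # cell interviews cost more
)
print(alloc)  # cheaper landline interviews pull allocation toward that frame
```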
B.1.d)
Landline and cell phone samples perform differently for many reasons. Examples include the inability to screen cell phone samples for known nonworking and business numbers and the cell phone owner's responsibility for air time costs. The cost per interview in each sample is closely monitored, and the allocation between landline and cell phone numbers is adjusted during data collection to achieve a more nearly optimal allocation. Releasing the sample in replicates provides some control during the initial weeks of data collection, while more major changes can be made across quarters of data collection, if needed.
Males, particularly in the landline frame, tend to respond at a lower rate than females. While greater substantive interest may lie in the victimization of females, the proportion of males and females is also monitored. The instrument provides the capacity to change the selection probabilities for males and females in households with adults of both sexes during data collection. If the percentage of respondents who are male drops below 40%, oversampling of males will be reconsidered. As with the allocation by telephone service, if substantial changes are needed, changes in allocation are made between quarters of data collection to avoid the creation of extreme weights (e.g., for males and females in such households interviewed late in the data collection period).
To address nonresponse, a nonresponse protocol has been implemented, as described in section B.3.c. Briefly, the nonresponse phase is a protocol implemented at some point during survey recruitment in order to decrease nonresponse and gain information from sample members who have not yet chosen to respond or participate. Importantly, the point at which the nonresponse protocol is implemented during the recruitment process has cost implications. For example, a decision could be made to move a phone number into the nonresponse recruitment protocol after 8 unsuccessful recruitment attempts or after 15. As described in section B.3.c, indicators of cost and survey measures are monitored to determine when the nonresponse protocol is implemented. This "responsive" approach maintains the most effective data collection.
B.1.e)
Response rates are maximized, in part, by utilizing experience from previous surveillance efforts, such as the Behavioral Risk Factor Surveillance System (BRFSS) and the Injury Control and Risk Survey (ICARIS-2), and by using sophisticated methodological techniques (for example, responsive design elements). Most telephone surveys have seen a decrease in response rates in recent years. One comparison point for telephone survey response rates is the BRFSS. In the 2010 BRFSS, response rates ranged from a low of 19.29% in Oregon to a high of 57.35% in Utah, with a median of 35.83%. The 2010 median response rate is 22.47 percentage points lower than the 2002 median response rate of 58.3% (Behavioral Risk Factor Surveillance System Summary Data Quality Report, 2010).
For the NISVS, the response rate is computed using the American Association for Public Opinion Research (AAPOR) Response Rate 4 formula (AAPOR, 2008). The AAPOR calculation is a standard developed by researchers and established as a requirement by a leading journal for survey methodology (Public Opinion Quarterly). This particular formula is the most commonly implemented formula that 1) accounts for ineligibility among cases with unknown eligibility and 2) treats partial interviews (by respondents who have answered all pre-identified essential questions) as interviews. The response rate for the previous NCIPC ICARIS survey was 47.9% (Black et al., 2006). The response rate for NISVS in 2010 was 25.86% for landlines and 28.78% for cell phones. In 2011, the response rate was 24.59% for landlines and 28.28% for cell phones.
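For reference, AAPOR RR4 is (I + P) / ((I + P) + (R + NC + O) + e(UH + UO)), where I is complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-interviews, UH and UO cases of unknown eligibility, and e the estimated eligibility rate among the unknowns. A sketch with hypothetical disposition counts and one common way of estimating e:

```python
# AAPOR Response Rate 4; all disposition counts below are hypothetical.

def aapor_rr4(I, P, R, NC, O, UH, UO, e):
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))

I, P, R, NC, O = 2_500, 150, 3_000, 1_800, 200
UH, UO = 9_000, 1_000
known_ineligible = 20_000
# One common choice: estimate e as the eligibility rate among cases
# whose eligibility could be determined.
e = (I + P + R + NC + O) / (I + P + R + NC + O + known_ineligible)
print(f"e = {e:.3f}, RR4 = {aapor_rr4(I, P, R, NC, O, UH, UO, e):.3f}")
# e = 0.277, RR4 = 0.254 -- in the range of the NISVS rates reported above
```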
The importance of increasing and maintaining response rates is well recognized. Even when evidence shows that various survey estimates do not suffer from nonresponse bias, the response rate remains the single number that is reported and used to gauge the representativeness of survey data. One way to increase overall response rates is through interviewer training, reducing the variation in response rates across interviewers by improving techniques among lower-performing interviewers (Groves and McGonagle, 2001). Promised incentives for completing the survey have been shown to increase response rates in RDD surveys (e.g., Cantor, Wang, and Abi-Habib, 2003), while also helping to reduce nonresponse bias in estimates related to the topic of the survey (e.g., Groves et al., 2006; Groves, Singer, and Corning, 2000). Implementing an effective incentive plan can, over the course of data collection, reduce overall costs and burden to respondents by reducing the need for additional calls to potential respondents. Furthermore, we have sought to improve the impact of incentives on increasing response rates and reducing nonresponse bias by implementing a phased design. The implementation of these methods in the NISVS is described in section B.3.
B.2. Procedures for the Collection of Information
B.2.a)
Interviewers are highly trained female staff. The decision to use only female interviewers is based on both the survey topics and the literature regarding gender and reporting. A study conducted by Pollner (1998) indicates that interviewer gender is significantly related to respondents' reports of psychiatric symptoms. Male and female respondents interviewed by women reported more symptoms of depression, substance abuse, and conduct disorders than respondents interviewed by men. These results suggest that female interviewers may create conditions more conducive to disclosure and be perceived as more sympathetic than male interviewers (Pollner, 1998). Furthermore, the sex of the respondent selected from a specific household is unknown until the respondent has been randomly selected. Thus, it is not feasible to match interviewer and respondent by sex.
A study of the relationship between interviewer characteristics and disclosure of physical and sexual abuse showed that matching clients and interviewers on sex, race, and age did not increase disclosures of either physical or sexual abuse. Rather, respondents were more likely to disclose sexual abuse to female interviewers than to male interviewers (Dailey and Claus, 2001). An earlier study showed that, in most cases, the socio-demographic characteristics of the interviewer did not affect the quality of participants' responses (Fowler and Mangione, 1990).
An additional consideration specifically related to interviews about IPV, SV, and stalking is that the majority of victims are female and the majority of perpetrators are male. Thus, females may be less comfortable reporting IPV, SV, and stalking to a male interviewer. Based on the lack of evidence suggesting a need to match interviewers and respondents by gender, and because evidence suggests that female interviewers may create conditions more conducive to disclosure, only female interviewers conduct interviews for this study.
Similarly, female interviewers may be more comfortable asking these questions than would a male interviewer. It is essential that the interviewers be comfortable with the survey because their level of comfort, in turn, impacts the quality with which they administer the interview. During the hiring process, potential English and Spanish speaking interviewers are informed about the background and purpose of the study and carefully screened to ensure that they are comfortable conducting interviews about the topics included.
Interviewers receive a minimum of 12 hours of training. Only those who successfully complete all training sessions conduct interviews. Training topics include the purpose of the study, question-by-question review of the instrument, ways to engage respondents, role-playing, and techniques to foster cooperation and completed surveys. Interviewers are briefed on the potential challenges of administering a survey on IPV, SV, and stalking.
Interviewers are trained to follow specific interviewing procedures that have been proven in previous studies, including the art of administering questions about IPV, SV, and stalking. For example, interviewers learn about respondent reactions to similar surveys conducted by CDC (as described in Section A.11). They learn about the need for explicit language and are coached on being matter-of-fact in their delivery. Interviewers also learn about the resource information provided to participants who are coping with traumatic and violent events.
A detailed written training manual specific to this study has been developed. The content of the training focuses on the study background, project specific protocols, confidentiality procedures, questionnaire content, refusal avoidance and well-defined conversion protocols. The information is presented using a variety of methods, including lecture, demonstration, round-robin practice, paired-practice, and group and paired mock interviews. Due to the nature of the study, particular attention is paid to the distressed respondent protocol for this study.
Respondent safety is a primary concern for any data collection asking about violence, particularly IPV, SV, and stalking. The distress protocol addresses how telephone interviewers should respond to and record issues of emotional, physical, or unknown sources of distress throughout the interview process, and it is covered extensively during interviewer training. Any information entered into CATI regarding distress cases is reviewed by project staff, including the staff clinical psychologist. Project staff forward information regarding distressed respondents to the Abt Associates IRB and include information regarding these cases in the weekly report to CDC. Further, to ensure the safety of respondents, we provide them with a code word that they can use to end the interview at any time they feel concerned for their safety.
A clinical psychologist with prior experience working with victims of interpersonal violence participates in the training and in ongoing monitoring and supervision of interviewers. Only interviewers whose work has been reviewed and certified by the project team are permitted to conduct actual interviews. The certification process involves completing two paired practice interviews, orally answering the 6-8 most frequently asked questions, and completing written quizzes covering the distress protocol, refusal avoidance, and an overview of the study.
While participation in surveys is typically not distressful, it is important for researchers to anticipate potential respondent reactions to the questions being asked and to minimize any adverse impact to the fullest extent possible. Although distress is unlikely, both telephone interviewers and supervisors are trained in the distress protocol appropriate for this study.
The distress protocol includes step-by-step instructions on handling different types of distress. Interviewers are properly trained with well-established contingency plans, including early termination of the interview if the respondent becomes distressed or concerned for their safety. The protocol includes instructions on steps to follow for different types of distress: physical, emotional, and unknown.
If a respondent does display distress, either verbally or non-verbally (e.g., crying), the interviewer immediately offers to finish the interview at another time and offers the respondent the telephone numbers for the National Domestic Violence Hotline and the Rape, Abuse, and Incest National Network so that the respondent may obtain services to help alleviate their emotional distress. Similarly, in the unlikely event that a respondent expresses thoughts or intentions of suicide, the interviewer stops the interview and encourages the respondent to call the National Suicide Hotline.
In surveys conducted by NCIPC or by Abt Associates, there have been no instances in which interviewers actually had to transfer respondents to 911. In the extremely unlikely event that a respondent is in immediate physical danger, the interviewer will advise the respondent to hang up and dial 911 for immediate police assistance. If the respondent specifically asks the interviewer to call 911, the call will be transferred directly and the interviewer will then hang up. The supervisor will then record the details of the event and relay them to a project staff member as soon as possible. The project staff member will evaluate any events as they are reported and relay them to the project director and CDC/NCIPC staff as soon as possible.
Resource information will also be provided for participants to access for assistance in coping with traumatic and violent events. These measures have been recommended in the literature (Gondolf & Heckert, 2003; Johnson, 1996; Tjaden and Thoennes, 2000; Sullivan & Cain, 2004; Weisz et al., 2000) and have been consistently used in NCIPC’s previous studies, including ICARIS-2 and the SIPV Pilot Survey.
Throughout data collection, interviewers are monitored to check the quality of their work and to identify areas needing more training or clarification. Silent audio and video monitoring of interviewers takes place throughout data collection, and approximately 10% of all interviewing time is observed. Interviewers are scored on their performance during these sessions, which are unknown to the interviewer at the time of administration, and are given written and verbal feedback on their performance. This process allows the identification of any individual interviewer performance issues, as well as larger issues that might affect the data collection. The information obtained is then used as a teaching tool for other interviewers, as appropriate.
B.2.b)
The survey has been translated into Spanish. Several steps have been taken to ensure the accuracy and usability of the Spanish version of the survey instrument. One translator translates the documents into Spanish, and a second translator translates the instruments back into English to verify that all study materials are properly translated and that the meaning of the questions has been preserved. Both the letter and the survey are written in commonly understood language; to ensure that people of different Hispanic backgrounds can understand the Spanish versions, a third translator has reviewed the study instruments.
If it is determined that the respondent speaks Spanish and not English, a bilingual interviewer will continue with the introductory script, respondent selection, oral consent, and survey administration.
B.2.c)
All estimates are weighted to account for the stratified dual-frame sample design, the multiple phases, and additional post-survey adjustments for coverage and nonresponse. The latest National Health Interview Survey data and reported estimates are used to adjust selection weights when combining the landline and cell phone samples, informing the relative size of each sampling frame and the demographic composition within each frame. Census population totals are then used to adjust the combined sample to the U.S. adult population.
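One standard technique for the final adjustment step, calibrating combined-frame weights so that weighted marginals match external population controls, is raking (iterative proportional fitting). A minimal sketch with hypothetical respondents and control totals; the actual NISVS adjustment cells and control sources are as described above:

```python
# Raking (iterative proportional fitting) sketch; all data are hypothetical.

def rake(weights, categories, controls, iterations=25):
    """weights: base weights; categories: per-respondent {dimension: level};
    controls: {dimension: {level: population total}}."""
    w = list(weights)
    for _ in range(iterations):
        for dim, targets in controls.items():
            # Current weighted total within each level of this dimension.
            sums = {level: 0.0 for level in targets}
            for wi, cat in zip(w, categories):
                sums[cat[dim]] += wi
            # Scale each weight so the level totals match the controls.
            w = [wi * targets[cat[dim]] / sums[cat[dim]]
                 for wi, cat in zip(w, categories)]
    return w

respondents = [{"sex": "F", "age": "18-44"}, {"sex": "F", "age": "45+"},
               {"sex": "M", "age": "18-44"}, {"sex": "M", "age": "45+"}]
base = [1.0, 1.2, 0.8, 1.5]
controls = {"sex": {"F": 51.0, "M": 49.0},        # e.g., millions of adults
            "age": {"18-44": 47.0, "45+": 53.0}}
print([round(wi, 2) for wi in rake(base, respondents, controls)])
```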
The variances of survey estimates are computed using statistical software designed for survey data analysis (e.g., SAS and SUDAAN). These procedures, such as CROSSTAB in SUDAAN, take into account the complex survey design and unequal weighting; the option for Taylor series linearization is used for estimating the variances of proportions.
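As an illustration of the linearization approach, the variance of a weighted proportion p = sum(w*y)/sum(w) can be approximated from the linearized values z_i = w_i(y_i - p)/sum(w), accumulated within design strata. The sketch below uses a with-replacement, respondent-as-PSU simplification (the production estimates use SUDAAN against the full design); the data are hypothetical:

```python
# Taylor-series linearization variance for a weighted proportion (sketch).

def linearized_variance(y, w, strata):
    """y: 0/1 outcomes; w: analysis weights; strata: stratum labels."""
    W = sum(w)
    p = sum(wi * yi for wi, yi in zip(w, y)) / W        # weighted proportion
    z = [wi * (yi - p) / W for wi, yi in zip(w, y)]     # linearized values
    var = 0.0
    for h in set(strata):
        zh = [zi for zi, s in zip(z, strata) if s == h]
        n_h, zbar = len(zh), sum(zh) / len(zh)
        # With-replacement approximation within stratum h.
        var += n_h / (n_h - 1) * sum((zi - zbar) ** 2 for zi in zh)
    return p, var

y = [1, 0, 0, 1, 0, 0, 1, 0]                     # hypothetical 0/1 outcomes
w = [1.0, 1.5, 0.8, 1.2, 2.0, 0.9, 1.1, 1.3]     # hypothetical weights
strata = ["LL"] * 4 + ["cell"] * 4
p, var = linearized_variance(y, w, strata)
print(f"p = {p:.3f}, SE = {var ** 0.5:.3f}")
```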
B.3. Methods to Maximize Response Rates and Deal with Nonresponse
B.3.a)
NISVS uses an incentive plan structure that was previously approved for several years (2010, 2011, and 2012) of information collection requests (OMB# 0920-0822). The yearly data collection is divided into 4 quarters. In each quarter, Phase 1 data collection is carried out for approximately 12 weeks. During Phase 1, all respondents are offered a $10 incentive to complete the survey.
Upon completion of the first phase, a random subsample of the nonrespondents from the main data collection period is drawn (Phase 2). The subsampling rate for Phase 2 is approximately 0.40. Sampled nonrespondents in Phase 2 are re-contacted and offered a higher incentive of $40 to encourage their participation. The nonresponse phase is described in greater detail in section B.3.c.
Incentive amounts can affect the amount of data collection effort, with higher incentive amounts associated with less effort required to complete the interview.
In a previous NISVS data collection cycle, respondents in Phase 2 were randomly assigned to receive incentive amounts of either $25 or $40 in order to determine the impact the lower amount could have on the response rate. It was determined that decreasing the amount from $40 to $25, during Phase 2, decreased the response rate by 17% for landlines and 7% for cell phones. It is clear that a decrease in the amount offered not only negatively impacts the response rate but also potentially increases the non-response bias, particularly in the phase of data collection that is specifically designed to decrease bias.
NISVS also contains a series of sensitive questions regarding respondents' lifetime experiences of sexual violence, intimate partner violence, and stalking victimization. Given the sensitive nature of these topics and the difficulty of obtaining acceptable response rates in RDD telephone surveys, a substantially higher incentive is required in an attempt to reduce nonresponse bias and to increase the response rate.
Upon completion of the survey, respondents may choose to receive the incentive or to have a similar contribution sent to the United Way. Offering an incentive/donation helps gain cooperation from a larger proportion of the sample and also compensates respondents on cell phones for the air time used. Promised incentives have been found to be an effective means of increasing response rates in RDD surveys (e.g., Cantor, Wang, and Abi-Habib, 2003) and reducing nonresponse bias by gaining cooperation from those less interested in the topic (e.g., Groves et al., 2006; Groves, Singer, and Corning, 2000).
The sole purpose of Phase 2 of the two-phase sample design is to measure and reduce nonresponse bias. This design should also achieve higher overall response rates by focusing a more effective method on a subsample of nonrespondents. Therefore, the objective of this design for the implementation of higher incentives is to increase response rates and to measure and reduce nonresponse bias in survey estimates, with a likely trade-off in increased variance due to weighting. This approach is described in more detail below in section B.3.c.
The incentive plan structure proposed in this request is exactly the same as the one used in the previously approved information collection requests (OMB# 0920-0822) for 2010, 2011, and 2012. Maintaining the two-phase survey design with the current incentive plan structure will allow for consistency across years of data collection. Such consistency will permit tracking of changes over time. Methodological changes that impact the sample could call into question our ability to make comparisons with earlier national and state-level prevalence estimates.
B.3.b)
Response rates vary greatly across interviewers (e.g., O’Muircheartaigh and Campanelli 1999). Improving interviewer training has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle 2001). For this reason, extensive interviewer training is a key aspect of the success of this data collection effort. The following interviewing procedures, all of which have been proven in the NISVS Pilot and other previous surveys, are used to maximize response rates:
Interviewers are briefed on the potential challenges of administering a survey on IPV, SV, and stalking. Well-defined conversion procedures have been established.
If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion staff are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. The main purpose of this contact is to ensure that the potential respondent understands the importance of the survey and to determine if anything can be done to make the survey process easier (e.g., schedule a convenient call-back time). At no time do staff pressure or coerce a potential respondent to change their mind about their participation in the survey, and this is carefully monitored throughout survey administration to ensure that no undue pressure is placed on potential respondents.
Should a respondent interrupt an interview for reasons such as needing to tend to a household matter, the respondent is given two options: (1) the interviewer will reschedule the interview for completion at a later time or (2) they will be given a toll-free number designated specifically for this project, for them to call back and complete their interview at their convenience.
Fielding of the survey takes place on an ongoing basis.
Conversion staff are able to provide a reluctant respondent with the name and telephone number of the contractor's project manager, who can provide respondents with additional information regarding the importance of their participation.
The contractor has established a toll-free number, dedicated to the project, so potential respondents may call to confirm the study’s legitimacy.
Special attention has been given to scheduling call backs and refusal procedures. The contractor works closely with CDC/NCIPC to set up these rules and procedures. Examples include:
A detailed definition of when a refusal is considered final
Monitoring of hang-ups that occur during the interview, with finalization of the case once the maximum allowed number of hang-ups is reached
Calling occurs only during weekdays from 9am to 9pm, Saturdays from 9am to 6pm, and Sundays from noon to 9pm (respondent’s time).
Call attempts are spread across all days of the week and times of the day (up to 9pm) within these windows.
During the early period of fielding the survey, supervisors, monitors, and project staff observe interviewers to evaluate their effectiveness in dealing with respondent objections and overcoming barriers to participation. They select a team of refusal avoidance specialists from among the interviewers who demonstrate special talents for obtaining cooperation and avoiding initial refusals. These interviewers are given additional training in specific techniques tailored to the interview, with an emphasis on gaining cooperation, overcoming objections, addressing the concerns of gatekeepers, and encouraging participation. If a respondent does refuse to be interviewed or terminates an interview in progress, interviewers attempt to determine the reason(s) for refusing by asking: “Could you please tell me why you do not wish to participate in the study?” The interviewer then codes the response and any other relevant information. Particular categories of interest include “Don’t have the time,” “Inconvenient now,” “Not interested,” “Don’t participate in any surveys,” and “Opposed to government intrusiveness into my privacy.”
B.3.c)
A nonresponse phase is introduced toward the end of each year’s data collection period. The primary objective of this design is to reduce nonresponse bias while minimizing the impact on cost. There are several components of the implementation of the nonresponse phase:
Indicators of cost and survey measures are monitored separately throughout data collection for the landline and cell phone samples.
Phase capacity is reached when the cost indicators start to change (e.g., an increasing number of interviewing hours per completed interview) and survey measures stabilize (e.g., sexual victimization rates do not change with additional interviews); at that point the nonresponse phase is initiated. This typically occurs about two-thirds of the way into each data collection period, but is informed by the above indicators.
A stratified sample of nonrespondents to the initial phase is selected. Stratification variables include sampling frame (landline/cell phone) and state.
In Phase 2 of the two-phase design, the incentive is increased to $40 for the subsample of nonrespondents. An answering machine or voice mail message about the new contact attempt and higher incentive is left for each number in the Phase 2 subsample. (A minimal weighting sketch for this design follows this list.)
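A minimal sketch of the weighting consequence of the subsampling step, assuming the 0.40 rate from the text and hypothetical base weights: cases retained for Phase 2 carry their base weight inflated by the inverse of the subsampling rate, so the subsample also represents the nonrespondents who were not re-contacted.

```python
import random

SUBSAMPLING_RATE = 0.40  # from the text

def phase2_weights(nonrespondent_base_weights, rate=SUBSAMPLING_RATE, seed=1):
    rng = random.Random(seed)
    # Select roughly `rate` of the nonrespondents into Phase 2.
    subsample = [w for w in nonrespondent_base_weights if rng.random() < rate]
    # Inverse-probability adjustment for the second phase of selection.
    return [w / rate for w in subsample]

base = [1.0, 1.3, 0.9, 1.1, 1.6, 1.0, 0.8, 1.2, 1.4, 1.0]  # hypothetical
print(phase2_weights(base))  # each retained case counts 1/0.40 = 2.5x more
```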
This approach is informed by a number of theoretical developments and past empirical research. Ideally, nonresponse bias is eliminated when the response rate reaches 100%. The double-sample approach allows the allocation of greater resources to a subsample in an attempt to substantially increase response rates, as originally proposed by Deming (1953). While a 100% response rate is never achieved in an RDD survey, how the response rate is increased matters. Groves and colleagues (Groves et al., 2006; Groves, Singer, and Corning, 2000) have developed a leverage-salience theory of survey participation, postulating that individuals vary in the reasons for which their cooperation can be gained. In particular, their experiments show that while individuals with greater interest or involvement in the survey topic are more likely to respond, which can bias survey estimates, incentives can offset such selection bias because they are disproportionately more effective in gaining cooperation from those less interested in the topic.
B.3.d)
As briefly described in the sampling plan, roughly one-third of adults in the U.S. now have a cell phone and no landline in the household (Blumberg and Luke, 2012). This substantial rate, coupled with its continuous increase, necessitates that a surveillance system such as NISVS incorporate the cell phone-only population, which would be missing from a landline telephone frame. To address this growing undercoverage problem, a dual-frame approach has been implemented with RDD samples of landline and cell phone numbers. Gaining cooperation on cell phones can be at least as challenging as on landlines; the intensive methods to increase response rates and reduce nonresponse bias described in section B.3 have been implemented for both the landline and cell phone samples.
Despite the dual-frame approach, additional bias may result from the differential likelihood of reaching respondents with both types of telephone service, depending on which service they are contacted on. If individuals with both types of service were selected only through the landline frame, and adults from the cell phone frame were screened for having only cell phones, a bias could result because adults with both types of service who mostly use their cell phones would be unlikely to be reached on their landlines. To alleviate this potential problem and to increase the efficiency of data collection, adults with both types of service are interviewed from each frame. Those with both cell phones and landlines who predominantly use their cell phones are therefore more likely to be interviewed than if such procedures were not followed. The resulting increased complexity in identifying selection probabilities is addressed through weighting, using the individual- and household-level telephone service questions asked during the interview (Attachment G).
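One common way to express the resulting overlap adjustment is to composite the weights of dual-service adults so that the two frames together represent the overlap domain exactly once. A rough sketch with a fixed compositing factor; NISVS derives the actual adjustment from the telephone-service questions noted above, and all inputs here are hypothetical:

```python
LAMBDA = 0.5  # hypothetical share of the overlap domain given to the landline frame

def composite_weight(base_weight, frame, has_landline, has_cell, lam=LAMBDA):
    if not (has_landline and has_cell):
        return base_weight            # single-service: selectable in one frame
    # Dual-service adults: down-weight in each frame so the two frames
    # together represent the overlap domain exactly once.
    return base_weight * (lam if frame == "landline" else 1 - lam)

print(composite_weight(1200.0, "landline", True, True))  # 600.0 (dual user)
print(composite_weight(1200.0, "cell", False, True))     # 1200.0 (cell-only)
```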
B.4. Tests of Procedures or Methods to be Undertaken
To ensure that all skip patterns and data collection procedures are operating correctly, the first several months of data collection are closely monitored, and any necessary adjustments to the CATI instrument or survey protocols will be made during the initial weeks of data collection.
B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
B.5.a) Individuals who have participated in designing the data collection:
CDC staff
Mikel Walters, Ph.D. 770-488-1361 [email protected]
Kathleen Basile, Ph.D. 770-488-4224 [email protected]
Jieru Chen, Ph.D. 770-488-1288 [email protected]
Melissa Merrick, Ph.D. 770-488-7464 [email protected]
Sharon Smith, Ph.D. 770-488-1368 [email protected]
Jeff Hall, Ph.D. 770-488-4648 [email protected]
Abt Associates Staff
Danielle R. Hunt, Ph.D. 404-946-6305 [email protected]
Diane Rucinski, Ph.D. 301-628-5508 [email protected]
Andrew Evans, MBA 239-896-1214 [email protected]
B.5.b) The following individuals from Abt Associates are participating in the collection of data:
Dianne Rucinski, Ph.D. 301-628-5508 [email protected]
Andrew Evans, MBA 239-896-1214 [email protected]
B.5.c) The following individuals are participating in data analysis:
CDC Staff
Jieru Chen, Ph.D. 770-488-1288 [email protected]
Anurag Jain 770-488-3782 [email protected]
REFERENCES
American Association for Public Opinion Research (2008). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 5th edition. Lenexa, Kansas: AAPOR.
Armstrong, J.S. (1975). Monetary Incentives in Mail Surveys. Public Opinion Quarterly, 39, 111-116.
Bachar K, Koss MP. (2001). From prevalence to prevention: Closing the gap between what we know about rape and what we do. In: Renzetti C, Edleson J, Bergen RK, editors. Sourcebook on Violence Against Women. Thousand Oaks (CA): Sage Publications.
Basile KC, Black MC, Simon TR, Arias I, Brener ND & Saltzman LE. (2006). The Association between self reported lifetime history of forced sexual intercourse and recent health risk behaviors: findings from the 2003 National Youth Risk Behavior Survey. Journal of Adolescent Health, 39, 752.
Basile KC, Chen J, Black MC, & Saltzman LE. (2007). Prevalence and characteristics of sexual violence victimization among U.S. Adults 2001-2003. Violence and Victims, 22, 437-448.
Basile KC & Saltzman LE. (2002). Sexual violence surveillance: uniform definitions and recommended data elements. Version 1.0. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control.
Basile KC, Swahn MH, Chen J & Saltzman LE. (2006). Stalking in the United States: Recent National Prevalence Estimates. American Journal of Preventive Medicine, 31, 172-175.
Behavioral Risk Factor Surveillance System Summary Data Quality Report: http://www.cdc.gov/brfss/technical_infodata/pdf/2002SummaryDataQualityReport.pdf
Black MC & Black RS. (2007). A public health perspective on the ethics of asking and not asking about abuse. American Psychologist, 62, 328.
Black MC & Breiding, MJ. (2008) Adverse health conditions and health risk behaviors associated with intimate partner violence – United States, 2005. MMWR, 57, 113-117.
Blumberg S J & Luke JV. (2008). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July-December 2007. Retrieved May 13, 2008, from http://www.cdc.gov/nchs/nhis.htm.
Blumberg, S. J., & Luke, J. V. (2012). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July-December 2011 Retrieved June 28, 2012, from http://www.cdc.gov/nchs/nhis.htm
Bonomi AE, Thompson RS & Anderson Ml. (2006). Intimate partner violence and women’s physical, mental, and social functioning. Am J Prev Med, 30, 458-466
Breiding MJ, Black MC & Ryan GW. (2008). Prevalence and risk factors of intimate partner violence in Eighteen U.S. States/Territories, 2005. American Journal of Preventive Medicine, 34, 112-118.
Brick, J. M., Cervantes, I. F., Lee, S., & Norman, G. (2011). Nonsampling errors in dual frame telephone surveys. Survey Methodology, 37(1), 1-12.
Brush LD. (1990). Violent acts and injurious outcomes in married couples: methodological issues in the National Survey of Families and Households. Gender and Society, 4, 56-67.
Caetano R & Cunradi C. (2003). Intimate partner violence and depression among whites, blacks, and Hispanics. Annals of Epidemiology, 13, 661–5.
Campbell J, Sullivan CM & Davidson WD. (1995). Women who use domestic violence shelters: changes in depression over time. Psychology of Women Quarterly 19, 237-55.
Campbell JC. (2002). Health consequences of intimate partner violence. Lancet, 359, 1331–6.
Cantor D, O’Hare, BC & O’Connor KS. (2007). The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys. Pp. 471-498 in Advances in Telephone Survey Methodology, edited by J.M. Lepkowski, C. Tucker, J.M. Brick, E. de Leeuw, L. Japec, P.J. Lavrakas, M.W. Link, and R.L. Sangester. New York: Wiley.
Cantor D, Wang K & Abi-Habib N. (2003). Comparing Promised and Pre-Paid Incentives for an Extended Interview on a Random Digit Dial Survey. Proceedings of the Survey Research Methods Section of the ASA.
Centers for Disease Control and Prevention (CDC). (2009). Building data systems for monitoring and responding to violence against women: recommendations from a workshop. MMWR, 49(RR-11).
Church AH. (1993). Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinion Quarterly, 57, 62-79.
Coker AL, Smith PH, Bethea L, King MR & McKeown RE. (2000). Physical health consequences of physical and psychological intimate partner violence. Archives of Family Medicine, 9, 451-7.
Corso PS, Mercy JA, Simon TR, Finkelstein EA & Miller TR. (2007). Medical Costs and Productivity Losses Due to Interpersonal and Self-Directed Violence in the United States. American Journal of Prevention Medicine, 32, 474-482.
Crowell NA, Burgess AW, eds. Understanding Violence Against Women. Washington, D.C.; National Academy Press; 1996.
Dailey R & Claus RE. (2001). The relationship between interviewer characteristics and physical and sexual abuse disclosures among substance users: A multilevel analysis. Journal of Drug Issues, 31, 867-88.
Defense Manpower Data Center. (2008). “August 2007 Status of Services Survey of Active Duty Members: Tabulations and Responses.” DMDC Report No. 2007–049.
Deming W E. (1953). On a Probability Mechanism to Attain an Economic Balance between the Resultant Error of Nonresponse and the Bias of Nonresponse. Journal of the American Statistical Association, 48, 743-772.
Dillman D. (2000) Mail and Internet Surveys. New York, NY: John Wiley & Sons, Inc.
Evans-Campbell T, Lindhorst T, Huang B & Walters KL. (2006). Interpersonal Violence in the Lives of Urban American Indian and Alaska Native Women: Implications for Health, Mental Health, and Help-Seeking. American Journal of Public Health, 96, 1416-1422.
Fahimi M, Kulp D, & Brick JM. (2008). Bias in List-Assisted 100-Series RDD Sampling. Survey Practice. September 2008.
Fisher BJ. (2004). Measuring Rape Against Women: The Significance of Survey Questions. U.S. Department of Justice.
Fowler Jr FJ & Mangione TW. (1990). Standardized Survey Interviewing. Newbury Park: Sage Publications.
Gelles RJ. (1997). Intimate Violence in Families. 3rd ed. Thousand Oaks (CA): Sage Publications.
Golding JM. (1996). Sexual assault history and limitations in physical functioning in two general population samples. Research in Nursing and Health, 9, 33-44.
Gondolf EW & Heckert DA. (2003). Determinants of women's perceptions of risk in battering relationships. Violence & Victims, 18, 371-386.
Grossman, S. F., & Lundy, M. (2003). Use of domestic violence services across race and ethnicity by women aged 55 and older. Violence Against Women, 9(12), 2003.
Groves RM. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646-675.
Groves R M, Couper MP, Presser S, Singer E, Tourangeau R, Acosta GP & Nelson L. (2006). Experiments in Producing Nonresponse Bias. Public Opinion Quarterly 70, 720-736.
Groves RM & Heeringa S. (2006). Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. Journal of the Royal Statistical Society Series A: Statistics in Society 169, 439-457.
Groves R M & McGonagle KA. (2001). A Theory-Guided Interviewer Training Protocol Regarding Survey Participation. Journal of Official Statistics 17, 249-265.
Groves R M, Singer E & Corning A.(2000). Leverage-Saliency Theory of Survey Participation - Description and an Illustration. Public Opinion Quarterly, 64, 299-308.
Heyman RE, Schaffer R, Gimbel C & Kemer-Hoeg S. (1996). A Comparison of the Prevalence of Army and Civilian Spouse Violence. Prepared by Caliber Associates and Behavioral Science Associates for U.S. Army Community and Family Support Center, September, 1996.
Health Information National Trends Study. (http://cancercontrol.cancer.gov/hints/docs/HINTS_refusal_incentive_abstract.pdf ).
Johnson H. (1996). Dangerous Domains: Violence Against Women in Canada. Scarborough, ON: Nelson Canada.
Kaslow N, Thompson MP, Meadows L, Jacobs D, Chance S & Gibb B. (1998). Factors that mediate or moderate the link between partner abuse and suicidal behavior in African American women. Journal of Consulting and Clinical Psychology, 66, 533-40.
Kennedy C. (2007). Constructing Weights for Landline and Cell Phone RDD Surveys. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, May 17-20, Anaheim, CA.
Kessler RC, McGonagle KA, Zhao S, Nelson CB, Hughes M & Eshleman S. (1994). Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Archives of General Psychiatry, 51, 8-19.
Kilpatrick DG, Edmunds CN & Seymour AK. (1992). Rape in America: A Report to the Nation. Arlington, VA: National Victim Center & Medical University of South Carolina.
Kish L. Survey Sampling. John Wiley and Sons, Inc. New York; 1965.
Koss MP, Bailey JA, Yuan NP, Herrera VM & Lichter EL. (2003). Depression and PTSD in survivors of male violence: research and training initiatives to facilitate recovery. Psychology of Women Quarterly, 27, 130–42.
Krug et al., eds. (2002). World Report on Violence and Health. Geneva, World Health Organization; 2002.
Lundy M & Grossman SF. (2004). Elder abuse: spouse/intimate partner abuse and family abuse among elders. Journal of Elder Abuse & Neglect, 16, 85-102.
Malcoe LH, Duran BM & Montgomery JM. (2004). Socioeconomic Disparities in Intimate Partner Violence Against Native American Women: A Cross-Sectional Study. BMC Medicine, 2, 20.
Marshall A, Panuzioa J & Taft CT. (2005). Intimate Partner Violence Among Military Veterans and Active Duty Servicemen. Clinical Psychology Review, 25, 862-876.
Martin SL, Gibbs DA, Johnson RE, Rentz ED, Clinton-Sherrod AM & Hardison J. (In Press). Spouse Abuse and Child Abuse by Army Soldiers. Journal of Family Violence.
Max W, Rice DP, Finkelstein E, Bardwell RA, Leadbetter S. The economic toll of intimate partner violence against women in the United States. Violence Vict. 2004;19(3):259-72.
McCarroll JE, Newby JH, Thayer LE, Norwood AE, Fullerton CS & Ursano RJ. (1999). Reports of Spouse Abuse in the U.S. Army Central Registry (1989-1997). Military Medicine, 164, 77–84.
McCarty C. (2003) Differences in Response Rates Using Most Recent Versus Final Dispositions in Telephone Surveys. Public Opinion Quarterly, 67, 396-406.
Mechanic MB, Uhlmansiek MH, Weaver TL & Resick PA. (2000). The impact of severe stalking experienced by acutely battered women: an examination of violence, psychological symptoms and strategic responding. Violence and Victims, 15, 443–58.
Merrill LL, Newell CE, Milner JS, Koss MP, Hervig LK, Gold SR, Rosswork SG & Thornton SR. (1998). Prevalence of premilitary adult sexual victimization and aggression in a Navy recruit sample. Military Medicine, 163, 209-212.
Mouton CP, Rovi S, Furniss K & Lasser NL. (1999). The associations between health and domestic violence in older women: results of a pilot study. Journal of Women’s Health & Gender-Based Medicine, 8, 1173-1179.
National Center for Injury Prevention and Control. (2008). CDC Injury Research Agenda, 2009–2018. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention. Available at: http://www.cdc.gov/ncipc.
National Center for Injury Prevention and Control (NCIPC). (2003). Costs of Intimate Partner Violence Against Women in the United States. Atlanta (GA): Centers for Disease Control and Prevention.
National Household Education Survey. (http://www.amstat.org/sections/srms/Proceedings/papers/1997_181.pdf).
National Research Council. (2003). Elder Mistreatment: Abuse, Neglect, and Exploitation in an Aging America. Panel to Review Risk and Prevalence of Elder Abuse and Neglect. Richard J. Bonnie and Robert B. Wallace, Editors. Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Oetzel J & Duran B. (2004). Intimate Partner Violence in American Indian and/or Alaska Native Communities: A Social Ecological Framework of Determinants and Interventions. American Indian and Alaska Native Mental Health Research, 11, 49-68.
O'Muircheartaigh C & Campanelli P. (1999). A Multilevel Exploration of the Role of Interviewers in Survey Non-Response. Journal of the Royal Statistical Society, 162, 437-446.
Peytchev A, Baxter R & Carley-Baxter LR. (In press). Not All Survey Effort is Equal: Reduction of Nonresponse Bias and Nonresponse Error. Public Opinion Quarterly.
Pollner M. (1998). The effects of interviewer gender in mental health interviews. Journal of Nervous & Mental Disease, 186, 369-73.
Puzone CA, Saltzman LE, Kresnow MJ, Thompson MP & Mercy JA. (2000). National trends in intimate partner homicide. Violence Against Women, 6, 409–26.
Rennison C & Rand M. (2003). Non-lethal intimate partner violence: women age 55 or older. Violence Against Women, 12, 1417-1428.
Robin RW, Chester B, Rasmussen JK, Jaranson JM & Goldman JK. (1997). Prevalence and Characteristics of Trauma and Post-Traumatic Stress Disorder in a Southwestern American Indian Community. American Journal of Psychiatry, 154, 1582-1588.
Sadler AG, Booth BM & Doebbeling BN. (2005). Gang and Multiple Rapes During Military Service: Health Consequences and Health Care. Journal of the American Medical Women’s Association, 60, 33-41
Sahr, R. Consumer Price Index (CPI) Conversion Factors 1800 to Estimated 2015 to Convert Dollars of 2005. (Revised January, 18, 2006). Available: http://oregonstate.edu/Dept/pol_sci/fac/sahr/cv2005.xls (Accessibility Verified January 23, 2006).
Singer E. (2002). The Use of Incentives to Reduce Nonresponse in Household Surveys. Pp. 163-178 in Survey Nonresponse, edited by R.M. Groves, D.A. Dillman, J.L. Eltinge, and R. J.A. Little. New York: Wiley.
Singer E & Bossarte RM. (2006). Incentives for survey participation: when are they coercive? Am J Prev Med 31, 411-418.
Sullivan CM & Cain D. (2004). Ethical and safety considerations when obtaining information from or about battered women for research purposes. Journal of Interpersonal Violence, 19, 603-18.
Teaster, P.A. (2002). A response to the abuse of vulnerable adults: the 2000 survey of state adult protective services. Washington, D.C.: National Center on Elder Abuse.
Thompson M, Arias I, Basile KC, & Desai S. (2002). The Association Between Childhood Physical and Sexual Victimization and Health Problems in Adulthood in a Nationally Representative Sample of Women. Journal of Interpersonal Violence, 17, 1115-1129.
Thornberry O & Massey J. (1988). Trends in United States Telephone Coverage Across Time and Subgroups. In R.M. Groves, P.P. Biemer, L.E. Lyberg, J.T. Massey, W.L. Nicholls, II, & J. Waksberg (Eds.), Telephone Survey Methodology. New York: Wiley.
Tjaden P & Thoennes N. (1998). Prevalence, Incidence, and Consequences of Violence against Women: Findings from the National Violence Against Women Survey. U.S. Department of Justice, Office of Justice Programs, Washington, DC, Report No. NCJ 172837.
Tjaden P & Thoennes N. (1998). Stalking in America: Findings from the National Violence Against Women Survey: research brief. U.S. Department of Justice; 1998.
Tjaden P & Thoennes N. (2000). Full Report on the Prevalence, Incidence, and Consequences of Violence Against Women. NCJ Publication # 183781, Washington, DC: National Institute of Justice.
Tjaden P & Thoennes N. (2006). Extent, Nature, and Consequences of Rape Victimization: Findings From the National Violence Against Women Survey. U.S. Department of Justice, Office of Justice Programs, Washington, DC, Report No. NCJ 210346.
Traugott MW, Groves RM & Lepkowski J. (1987). Using Dual Frame Designs to Reduce Nonresponse in Telephone Surveys. Public Opinion Quarterly, 51, 522-539.
Tucker C, Brick JM, Meekins B & Morganstein D. (2004). Household Telephone Service and Usage Patterns in the U.S. in 2004. Proceedings of the Section on Survey Research Methods, American Statistical Association, pp. 4528-4534.
U.S. Bureau of Labor Statistics. http://www.dol.gov/dol/topic/statistics/index.htm
U.S. Census. http://www.census.gov/popest/national/asrh/NC-EST2004/NC-EST2004-01.xls
U.S. Department of Health and Human Services (DHHS). Healthy People 2010. 2nd ed. With Understanding and Improving Health and Objectives for Improving Health 2 vols. Washington, DC: U.S. Government Printing Office; 2000.
U.S. Department of Health and Human Services. Report from the Secretaries Task Force on Elder Abuse. Feb 1992. http://aspe.hhs.gov/daltcp/reports/elderab.htm
Vos T, Astbury J, Piers LS, Magnus A, Heenan M, Stanley L, Walker L & Webster K. (2006). Measuring the Impact of Intimate Partner Violence on the Health of Women in Victoria, Australia. Bulletin of the World Health Organization, 84, 9.
Waksberg J (1978). Sampling Methods for Random Digit Dialing. Journal of the American Statistical Association, 73, 40-46.
Watts C, Heise L, Ellsberg M & Moreno, G. (2001). Putting women first: ethical and safety recommendations for research on domestic violence against women. (Document WHO/EIP/GPE/01.1). Geneva: World Health Organization, Global Programme on Evidence for Health Policy.
Yu J & Cooper H. (1983). Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research, 20, 36-44.