
The National Intimate Partner and Sexual Violence Surveillance System (NISVSS)

OMB: 0920-0822


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


B.1. Respondent Universe and Sampling Methods



The target population for the civilian NISVSS is English- or Spanish-speaking men and women aged 18 and older in U.S. households. Those under age 18 are excluded because they are legally considered minors and their participation would necessitate significant changes to the study protocol. Additional exclusions include adults who are: 1) residing in penal, mental, or other institutions; 2) living in other group quarters, such as dormitories, convents, or boarding houses (with ten or more unrelated residents), and do not have a cell phone; 3) living in a dwelling unit without a landline telephone used for voice purposes and without a working cell phone; or 4) unable to speak English or Spanish well enough to be interviewed. Those who do not speak English or Spanish are excluded because the instrument and survey materials are currently limited to those two languages.


The targeted sample size is driven by the number of respondents needed to provide precise (+/- 1 to 2 percentage points) and unsuppressed (relative standard error < 0.30) national prevalence estimates each year, and to provide stable lifetime state-level prevalence estimates within 2-3 years, depending on state population size.
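To illustrate how such precision targets translate into required sample sizes, under simple random sampling the relative standard error (RSE) of an estimated prevalence $\hat{p}$ based on $n$ respondents is approximately

$$ \mathrm{RSE}(\hat{p}) \approx \sqrt{\frac{1-p}{n\,p}}, \qquad \text{so} \qquad n \gtrsim \frac{1-p}{p \cdot \mathrm{RSE}^{2}}. $$

For example, a prevalence of 2.5% estimated with an RSE below 0.30 requires roughly $0.975/(0.025 \times 0.09) \approx 430$ respondents. This is a minimal illustration that ignores the design effects discussed in section B.1.d, which inflate the required sample accordingly.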


B.1.a) American Indian/Alaska Native (AI/AN) Populations


An oversample of AI/AN populations is being included, based on the VAWA Reauthorization described in Section A.1. AI/AN populations experience IPV and SV at a disproportionate rate compared to the general population (Oetzel & Duran, 2004). The only national survey to address this issue, which was conducted between 1995 and 1996, showed AI/AN populations reporting the highest rates of any racial/ethnic group for all types of violence (lifetime rates of 34.1% for rape, 61.4% for physical assault, and 17.0% for stalking) (Tjaden & Thoennes, 2000). Reservation-based and clinically based samples show even higher rates of IPV among AI/AN women, with lifetime prevalence rates between 40% (Robin, Chester, Rasmussen, Jaranson, & Goldman, 1997) and 58.7% (Malcoe, Duran, & Montgomery, 2004). Community-based samples have produced similar results; for example, a New York City sample of AI/AN female residents reported lifetime rates of 48% for rape, 40% for domestic violence, and 40% for multiple victimization experiences (Evans-Campbell, Lindhorst, Huang, & Walters, 2006).


B.1.b) Sampling Plan



According to the National Health Interview Survey (NHIS), 14.5% of adults in the United States have only a cell phone (no landline phone), and this percentage has been increasing by over 2 percentage points per year (Blumberg and Luke, 2008). Those with only a cell phone are two to three times more likely to be under 35 years old (Tucker, Brick, Meekins, and Morganstein, 2004). Furthermore, 56% of those with a landline phone also have a cell phone, and there is limited evidence that respondents who have both types of service differ depending on whether they were reached and interviewed through the landline or the cell phone frame (Kennedy, 2007); they likely differ most in which phone service they use predominantly (Blumberg and Luke, 2008). This motivates selecting 1) adults with only cell phones and 2) adults who have both cell and landline phones but are selected through their cell phones. To maximize coverage of the target population, a dual frame design that randomly samples landline and cell phone numbers will be used. The use of the dual frame design to reduce undercoverage is discussed in more detail in section B.3.e.

While there will be overlap between the cell phone and landline frames, the gains in coverage achieved by sampling both frames, and the efficiency gained by not excluding adults with both types of phone service, outweigh the resulting design effect from including adults who can be selected from either frame and therefore have multiple selection probabilities.

Table 6. Sampling Strata

Cell Phone Strata

  Stratum No.  Stratum
  1            Alabama
  2            Alaska
  ...          ...
  51           Wyoming

Landline Strata

  Stratum No.  Stratum
  52           Alabama (excluding AI/AN ZIP codes)
  53           Alaska (excluding AI/AN ZIP codes)
  ...          ...
  102          Wyoming (excluding AI/AN ZIP codes)
  103          AI/AN ZIP codes

The national sample will be allocated across 103 strata (51 cell phone frame strata and 52 landline frame strata), as shown in Table 6. The cell phone sample will be stratified by state, with one stratum for each state and the District of Columbia.

An oversample of the AI/AN populations will also be included, as described above. Based on the adjusted 2000 Census demographic information, the AI/AN population is largely clustered in known ZIP code areas. These ZIP codes will be pooled into their own stratum. Therefore, the landline sample will comprise 52 strata: one for each state and the District of Columbia (excluding the AI/AN ZIP codes), plus one stratum for the AI/AN ZIP codes.

The goal of NISVSS is to provide national and state-level prevalence rates. For most state-level estimates, data will be pooled across years. To use existing resources most efficiently, the 34 smallest states will also be oversampled by setting a minimum target of 592 interviews per state. There is a direct trade-off between optimizing the sample design for national estimates and optimizing it for state-level estimates. A comparison of different options indicated that a target of at least 592 interviews per state will allow state-level lifetime estimates based on 2-3 years of cumulated data, while increasing the variance of national estimates by less than an estimated 25%.
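The variance inflation that this over-allocation induces in national estimates can be approximated with the standard design effect for unequal weighting (Kish, 1965), where the weights $w_i$ reflect the disproportionate allocation across states:

$$ \mathit{deff}_{w} = 1 + cv^{2}(w) = \frac{n \sum_{i=1}^{n} w_{i}^{2}}{\left( \sum_{i=1}^{n} w_{i} \right)^{2}}. $$

This is the general form underlying the design effects cited in section B.1.d; the realized values will depend on the final state-by-state allocation.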

Sampling Frames

For the landline strata, the recommended sampling frame is maintained by Marketing Systems Group (MSG) using its Genesys application. A list-assisted RDD sample will be selected, stratified by state and AI/AN concentration. Exchanges that are known to be limited to cell phones will be excluded from the landline strata. For the cell phone strata defined by state, an RDD sample of cell phone numbers will be selected within exchanges assigned to wireless providers. For the initial year of data collection, the oversampling by AI/AN concentration will be included in the 592-interview minimum target per state; for later years (as described below), the target sample size for some small states with substantial AI/AN oversampling may be increased to offset increases in design effects due to the stratification by AI/AN concentration.



B.1.c) Multi-Year Data Collection Plan



As a surveillance system, NISVSS is planned to be conducted annually to track national and state-level estimates of IPV, SV, and stalking. Data collection will be conducted continuously throughout every calendar year, producing annual estimates early in the following year. The continuous data collection allows a steady workflow and the retention of skilled interviewing staff. A new sample will be selected every quarter, avoiding problems associated with an aging sample of telephone numbers. The quarterly sample design also allows the implementation of any changes for the next quarter as the need arises, rather than waiting until the following year. The main features of the sampling and data collection design for each survey year are the same as those planned for 2010 in order to preserve the trend in estimates, while allowing for an annual sample size as high as 35,000 interviews, depending on anticipated resources.



B.1.d) Sample Allocation and Precision



Based on the study goals of producing national estimates as well as estimates for the AI/AN population, the first-year sample will be allocated to distribute the intended interviews across the AI/AN stratum, the national landline frame, and the cell phone frame. To address the study goal of achieving state-level estimates for selected prevalence rates by cumulating data across years, the samples will be further stratified by state. Sample size within states will be determined primarily by proportional allocation based on the relative size of individual state populations. However, to decrease the number of years required to generate stable state estimates in smaller states, a minimum target of 592 interviews per year will be set. This over-allocation to smaller states will be done within both the landline and cell phone frames. During the initial months of data collection, the effect of differential response rates across states on the ability to obtain the desired number of interviews per state will be closely monitored. If differential nonresponse leads to fewer than 592 interviews in a subset of states, drawing replicate samples by state will be considered for the data collection starting in 2010.

During the first year, it is anticipated that approximately 1,150 completed interviews will be obtained from the AI/AN stratum. The number of interviews allocated to the AI/AN sample (and other minority populations) may increase in subsequent years of the surveillance system, pending available resources. To determine the amount of oversampling to be conducted in the AI/AN stratum, the effective sample size required to make reasonable estimates for AI/AN women was needed. Based on the NISVS Pilot study, an estimated 2.48% of females experienced an attempted or completed sexual victimization by an intimate partner in the past 12 months. Using this estimate, an effective sample size of 752 women is needed to achieve an estimate with a relative standard error (RSE) of less than 0.23 (for this priority group, this higher level of precision was used instead of the 0.30 threshold beyond which estimates are suppressed). For this stratum, the design effect was estimated to be 1.224 due to unequal weighting and other factors.
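The effective-sample-size arithmetic above can be reproduced approximately under the simple-random-sampling variance formula for a proportion; the following minimal sketch uses that assumption, so small differences from the figures in the text may reflect rounding or refinements in the original calculation:

    # Effective sample size needed for the AI/AN stratum estimate.
    p = 0.0248          # past-12-month prevalence from the NISVS Pilot
    rse_target = 0.23   # targeted relative standard error
    deff = 1.224        # estimated design effect for this stratum

    # RSE(p) = sqrt((1 - p) / (n_eff * p))  =>  solve for n_eff
    n_eff = (1 - p) / (p * rse_target ** 2)   # roughly 743 effective interviews
    n_actual = n_eff * deff                   # roughly 910 actual interviews

    print(round(n_eff), round(n_actual))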

Using recent NHIS results (Blumberg and Luke, 2008), 62.5% of the remaining 33,850 respondents will be allocated to the landline frame and 37.5% to the cell phone frame. This allocation reflects the proportion of adults with only a cell phone, the proportion with only a landline, and the oversampling of AI/AN landline numbers. Assuming 76.6% of households with a landline phone also have a cell phone, it is anticipated that 19.9% of respondents will come from a household with a landline only. Furthermore, based on the NHIS, 14.8% of respondents should come from cell phone-only households. It is anticipated that adults with both types of telephone service will be overrepresented, as they can be selected through either the landline or the cell phone frame. While this disproportionate allocation (across landline-only, landline-and-cell, and cell-only domains) will lead to some loss in efficiency due to weighting, the lower cost of these interviews, given respondents' cooperation at the screener, is expected to outweigh the increased weight variation. Questions have been included in the instrument to allow the estimation of selection probabilities for this dual-frame design.

To illustrate the precision achievable for prevalence estimates, the past-12-month rate of attempted or completed unwanted sex is used. For females, it is estimated to be 2.48% (based on the results of the NISVS Pilot), compared to 1.31% among males. The sample design will not achieve an equal number of male and female respondents, but it will achieve enough male respondents to obtain relatively precise national estimates. Based on this design, the expected design effect will range from 1.24 to 1.78 across the 103 strata, with an additional design effect of 1.23 for national estimates from the oversampling of small strata.



B.1.e) Responsive Design



Landline and cell phone samples perform differently for many reasons. Examples include the inability to screen cell phone samples for known nonworking and business numbers and the cell phone owner's responsibility for air time costs. The cost per interview in each sample will be closely monitored, and the allocation between landline and cell phone numbers may be adjusted during data collection to achieve a more efficient allocation. Releasing the sample in replicates provides some control during the initial weeks of data collection, while more major changes can be made across quarters of data collection, if needed.

Males, particularly in the landline frame, tend to respond at a lower rate than females. While greater substantive interest may lie in the victimization of females, the proportion of males and females will also be monitored. The instrument will provide the capacity to change the selection probability for males and females in households with adults of both sexes during data collection. If the percentage of respondents who are male drops below 40%, oversampling of males will be reconsidered. As with the allocation by telephone service, if substantial changes are needed, changes in allocation will be made between quarters of data collection to avoid the creation of extreme weights (e.g., for males and females in such households interviewed late in the data collection period).

To address nonresponse, a nonresponse protocol will be implemented, as described in section B.3.d. Briefly, the nonresponse phase is a protocol implemented at some point during survey recruitment in order to decrease nonresponse and gain information from sample members who have not yet chosen to respond or participate. Importantly, the point at which the nonresponse protocol is implemented during the recruitment process has cost implications. For example, a decision could be made to move a phone number into the nonresponse recruitment protocol after 8 unsuccessful attempts or after 15 unsuccessful recruitment attempts. As described in section B.3.d, indicators of cost and survey measures will be monitored to determine when the nonresponse protocol is implemented. This approach is “responsive” in that it adapts the protocol to maintain the most effective data collection.


B.1.f) Response Rates


Response rates will be maximized, in part, by utilizing experience from previous surveillance efforts, such as BRFSS and ICARIS-2, and by using sophisticated methodological techniques (for example, responsive design elements). Most telephone surveys have seen a decrease in response rates in recent years. One comparison for telephone survey response rates is the Behavioral Risk Factor Surveillance System (BRFSS). For example, in the 2002 BRFSS, response rates ranged from a low of 43.8% in New York to a high of 82.6% in Minnesota, with a median of 58.3%. The 2002 median response rate is 11.5 percentage points lower than the 1996 median response rate of 69.8% (Behavioral Risk Factor Surveillance System Summary Data Quality Report, 2002).


For the NISVSS, the response rate will be computed based on the American Association for Public Opinion Research (AAPOR) response rate #4 formula (AAPOR, 2008). The AAPOR calculation is a standard developed by researchers and established as a requirement by a leading journal for survey methodology (Public Opinion Quarterly). This particular formula is the most commonly implemented formula that 1) accounts for ineligibility among cases with unknown eligibility; and 2) treats partial interviews (by respondents who have answered all pre-identified essential questions) as interviews. Using a very similar approach to that used in ICARIS-2 Phase-2, NCIPC's most recent RDD survey (personal communication, Marcie-jo Kresnow), the anticipated response rate for NISVSS is approximately 52%. The response rate for the previous NCIPC ICARIS survey was 47.9% (Black et al., 2006). Similar or higher response rates for NISVSS are anticipated (compared to ICARIS-2) because both contractors use similarly extensive interviewer training and similar calling procedures, ensuring that each phone number is fully “worked” within the existing calling protocol (e.g., across all days and weekends, during a range of daytime hours). Furthermore, ICARIS-2 did not use a letter of introduction, nor did it use a state-of-the-art responsive design approach.
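For reference, AAPOR response rate #4 takes the standard form

$$ RR4 = \frac{I + P}{(I + P) + (R + NC + O) + e\,(UH + UO)}, $$

where $I$ denotes complete interviews, $P$ partial interviews, $R$ refusals and break-offs, $NC$ non-contacts, $O$ other eligible non-interviews, $UH$ unknown-if-household cases, $UO$ other cases of unknown eligibility, and $e$ the estimated proportion of unknown-eligibility cases that are in fact eligible (AAPOR, 2008).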

Higher response rates do not necessarily translate to lower nonresponse bias; surveys with lower response rates may exhibit less nonresponse bias in some estimates than surveys with higher response rates (Groves, 2006). Indeed, the NISVS Pilot study showed that estimates changed not so much from increasing response rates through a greater number of calls as from the survey design features used in the additional call attempts (Peytchev, Carley-Baxter, and Lynberg Black, in press). These findings have been used to inform the design of NISVSS.


Based on the anticipated response rate, screening rates, and desired number of respondents, a random sample of 44,883 telephone numbers will be selected. This includes a sample of 20,433 cell phone numbers. Because cell phones cannot be prescreened for known nonworking and business numbers, the cell phone sample requires more numbers to be dialed to achieve the target number of interviews. It also includes 6,908 telephone numbers for the AI/AN stratum, where a lower response rate is expected, compared to the non-AI/AN landline strata. Additional sample numbers will be kept in replicates to be used if needed, depending on outcomes during data collection across these diverse frames.



The importance of increasing and maintaining response rates is recognized. Even if evidence is provided that various survey estimates do not suffer from nonresponse bias, the response rate remains the single number that is reported and used to gauge the representativeness of the survey data. One way to increase response rates is the use of an advance letter to inform households about a forthcoming telephone call (Traugott et al., 1987). Another way is through interviewer training, reducing the variation of response rates by interviewer through improving techniques among lower performing interviewers (Groves and McGonagle, 2001). Promised incentives for completing the survey have been shown to increase response rates in RDD surveys (e.g., Cantor, Wang, and Abi-Habib 2003), while incentives also help reduce nonresponse bias in estimates related to the topic of the survey (e.g., Groves et al. 2006; Groves, Singer, and Corning 2000). Implementing an effective incentive plan can, over the course of data collection, reduce overall costs and burden to respondents by reducing the need for additional calls to potential respondents. Furthermore, we will improve the impact of incentives on increasing response rates and reducing nonresponse bias by implementing a phased design. The implementation of these methods in the NISVSS is described in section B.3.


B.2. Procedures for the Collection of Information


B.2.a) Address Matching and Advance Letter of Introduction


Using a letter to inform households about a forthcoming telephone call and giving them a general description of the survey being conducted has been shown to increase survey response rates. For the purpose of mailing advance letters of introduction, released telephone numbers will be address-matched to the extent possible. The sample and address matches will be obtained from Genesys Sampling, Inc.


Based on the NISVS Pilot study, address matches are expected to be found for an estimated 45% of the released landline telephone numbers. Following the procedure used in the NISVS Pilot (OMB # 0920-0724), respondents with an address match will be mailed an advance letter approximately 1-2 weeks prior to the first telephone contact (Attachment H). The letter will describe the purpose of the survey in both English and Spanish and will: 1) inform sample members that their household has been randomly chosen to participate in the survey; 2) provide useful information regarding the survey; 3) include a toll-free telephone number that respondents can call if they have questions; and 4) include information regarding the incentive that will be offered to eligible respondents who agree to participate.

The study identification numbers will contain an indicator specifying whether or not the household was mailed a letter. The anonymity of those households receiving the letter will be protected because at no time will the address file be linked to data collected during the telephone interview. In addition, upon completion of the study the address file will be destroyed to further prevent any matching.


To maximize human subject protection, the introductory letter has been carefully written to be very general and describe the study in broad terms (Attachment H). The lack of detailed study information in the advance letter is intentional for the protection of the prospective study participant. If the prospective study participant is in a relationship where IPV is present, a more general introductory letter will be less likely to raise suspicion or incite potential perpetrators.


B.2.b) Interviewer Training


Interviewers will be highly trained female staff. The decision to use only female interviewers is based on both the survey topics and the literature regarding gender and reporting. A study conducted by Pollner (1998) indicates that interviewer gender is significantly related to respondents' reports of psychiatric symptoms. Male and female respondents interviewed by women reported more symptoms of depression, substance abuse, and conduct disorders than respondents interviewed by men. These results suggest that female interviewers may create conditions more conducive to disclosure and be perceived as more sympathetic than male interviewers (Pollner, 1998). Furthermore, the sex of the respondent within a given household is not known until the respondent has been randomly selected, so it is not feasible to match interviewer and respondent by sex.


A study of the relationship between interviewer characteristics and disclosure of physical and sexual abuse showed that matching clients and interviewers on sex, race, and age did not increase disclosures of either physical or sexual abuse. Rather, respondents were more likely to disclose sexual abuse to female interviewers than to male interviewers (Dailey and Claus, 2001). An earlier study showed that, in most cases, the socio-demographic characteristics of the interviewer did not affect the quality of participants' responses (Fowler and Mangione, 1990).


An additional consideration specifically related to interviews about IPV, SV, and stalking includes the fact that the majority of victims will be female and the majority of the perpetrators will be male. Thus, females may be less comfortable reporting IPV, SV, and stalking to a male interviewer. Based on the lack of evidence to suggest the need for matching interviewers and respondents by gender and because evidence suggests that female interviewers may create conditions more conducive to disclosure, only female interviewers will conduct interviews for this study.


Similarly, female interviewers may be more comfortable asking these questions than would a male interviewer. It is essential that the interviewers be comfortable with the survey because their level of comfort will, in turn, affect the quality with which they administer the interview. During the hiring process, potential English- and Spanish-speaking interviewers will be informed about the background and purpose of the study and carefully screened to ensure that they are comfortable conducting interviews about the topics included.


Interviewers who have been selected will receive a minimum of 16 hours of training. Only those who have successfully completed all training sessions will conduct interviews. Training topics include the purpose of the study, question-by-question review of the instrument, ways to engage respondents, role-playing, and techniques to foster cooperation and completed surveys. Interviewers will be briefed on the potential challenges of administering a survey on IPV, SV, and stalking.


Interviewers will be trained to follow specific interviewing procedures that have been proven in previous studies. Interviewers will be properly trained in the art of administering questions about IPV, SV, and stalking. For example, interviewers will learn about respondent reactions to similar surveys conducted by CDC (as described in Section A.11). They will learn about the need for the use of explicit language and will be coached on being matter-of-fact in their delivery. Interviewers will also learn about the resource information that will be provided to participants who are coping with traumatic and violent events.


A detailed written training manual specific to this study is being developed. The content of the training will focus on the study background, project specific protocols, confidentiality procedures, questionnaire content, refusal avoidance and well-defined conversion protocols. The information will be presented using a variety of methods, including lecture, demonstration, round-robin practice, paired-practice, and group and paired mock interviews. Due to the nature of the study, particular attention will be paid to the distressed respondent protocol for this study.



Respondent safety is a primary concern for any data collection asking about violence, particularly IPV, SV, and stalking. A distress protocol will be developed. This protocol will address how telephone interviewers should respond to and record instances of emotional, physical, or unknown sources of distress throughout the interview process. The distress protocol will be covered extensively during interviewer training. Any information entered into CATI regarding distress cases will be reviewed by project staff, including the staff clinical psychologist. Project staff will forward information regarding distressed respondents to RTI's IRB and will include information regarding these cases in the weekly report to CDC. Further, to ensure the safety of respondents, we will provide them with a code word that they can use to end the interview at any time they feel concerned for their safety.


A clinical psychologist with prior experience working with victims of interpersonal violence will participate in this training and in the ongoing monitoring and supervision of interviewers. Only interviewers whose work has been reviewed and certified by the project team will be permitted to conduct actual interviews. The certification process will involve completing two paired practice interviews, orally answering the 6-8 most frequently asked questions, and completing 2-3 written quizzes covering the distress protocol, refusal avoidance, and an overview of the study.


While participation in surveys is typically not distressful, it is important for researchers to anticipate potential respondent reactions to the questions being asked and to minimize any adverse impact to the fullest extent possible. Although distress is unlikely, both telephone interviewers and supervisors will be trained in the distress protocol appropriate for this study.

The distress protocol will include step-by-step instructions on handling different types of distress. Interviewers will be properly trained with well established contingency plans, including early termination of the interview if the respondent becomes distressed or concerned for their safety. The protocol will include instructions on steps to follow for different types of distress: physical, emotional, and unknown.


If a respondent does display distress, either verbally or non-verbally (e.g., crying), the interviewer will immediately offer to finish the interview at another time and will offer the respondent the telephone numbers for the National Domestic Violence Hotline and the Rape, Abuse, and Incest National Network so that the respondent may obtain services to help alleviate their emotional distress. Similarly, in the unlikely event that a respondent expresses thoughts or intentions of suicide, the interviewer will stop the interview and will encourage the respondent to call the National Suicide Hotline.


In surveys conducted by NCIPC or by RTI International, there have been no instances where interviewers actually had to transfer respondents to 911. In the extremely unlikely event that a respondent is in immediate physical danger, the interviewer will advise the respondent to hang up and dial 911 for immediate police assistance. If the respondent specifically asks the interviewer to call 911, the call will be transferred directly and the interviewer will then hang up. The supervisor will then record the details of the event and relay them to a project staff member as soon as possible. The project staff member will evaluate any events as they are reported and relay them to the project director and CDC/NCIPC staff as soon as possible.


Resource information will also be provided for participants to access for assistance in coping with traumatic and violent events. These measures have been recommended in the literature (Gondolf & Heckert, 2003; Johnson, 1996; Tjaden and Thoennes, 2000; Sullivan & Cain, 2004; Weisz et al., 2000) and have been consistently used in NCIPC’s previous studies, including ICARIS-2 and the SIPV Pilot Survey.


Throughout data collection, interviewers will be monitored to check the quality of their work and to identify areas needing more training or clarification. Silent audio and video monitoring of interviewers will take place throughout data collection. Approximately 10% of all interviewing time will be observed. Interviewers are scored on their performance during these sessions, which are unknown to the interviewer at the time of administration, and are given written and verbal feedback on their performance. This process allows the identification of any individual interviewer performance issues, as well as larger issues that might affect the data collection. The information obtained is then used as a teaching tool for other interviewers, as appropriate.


Because of the prevalence of IPV, SV, and stalking, it can be anticipated that potential or selected interviewers may have personal experience with the topics being addressed during the interview. Although disclosure of this private information will not be requested, it is important for the interviewers to have support available, as needed, and opportunities to debrief (regardless of their personal history) on a regular basis during the conduct of this study. In addition to participating in the interviewer training and ongoing monitoring and supervision of interviewers, interviewers will attend weekly meetings with members of project staff. The purpose of these meetings, which will occur throughout data collection, is typically to discuss progress in data collection, problems in interviewing, and survey instrument changes. These meetings will allow the interviewers to discuss specific experiences as well as their responses to difficult situations. The clinical psychologist will be in attendance at these meetings and will be available to provide this support during regularly scheduled meetings with the interviewers.


B.2.c) Collection of Survey Data


Households will be contacted by telephone approximately one week after the introductory letter has been sent. Interviewers will introduce themselves and (when applicable) state "You may have received a letter from us" (Attachment C, page 7), then will inform the potential participant about the study, select a respondent, and proceed with the introductory script. In households with more than one resident aged 18 or older, the respondent will be selected using the most recent birthday method.
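A minimal sketch of the most recent birthday selection rule, assuming the interviewer has collected each adult's birth month and day (the function and variable names are illustrative, not part of the CATI system):

    from datetime import date

    def most_recent_birthday(adults, today):
        """Select the adult whose birthday occurred most recently.

        adults: list of (label, birth_month, birth_day) tuples for
        household residents aged 18 or older.
        """
        def days_since_birthday(month, day):
            def safe_date(year):
                # Treat Feb 29 as Mar 1 in non-leap years.
                try:
                    return date(year, month, day)
                except ValueError:
                    return date(year, 3, 1)
            bday = safe_date(today.year)
            if bday > today:  # birthday has not yet occurred this year
                bday = safe_date(today.year - 1)
            return (today - bday).days

        return min(adults, key=lambda a: days_since_birthday(a[1], a[2]))

    # Example: adult "B" (birthday November 2) is selected on December 1.
    print(most_recent_birthday([("A", 3, 14), ("B", 11, 2)], date(2009, 12, 1)))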


The letter of introduction and the survey will be translated into Spanish. To ensure the accuracy and usability of the Spanish versions of the introductory letter and survey instrument, several steps will be taken. A translator will translate the documents into Spanish, and another translator will translate the instruments back into English to ensure that all study materials are properly translated and that the meaning of the questions has been preserved. Both the letter and the survey will be written in language that is commonly understood; to ensure that people of different Hispanic backgrounds can understand the Spanish versions, a third translator will review the study instruments.


If it is determined that the respondent speaks Spanish and not English, a bilingual interviewer will continue with the introductory script, respondent selection, oral consent, and survey administration.


B.2.d) Estimation Procedure


All estimates will be weighted to account for the stratified dual-frame sample design, multiple phases, and additional post-survey adjustments for coverage and nonresponse. The latest National Health Interview Survey data and reported estimates will be used to adjust selection weights in order to combine the landline and cell phone samples, informing the relative size of each sampling frame and the demographic composition within each frame. Census population totals will be used to adjust the combined sample to the U.S. adult population.
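One common form of such a dual-frame adjustment (a sketch of the general approach, not necessarily the exact estimator that will be used) composites the overlap domain of adults who have both landline and cell service:

$$ w_{i} = \begin{cases} w_{i}^{LL} & \text{landline-only adults} \\ \lambda\, w_{i}^{LL} & \text{dual-service adults, landline frame} \\ (1-\lambda)\, w_{i}^{CP} & \text{dual-service adults, cell phone frame} \\ w_{i}^{CP} & \text{cell-only adults} \end{cases} $$

where $w_{i}^{LL}$ and $w_{i}^{CP}$ are the frame-specific selection weights and $0 \le \lambda \le 1$ is a compositing factor informed, for example, by NHIS estimates of the relative sizes of the telephone-service domains; the composited weights are then calibrated to Census totals.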


The variance of survey estimates will be computed using statistical software designed for survey data analysis (e.g., SAS and SUDAAN). These procedures, such as CROSSTAB in SUDAAN, take into account the complex survey design and unequal weighting; the Taylor series linearization option will be used for estimating the variances of proportions.
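For a weighted proportion $\hat{p} = \sum_i w_i y_i / \sum_i w_i$, the Taylor series linearization approach replaces each observation with the linearized variate $z_i = w_i (y_i - \hat{p}) / \sum_j w_j$ and applies the standard with-replacement stratified variance estimator,

$$ \widehat{Var}(\hat{p}) = \sum_{h} \frac{n_h}{n_h - 1} \sum_{j=1}^{n_h} \left( z_{hj} - \bar{z}_h \right)^{2}, $$

where $h$ indexes the design strata and $j$ the sampled telephone numbers within stratum $h$. This is the general form implemented by such software; the exact specification will follow the final weighting plan.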

B.3. Methods to Maximize Response Rates and Deal with Nonresponse


B.3.a) Address Matching


As mentioned in B.2.a, using a letter to inform households about a forthcoming telephone call and giving them a general description of the survey being conducted has been shown to increase survey response rates.


B.3.b) Incentives


Upon completion of the survey, respondents may choose to receive a $10 incentive or to have a similar contribution sent to the United Way. Offering an incentive/donation will help gain cooperation from a larger proportion of the sample and will compensate respondents on cell phones for the air time used. Promised incentives have been found to be an effective means of increasing response rates in RDD surveys (e.g., Cantor, Wang, and Abi-Habib 2003) and of reducing nonresponse bias by gaining cooperation from those less interested in the topic (e.g., Groves et al. 2006; Groves, Singer, and Corning 2000). Approximately 75% of respondents in ICARIS 2.5 chose to make a contribution to the United Way rather than receive the $10 themselves (unpublished data).


In order to measure and reduce nonresponse bias, a subsample of nonrespondents will be selected and an incentive of $40 will be offered to help gain their cooperation. This design should also achieve higher overall response rates by focusing a more effective method on a subsample of nonrespondents. The objective of this higher-incentive design is therefore to increase response rates and to measure and reduce nonresponse bias in survey estimates, with a likely trade-off in increased variance due to weighting. This approach is described in more detail below in section B.3.d.


B.3.c) Interviewer Training and Calling Procedures


Response rates vary greatly across interviewers (e.g., O’Muircheartaigh and Campanelli 1999). Improving interviewer training has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle 2001). For this reason, extensive interviewer training is a key aspect of the success of this data collection effort. The following interviewing procedures, all of which have been proven in the NISVS Pilot and other previous surveys, will be used to maximize response rates:

  1. Interviewers will be briefed on the potential challenges of administering a survey on IPV, SV, and stalking. Well-defined conversion procedures will be established.

  2. If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion staff are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. The main purpose of this contact is to ensure that the potential respondent understands the importance of the survey and to determine if anything can be done to make the survey process easier (e.g., schedule a convenient call-back time). At no time will staff pressure or coerce a potential respondent to change their mind about their participation in the survey, and this will be carefully monitored throughout survey administration to ensure that no undue pressure is placed on potential respondents.

  3. Should a respondent interrupt an interview for reasons such as needing to tend to a household matter, the respondent will be given two options: (1) the interviewer will reschedule the interview for completion at a later time or (2) they will be given a toll-free number designated specifically for this project, for them to call back and complete their interview at their convenience.

  4. Fielding of the survey will take place on an ongoing basis. The initial data collection plan includes 2 additional years of data collection.

  5. Conversion staff will be able to provide a reluctant respondent with the name and telephone number of the contractor’s project manager who can provide respondents with additional information regarding the importance of their participation.

  6. The contractor will establish a toll-free number, dedicated to the project, so potential respondents may call to confirm the study’s legitimacy.


Special attention will be given to scheduling call backs and refusal procedures. The contractor will work closely with CDC/NCIPC to set up these rules and procedures. Examples include:

  • A detailed definition of when a refusal is considered final

  • Monitoring of hang-ups, including when they occur during the interview, and finalization of the case once the maximum allowed number of hang-ups is reached

  • Calling across all days of the week and times of the day: weekdays from 9am to 9pm, Saturdays from 9am to 6pm, and Sundays from noon to 9pm (respondent's time)


Refusal avoidance training will take place approximately 2-4 weeks after data collection begins. During the early period of fielding the survey, supervisors, monitors, and project staff will observe interviewers to evaluate their effectiveness in dealing with respondent objections and overcoming barriers to participation. They will select a team of refusal avoidance specialists from among the interviewers who demonstrate special talent for obtaining cooperation and avoiding initial refusals. These interviewers will be given additional training in specific techniques tailored to the interview, with an emphasis on gaining cooperation, overcoming objections, addressing the concerns of gatekeepers, and encouraging participation. If a respondent refuses to be interviewed or terminates an interview in progress, interviewers will attempt to determine the reason(s) for refusing to participate by asking the following question: “Could you please tell me why you do not wish to participate in the study?” The interviewer will then code the response and any other relevant information. Particular categories of interest include “Don't have the time,” “Inconvenient now,” “Not interested,” “Don't participate in any surveys,” and “Opposed to government intrusiveness into my privacy.”
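An illustrative coding scheme for these refusal reasons, as it might appear in the CATI system (the numeric codes and the catch-all category are hypothetical, not taken from the study protocol):

    # Hypothetical CATI codes for refusal reasons; the category labels are
    # those listed in the text, while the numeric codes are illustrative only.
    REFUSAL_REASON_CODES = {
        1: "Don't have the time",
        2: "Inconvenient now",
        3: "Not interested",
        4: "Don't participate in any surveys",
        5: "Opposed to government intrusiveness into my privacy",
        9: "Other (record verbatim)",  # hypothetical catch-all category
    }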


B.3.d) Phased Design


A nonresponse phase will be introduced toward the end of each year’s data collection period. The primary objective of this design is to reduce nonresponse bias while minimizing the impact on cost. There are several components of the implementation of the nonresponse phase:


  1. Indicators of cost and survey measures will be monitored separately throughout data collection for the landline and cell phone samples.

  2. When phase capacity is reached—the cost indicators start to change (e.g., an increasing number of interviewing hours per completed interview) and survey measures stabilize (e.g., sexual victimization rates do not change with additional interviews)—the nonresponse phase will be initiated; a minimal sketch of such a decision rule follows this list. This will likely occur about two-thirds of the way into each data collection period, but will be informed by the above indicators.

  3. A stratified sample of nonrespondents to the initial phase will be selected. Stratification variables will include sampling frame (landline/cell phone), state, and AI/AN oversample.

  4. The incentive will be increased to $40 for the nonresponse phase, and letters will be sent to those in the landline sample for whom addresses could be matched. A message about the new contact attempt and the higher incentive will be left on each number's answering machine or voice mail.
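A minimal sketch of the phase-capacity decision rule described in item 2, assuming illustrative thresholds and weekly monitoring data (the function name, thresholds, and inputs are hypothetical; the actual decision will weigh several cost and survey-measure indicators):

    def phase_capacity_reached(hours_per_interview, prevalence_by_week,
                               cost_increase=0.10, stability_tol=0.001):
        """Return True when the cost indicator is rising while the key
        survey estimate has stabilized over recent weeks."""
        # Cost indicator: interviewer hours per completed interview rising.
        cost_rising = (hours_per_interview[-1]
                       > hours_per_interview[-4] * (1 + cost_increase))
        # Stability: cumulative prevalence estimate essentially unchanged.
        stable = abs(prevalence_by_week[-1] - prevalence_by_week[-4]) < stability_tol
        return cost_rising and stable

    # Example: cost per interview trending up while the estimate is flat.
    print(phase_capacity_reached([2.1, 2.2, 2.4, 2.6],
                                 [0.0251, 0.0249, 0.0249, 0.0248]))  # True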


This approach is informed by a number of theoretical developments and past empirical research. Ideally, nonresponse bias is eliminated when the response rate reaches 100%. The double sample approach allows the allocation of greater resources to a subsample in an attempt to substantially increase response rates, as originally proposed by Deming (1953). While a 100% response rate is still not achievable in an RDD survey, how the response rate is increased matters. Groves and colleagues (Groves et al. 2006; Groves, Singer, and Corning 2000) have developed a leverage-salience theory of survey participation, postulating that individuals vary in the reasons for which their cooperation can be gained. In particular, their experiments show that while individuals with greater interest or involvement in the survey topic are more likely to respond, which can bias survey estimates, incentives can offset such selection bias because they are disproportionately more effective in gaining cooperation from those less interested in the topic.


Of critical importance is also how the double sample design with an increased incentive is implemented during the study. Initiating the nonresponse phase too early would forgo interviews that could still be collected cost-efficiently, and that bring valuable survey data, prior to subsampling. Starting it too late could result in expending disproportionate resources on a few additional interviews that bring little additional information relative to what could be collected under the next survey protocol. Cost indicators and survey measures will be monitored to inform the decision of when to start the nonresponse phase. Such an approach has been proposed and tested in a national face-to-face survey (Groves and Heeringa 2006) and shows great potential for optimizing the onset of phases during the survey. Several factors in the responsive design will be evaluated and adjusted during data collection. For example, at what point during data collection should the standard incentive be stopped and the higher incentive offered to maximize cost efficiency and response rates simultaneously? Several additional measures will also be closely monitored to inform the decision whether or not to make further adjustments in the contacting and enrollment procedures (for example, the interview completion rate by time of day and the number of interviewer hours needed to complete a sample case). Because the effectiveness of the nonresponse phase depends on its design, such as the incentive amount and sampling fractions, changes to the nonresponse phase will be implemented between quarters based on earlier results.


B.3.e) Methods to Maximize Coverage


As briefly described in the sampling plan, approximately 15% of adults in the U.S. have a cell phone and do not have a landline in the household (Blumberg and Luke, 2008). This substantial rate, coupled with its continuous increase, necessitates that a surveillance system such as NISVSS incorporate the cell phone-only population, which would be missing from a landline telephone frame. To address this growing undercoverage problem, a dual-frame approach will be implemented with RDD samples of landline and cell phone numbers. Gaining cooperation on cell phones can be at least as challenging as on landlines; the intensive methods to increase response rates and reduce nonresponse bias described in section B.3 will be implemented for both the landline and cell phone samples.


Despite the dual-frame approach, additional bias may result from the differential likelihood of reaching respondents who have both types of telephone service, depending on which service they are contacted through. If individuals with both types of service could be selected only through the landline frame, with adults from the cell phone frame screened for having only cell phones, a bias could result because the dual-service adults interviewed on landlines tend to be those who mostly use their landlines. To alleviate this potential problem and to increase the efficiency of data collection, adults with both types of service will be interviewed from each frame. Those with both cell phones and landlines who predominantly use their cell phones (and are therefore unlikely to be interviewed on a landline) will be more likely to be interviewed than if such procedures were not followed. The resulting complexity in identifying selection probabilities will be addressed through weighting, using the individual- and household-level telephone service questions asked during the interview (Attachment C).


B.4. Tests of Procedures or Methods to be Undertaken



To ensure that all skip patterns and data collection procedures are operating correctly, the first several months of data collection will be closely monitored. Any necessary adjustments to the CATI instrument or survey protocols will be made during the initial weeks of data collection. An OMB Change Request will be submitted if there is an increase in burden.


B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


B.5.a) Individuals who have participated in designing the data collection:

CDC staff

Michele Lynberg Black, Ph.D., M.P.H. 770-488-4406 [email protected]

Brandy Airall, M.P.H. 770-488-1306 [email protected]

Kathleen Basile, Ph.D. 770-488-4224 [email protected]

Matthew Breiding, Ph.D. 770-488-1396 [email protected]

Jieru Chen, Ph.D. 770-488-1288 [email protected]

Deb Karch, Ph.D. 770-488-1307 [email protected]

Sharon Smith, Ph.D. 770-488-1368 [email protected]


RTI International Staff

Lisa Carley-Baxter, M.A. 919-485-2616 [email protected]

Andy Peytchev, Ph.D. 919-485-5604 [email protected]

Karol Krotki, Ph.D. 202-728-2485 [email protected]

Susan Twiddy, M.S. 919-541-6189 [email protected]

Christopher Krebs, Ph.D. 919-485-5714 [email protected]

B.5.b) The following individuals from RTI International will participate in the collection of data:

Lisa Carley-Baxter, M.A. 919-485-2616 [email protected]

Andy Peytchev, Ph.D. 919-485-5604 [email protected]

Susan Twiddy, M.S. 919-541-6189 [email protected]


B.5.c) The following individuals will participate in data analysis:


CDC Staff

Michele Lynberg Black, Ph.D., M.P.H. 770-488-4406 [email protected]

Brandy Airall, M.P.H. 770-488-1306 [email protected]

Kathleen Basile, Ph.D. 770-488-4224 [email protected]

Matthew Breiding, Ph.D. 770-488-1396 [email protected]

Jieru Chen, Ph.D. 770-488-1288 [email protected]

Deb Karch, Ph.D. 770-488-1307 [email protected]

Sharon Smith, Ph.D. 770-488-1368 [email protected]


RTI International Staff

Lisa Carley-Baxter, M.A. 919-485-2616 [email protected]

Andy Peytchev, Ph.D. 919-485-5604 [email protected]

Karol Krotki, Ph.D. 202-728-2485 [email protected]

Susan Twiddy, M.S. 919-541-6189 [email protected]

Christopher Krebs, Ph.D. 919-485-5714 [email protected]


REFERENCES


American Association for Public Opinion Research (2008). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 5th edition. Lenexa, Kansas: AAPOR.

Armstrong, J.S. (1975). Monetary Incentives in Mail Surveys. Public Opinion Quarterly, 39, 111-116.


Bachar K & Koss MP. (2001). From prevalence to prevention: Closing the gap between what we know about rape and what we do. In: Renzetti C, Edleson J, Bergen RK, editors. Sourcebook on Violence Against Women. Thousand Oaks (CA): Sage Publications.


Basile KC, Black MC, Simon TR, Arias I, Brener ND & Saltzman LE. (2006). The Association between self reported lifetime history of forced sexual intercourse and recent health risk behaviors: findings from the 2003 National Youth Risk Behavior Survey. Journal of Adolescent Health, 39, 752.


Basile KC, Chen J, Black MC, & Saltzman LE. (2007). Prevalence and characteristics of sexual violence victimization among U.S. Adults 2001-2003. Violence and Victims, 22, 437-448.


Basile KC & Saltzman LE. (2002). Sexual violence surveillance: uniform definitions and recommended data elements. Version 1.0. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control.


Basile KC, Swahn MH, Chen J & Saltzman LE. (2006). Stalking in the United States: Recent National Prevalence Estimates. American Journal of Preventive Medicine, 31, 172-175.


Behavioral Risk Factor Surveillance System Summary Data Quality Report. (2002). http://www.cdc.gov/brfss/technical_infodata/pdf/2002SummaryDataQualityReport.pdf


Black MC & Black RS. (2007). A public health perspective on the ethics of asking and not asking about abuse. American Psychologist, 62, 328.


Black MC & Breiding, MJ. (2008) Adverse health conditions and health risk behaviors associated with intimate partner violence – United States, 2005. MMWR, 57, 113-117.


Black MC, Kresnow MJ, Simon T, Arias I & Shelley G. (2006). Telephone survey respondents’ reactions to questions regarding interpersonal violence. Violence and Victims, 21, 445-459.


Blumberg SJ & Luke JV. (2008). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July-December 2007. Retrieved May 13, 2008, from http://www.cdc.gov/nchs/nhis.htm.


Bonomi AE, Thompson RS & Anderson ML. (2006). Intimate partner violence and women's physical, mental, and social functioning. American Journal of Preventive Medicine, 30, 458-466.


Breiding MJ, Black MC & Ryan GW. (2008). Prevalence and risk factors of intimate partner violence in Eighteen U.S. States/Territories, 2005. American Journal of Preventive Medicine, 34, 112-118.


Brush LD. (1990). Violent acts and injurious outcomes in married couples: methodological issues in the National Survey of Families and Households. Gender and Society, 4, 56-67.


Caetano R & Cunradi C. (2003). Intimate partner violence and depression among whites, blacks, and Hispanics. Annals of Epidemiology, 13, 661–5.


Campbell J, Sullivan CM & Davidson WD. (1995). Women who use domestic violence shelters: changes in depression over time. Psychology of Women Quarterly 19, 237-55.


Campbell JC. (2002). Health consequences of intimate partner violence. Lancet, 359, 1331–6.


Cantor D, O’Hare, BC & O’Connor KS. (2007). The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys. Pp. 471-498 in Advances in Telephone Survey Methodology, edited by J.M. Lepkowski, C. Tucker, J.M. Brick, E. de Leeuw, L. Japec, P.J. Lavrakas, M.W. Link, and R.L. Sangester. New York: Wiley.


Cantor D, Wang K & Abi-Habib N. (2003). Comparing Promised and Pre-Paid Incentives for an Extended Interview on a Random Digit Dial Survey. Proceedings of the Survey Research Methods Section of the ASA.


Centers for Disease Control and Prevention (CDC). (2000). Building data systems for monitoring and responding to violence against women: recommendations from a workshop. MMWR, 49 (No. RR-11).


Church AH. (1993). Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinion Quarterly, 57, 62-79.


Coker AL, Smith PH, Bethea L, King MR & McKeown RE. (2000). Physical health consequences of physical and psychological intimate partner violence. Archives of Family Medicine, 9, 451-7.


Corso PS, Mercy JA, Simon TR, Finkelstein EA & Miller TR. (2007). Medical Costs and Productivity Losses Due to Interpersonal and Self-Directed Violence in the United States. American Journal of Preventive Medicine, 32, 474-482.

Crowell NA & Burgess AW, eds. (1996). Understanding Violence Against Women. Washington, DC: National Academy Press.


Dailey R & Claus RE. (2001). The relationship between interviewer characteristics and physical and sexual abuse disclosures among substance users: A multilevel analysis. Journal of Drug Issues, 31, 867-88.


Defense Manpower Data Center. (2008). “August 2007 Status of Services Survey of Active Duty Members: Tabulations and Responses.” DMDC Report No. 2007–049.


Deming WE. (1953). On a Probability Mechanism to Attain an Economic Balance between the Resultant Error of Nonresponse and the Bias of Nonresponse. Journal of the American Statistical Association, 48, 743-772.


Dillman D. (2000) Mail and Internet Surveys. New York, NY: John Wiley & Sons, Inc.


Evans-Campbell T, Lindhorst T, Huang B & Walters KL. (2006). Interpersonal Violence in the Lives of Urban American Indian and Alaska Native Women: Implications for Health, Mental Health, and Help-Seeking. American Journal of Public Health, 96, 1416-1422.

Fahimi M, Kulp D, & Brick JM. (2008). Bias in List-Assisted 100-Series RDD Sampling. Survey Practice. September 2008.

Fisher BJ. (2004). Measuring Rape Against Women: The Significance of Survey Questions. U.S. Department of Justice.


Fowler Jr FJ & Mangione TW. (1990). Standardized Survey Interviewing. Newbury Park: Sage Publications.


Gelles RJ. (1997). Intimate Violence in Families. 3rd ed. Thousand Oaks (CA): Sage Publications.


Golding JM. (1996). Sexual assault history and limitations in physical functioning in two general population samples. Research in Nursing and Health, 9, 33-44.


Gondolf EW & Heckert DA. (2003). Determinants of women's perceptions of risk in battering relationships. Violence & Victims, 18, 371-386.


Grossman SF & Lundy M. (2003). Use of domestic violence services across race and ethnicity by women aged 55 and older. Violence Against Women, 9(12).


Groves RM. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646-675.


Groves RM, Couper MP, Presser S, Singer E, Tourangeau R, Acosta GP & Nelson L. (2006). Experiments in Producing Nonresponse Bias. Public Opinion Quarterly, 70, 720-736.

Groves RM & Heeringa S. (2006). Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. Journal of the Royal Statistical Society, Series A: Statistics in Society, 169, 439-457.

Groves RM & McGonagle KA. (2001). A Theory-Guided Interviewer Training Protocol Regarding Survey Participation. Journal of Official Statistics, 17, 249-265.

Groves RM, Singer E & Corning A. (2000). Leverage-Saliency Theory of Survey Participation - Description and an Illustration. Public Opinion Quarterly, 64, 299-308.


Heyman RE, Schaffer R, Gimbel C & Kemer-Hoeg S. (1996). A Comparison of the Prevalence of Army and Civilian Spouse Violence. Prepared by Caliber Associates and Behavioral Science Associates for U.S. Army Community and Family Support Center, September, 1996.

Health Information National Trends Survey. (http://cancercontrol.cancer.gov/hints/docs/HINTS_refusal_incentive_abstract.pdf).

Johnson H. (1996). Dangerous Domains: Violence Against Women in Canada. Scarborough, ON: Nelson Canada.


Kaslow N, Thompson MP, Meadows L, Jacobs D, Chance S & Gibb B. (1998). Factors that mediate or moderate the link between partner abuse and suicidal behavior in African American women. Journal of Consulting and Clinical Psychology, 66, 533-40.


Kennedy C. (2007). Constructing Weights for Landline and Cell Phone RDD Surveys. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, May 17-20, Anaheim, CA.


Kessler RC, McGonagle KA, Zhao S, Nelson CB, Hughes M & Eshleman S. (1994). Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Archives of General Psychiatry, 51, 8-19.


Kilpatrick DG, Edmunds CN & Seymour AK. (1992). Rape in America: A Report to the Nation. Arlington, VA: National Victim Center & Medical University of South Carolina.


Kish L. (1965). Survey Sampling. New York: John Wiley and Sons.


Koss MP, Bailey JA, Yuan NP, Herrera VM & Lichter EL. (2003). Depression and PTSD in survivors of male violence: research and training initiatives to facilitate recovery. Psychology of Women Quarterly, 27, 130–42.


Krug et al., eds. (2002). World Report on Violence and Health. Geneva: World Health Organization.


Lundy M & Grossman SF. (2004). Elder abuse: spouse/intimate partner abuse and family abuse among elders. Journal of Elder Abuse & Neglect, 16, 85-102.


Malcoe LH, Duran BM & Montgomery JM. (2004). Socioeconomic Disparities in Intimate Partner Violence Against Native American Women: A Cross-Sectional Study. BMC Medicine, 2, 20.

Marshall AD, Panuzio J & Taft CT. (2005). Intimate Partner Violence Among Military Veterans and Active Duty Servicemen. Clinical Psychology Review, 25, 862-876.

Martin SL, Gibbs DA, Johnson RE, Rentz ED, Clinton-Sherrod AM & Hardison J. (In Press). Spouse Abuse and Child Abuse by Army Soldiers. Journal of Family Violence.

Max W, Rice DP, Finkelstein E, Bardwell RA & Leadbetter S. (2004). The economic toll of intimate partner violence against women in the United States. Violence and Victims, 19(3), 259-272.



McCarroll JE, Newby JH, Thayer LE, Norwood AE, Fullerton CS & Ursano RJ. (1999). Reports of Spouse Abuse in the U.S. Army Central Registry (1989-1997). Military Medicine, 164, 77–84.

McCarty C. (2003) Differences in Response Rates Using Most Recent Versus Final Dispositions in Telephone Surveys. Public Opinion Quarterly, 67, 396-406.

Mechanic MB, Uhlmansiek MH, Weaver TL & Resick PA. (2000). The impact of severe stalking experienced by acutely battered women: an examination of violence, psychological symptoms and strategic responding. Violence and Victims, 15, 443–58.

Merrill LL, Newell CE, Milner JS, Koss MP, Hervig LK, Gold SR, Rosswork SG & Thornton SR. (1998). Prevalence of premilitary adult sexual victimization and aggression in a Navy recruit sample. Military Medicine, 163, 209-212.

Mouton CP, Rovi S, Furniss K & Lasser NL. (1999). The associations between health and domestic violence in older women: results of a pilot study. Journal of Women’s Health & Gender-Based Medicine, 8, 1173-1179.


National Center for Injury Prevention and Control. (2008). CDC Injury Research Agenda, 2009–2018. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention. Available at: http://www.cdc.gov/ncipc.


National Center for Injury Prevention and Control (NCIPC). (2003). Costs of Intimate Partner Violence Against Women in the United States. Atlanta (GA): Centers for Disease Control and Prevention.


National Household Education Survey. (http://www.amstat.org/sections/srms/Proceedings/papers/1997_181.pdf).


National Research Council. (2003). Elder Mistreatment: Abuse, Neglect, and Exploitation in an Aging America. Panel to Review Risk and Prevalence of Elder Abuse and Neglect. Richard J. Bonnie and Robert B. Wallace, Editors. Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.



Oetzel J & Duran B. (2004). Intimate Partner Violence in American Indian and/or Alaska Native Communities: A Social Ecological Framework of Determinants and Interventions. American Indian and Alaska Native Mental Health Research, 11, 49-68.

O'Muircheartaigh C & Campanelli P. (1999). A Multilevel Exploration of the Role of Interviewers in Survey Non-Response. Journal of the Royal Statistical Society, Series A, 162, 437-446.

Peytchev A, Baxter R & Carley-Baxter LR. (In press). Not All Survey Effort is Equal: Reduction of Nonresponse Bias and Nonresponse Error. Public Opinion Quarterly.

Pollner M. (1998). The effects of interviewer gender in mental health interviews. Journal of Nervous & Mental Disease, 186, 369-73.


Puzone CA, Saltzman LE, Kresnow MJ, Thompson MP & Mercy JA. (2000). National trends in intimate partner homicide. Violence Against Women, 6, 409–26.


Rennison C & Rand M. (2003). Non-lethal intimate partner violence: women age 55 or older. Violence Against Women, 9(12), 1417-1428.


Robin RW, Chester B, Rasmussen JK, Jaranson JM & Goldman JK. (1997). Prevalence and Characteristics of Trauma and Post-Traumatic Stress Disorder in a Southwestern American Indian Community. American Journal of Psychiatry, 154, 1582-1588.

Sadler AG, Booth BM & Doebbeling BN. (2005). Gang and Multiple Rapes During Military Service: Health Consequences and Health Care. Journal of the American Medical Women’s Association, 60, 33-41.

Sahr R. (2006). Consumer Price Index (CPI) Conversion Factors 1800 to Estimated 2015 to Convert Dollars of 2005. (Revised January 18, 2006). Available: http://oregonstate.edu/Dept/pol_sci/fac/sahr/cv2005.xls (Accessibility Verified January 23, 2006).


Singer E. (2002). The Use of Incentives to Reduce Nonresponse in Household Surveys. In R.M. Groves, D.A. Dillman, J.L. Eltinge & R.J.A. Little (Eds.), Survey Nonresponse (pp. 163-178). New York: Wiley.


Singer E & Bossarte RM. (2006). Incentives for survey participation: when are they coercive? American Journal of Preventive Medicine, 31, 411-418.


Sullivan CM & Cain D. (2004). Ethical and safety considerations when obtaining information from or about battered women for research purposes. Journal of Interpersonal Violence, 19, 603-18.


Teaster, P.A. (2002). A response to the abuse of vulnerable adults: the 2000 survey of state adult protective services. Washington, D.C.: National Center on Elder Abuse.


Thompson M, Arias I, Basile KC, & Desai S. (2002). The Association Between Childhood Physical and Sexual Victimization and Health Problems in Adulthood in a Nationally Representative Sample of Women. Journal of Interpersonal Violence, 17, 1115-1129.


Thornberry O & Massey J. (1988). Trends in United States Telephone Coverage Across Time and Subgroups. In R.M. Groves, P.P. Biemer, L.E. Lyberg, J.T. Massey, W.L. Nicholls II & J. Waksberg (Eds.), Telephone Survey Methodology. New York: Wiley.


Tjaden P & Thoennes N. (1998). Prevalence, Incidence, and Consequences of Violence against Women: Findings from the National Violence Against Women Survey. U.S. Department of Justice, Office of Justice Programs, Washington, DC, Report No. NCJ 172837.

Tjaden P & Thoennes N. (1998). Stalking in America: Findings from the National Violence Against Women Survey. Research brief. U.S. Department of Justice.


Tjaden P & Thoennes N. (2000). Full Report on the Prevalence, Incidence, and Consequences of Violence Against Women. Washington, DC: National Institute of Justice, Report No. NCJ 183781.

Tjaden P & Thoennes N. (2006). Extent, Nature, and Consequences of Rape Victimization: Findings From the National Violence Against Women Survey. U.S. Department of Justice, Office of Justice Programs, Washington, DC, Report No. NCJ 210346.

Traugott MW, Groves RM & Lepkowski J. (1987). Using Dual Frame Designs to Reduce Nonresponse in Telephone Surveys. Public Opinion Quarterly, 51, 522-539.


Tucker C, Brick JM, Meekins B & Morganstein D. (2004). Household Telephone Service and Usage Patterns in the U.S. in 2004. Proceedings of the Section on Survey Research Methods, American Statistical Association, pp. 4528-4534.


U.S. Department of Labor, Bureau of Labor Statistics. http://www.dol.gov/dol/topic/statistics/index.htm


U.S. Census Bureau. http://www.census.gov/popest/national/asrh/NC-EST2004/NC-EST2004-01.xls


U.S. Department of Health and Human Services (DHHS). (2000). Healthy People 2010. 2nd ed. With Understanding and Improving Health and Objectives for Improving Health. 2 vols. Washington, DC: U.S. Government Printing Office.


U.S. Department of Health and Human Services. (February 1992). Report from the Secretary's Task Force on Elder Abuse. http://aspe.hhs.gov/daltcp/reports/elderab.htm


Vos T, Astbury J, Piers LS, Magnus A, Heenan M, Stanley L, Walker L & Webster K. (2006). Measuring the Impact of Intimate Partner Violence on the Health of Women in Victoria, Australia. Bulletin of the World Health Organization, 84(9).

Waksberg J (1978). Sampling Methods for Random Digit Dialing. Journal of the American Statistical Association, 73, 40-46.


Watts C, Heise L, Ellsberg M & Moreno G. (2001). Putting women first: ethical and safety recommendations for research on domestic violence against women (Document WHO/EIP/GPE/01.1). Geneva: World Health Organization, Global Programme on Evidence for Health Policy.


Yu J & Cooper H. (1983). Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research, 20, 36-44.




1 Singer and colleagues (Singer E, Van Hoewyk J & Maher MP. (2000). Experiments with Incentives in Telephone Surveys. Public Opinion Quarterly, 64(2), 171-188) have been cited as evidence that promised incentives are ineffective at increasing survey response rates. However, only approximately 200 sample cases were assigned to each condition (with or without incentive) in their experiments, so very large differences were required to reach statistical significance. The pattern of results actually supported the effectiveness of promised incentives: in all four of their experiments, the response rate was higher in the incentive condition. Furthermore, the experiments were conducted in 1996, when response rates were close to 70%; such rates are presumably more difficult to increase through incentives than the lower response rates observed on that same survey today (below 50%).
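To make this power argument concrete, the following minimal sketch (in Python; not part of the original submission) approximates the smallest difference in response rates that a two-sided two-proportion test could detect with roughly 200 cases per condition and a 70% baseline rate, the figures given above. The .05 alpha and .80 power are conventional assumptions, not figures from the source.

    from math import sqrt

    # Approximate minimum detectable difference for a two-sided two-proportion
    # z-test, assuming alpha = .05 (z = 1.96), power = .80 (z = 0.84), and
    # that both arms share the variance implied by the baseline rate.
    def min_detectable_difference(p_base, n_per_arm, z_alpha=1.96, z_beta=0.84):
        se = sqrt(2 * p_base * (1 - p_base) / n_per_arm)
        return (z_alpha + z_beta) * se

    # Roughly 200 cases per incentive condition at a ~70% baseline response rate:
    print(f"{min_detectable_difference(0.70, 200):.1%}")  # about 12.8%

Under these assumptions, an incentive would have needed to raise the response rate by roughly 13 percentage points to reach significance, consistent with the argument that the smaller but uniformly positive observed differences went undetected.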


