
SUPPORTING STATEMENT: PART B










OMB# 0920-0822



The National Intimate Partner and Sexual Violence Survey (NISVS)





April 20, 2016













Point of Contact:

Sharon G. Smith, PhD

Behavioral Scientist

Contact Information:

Centers for Disease Control and Prevention

National Center for Injury Prevention and Control

4770 Buford Highway NE MS F-64

Atlanta, GA 30341-3724

phone: 770.488.1363

email: [email protected]







B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


This is a revision request for the currently approved National Intimate Partner and Sexual Violence Survey (OMB# 0920-0822, expiration date 6/30/2016) for 2 years. This survey has been conducted annually since 2010. Data collection for the 2016-2017 cycle is slated to begin in September 2016 and run through September 2017.

For the 2016-2017 data collection year, the periodicity of the administration of the NISVS instrument is being changed from annual to biennial. This change is proposed to increase the number of interviews collected during a 12-month period from 12,500 to 25,000. In addition, CDC has secured funding to increase the number of NISVS interviews conducted in each data collection cycle by as much as 7,500 initially during 2016-2017, and by as many as 15,000 over the next three to four years.


In addition, in collaboration with the Department of Defense (DoD), NISVS (using the same newly revised survey described above) will collect information regarding the experiences of IPV, SV, and stalking among active duty women and men in the military and wives of active duty men. The collection of data on behalf of DoD will take place during the first six months of data collection in the 2016-2017 cycle. The NISVS survey was last administered to active duty females and wives of active duty males in 2010.



B.1. Respondent Universe and Sampling Methods



Target Population 1

The target population for the civilian NISVS is English- or Spanish-speaking men and women aged 18 and older in U.S. households. Those under age 18 are excluded because they are legally considered minors and their participation would necessitate significant changes to the study protocol. Additional exclusions include adults who are: 1) residing in penal, mental, or other institutions; 2) living in other group quarters such as dormitories, convents, or boarding houses (with ten or more unrelated residents) and without a cell phone; 3) living in a dwelling unit without a landline telephone used for voice purposes and without a working cell phone; or 4) unable to speak English or Spanish well enough to be interviewed. Those who do not speak English or Spanish are excluded because the instrument and survey materials are currently limited to those two languages.


It should be noted that the overall NISVS sample size is defined first by the amount of funding available to support interviews under the NISVS data collection contract. Given the budget, CDC attempts to allocate the available sample to states with two objectives in mind: (1) to report as many national prevalence estimates as possible and (2) to report as many state-level prevalence estimates as possible, provided that the estimates meet reliability criteria. The reliability criteria implemented in NISVS analysis are directly related to the sample size available for each domain of violence victimization and each subgroup of respondents. Of the many victimization domains examined by NISVS, past-12-month sexual violence experienced by men places the highest demand on sample size.
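The reliability screen applied to each estimate can be expressed compactly. The sketch below is illustrative rather than NISVS production code; the 0.30 cutoff on the relative standard error is an assumption, inferred from the "borderline" range (RSE 0.301-0.339) cited later in this section.

```python
# Illustrative sketch of a relative standard error (RSE) reliability screen.
# The 0.30 cutoff is an assumption consistent with the "borderline" RSE range
# (0.301-0.339) discussed later in this section.

def relative_standard_error(estimate: float, standard_error: float) -> float:
    """RSE = SE / estimate, defined here for positive prevalence estimates."""
    if estimate <= 0:
        raise ValueError("RSE is undefined for a zero or negative estimate")
    return standard_error / estimate

def is_reportable(estimate: float, standard_error: float, cutoff: float = 0.30) -> bool:
    """Treat an estimate as statistically reliable if its RSE is at or below the cutoff."""
    return relative_standard_error(estimate, standard_error) <= cutoff

# Hypothetical example: a 1.5% prevalence estimate with a 0.4 percentage-point SE
print(is_reportable(0.015, 0.004))  # RSE ~0.27 -> True
```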


In prior data collection years, interviews were conducted over 12-month periods defined by calendar year. Data collection was continuous; when one calendar year ended, collection for the next calendar year began immediately. Data processing and reporting for a given calendar year happened simultaneously with data collection and quality assurance for the next. For analysis purposes, we had sample sizes of 16,507 (2010), 12,727 (2011), and 11,940 (2012) interviews that were completed or completed at least through the sexual violence section; sample sizes declined, likely due in part to increased costs and available funding. Even for the largest data collection year, 2010, some national estimates and many of the state-specific estimates on key outcomes were statistically unreliable and therefore not reportable (Black et al., 2011).


Under the proposed new design, approximately the same number of interviews would be conducted as in each one-year data collection period above, but in a 6-month rather than a 12-month period. This would be accomplished by staggering data collection periods so that data could be collected in two adjacent 6-month periods using funds from different fiscal years. The goal of this new design is to provide state-specific estimates for as many states as possible more quickly than has been possible in the past. This is critical given that one of the primary foci of NISVS is to provide information that is useful for guiding action at the state level.


The benefits of this new approach are as follows:

  • Combining adjacent 6-month data collection periods yields twice as many interviews in half the time.

  • More precise national estimates

  • Statistically reliable national estimates not previously available from 1 year of data (e.g., male rape in the past 12 months, discussed below)

  • Statistically reliable estimates for more states than previously available from 1 year of data

  • Built-in time for data processing, reporting, and preparing data and documentation for public use


CDC can use data years 2011 and 2012 to illustrate the gains realized with the new design. In 2011, there were 12,727 respondents who either completed the interview or completed it at least through the sexual violence section and were therefore available for analysis; slightly more than the 12,500 we anticipate for the base period of the new design. Combining data from 2011 and 2012 yields 24,667 records (just shy of the minimum 25,000 anticipated for the base plus first option periods in the new design, and well below the maximum of 32,500 possible).

For males, using 2011 data we are able to present a national estimate for lifetime contact sexual violence; however, only 40 of the 50 states plus DC have statistically reliable estimates. Adding 2012 data allows presentation of estimates for all but 1 state. In addition, the increased sample size allows for a more precise national estimate. As another example, for female rape as a minor, using data from 2011 alone allows for state-specific estimates for only 2 states. Adding 2012 allows for presentation of statistically reliable estimates for 28 states, with an additional 11 being borderline statistically reliable (relative standard error 0.301 - 0.339). Given that the sample size for the base plus first option period is anticipated to be greater than that available from 2011-2012 combined, we anticipate that estimates will be presentable for these states (and possibly more).

As a final example, male rape in the past 12 months has an estimated relative standard error of 0.32 for 2011 and 2012 data combined. Using the same logic as in the prior example, we expect this national estimate to be reportable following option year 1.
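The logic behind this expectation can be checked with a rough back-of-envelope calculation, assuming the RSE scales approximately as 1/sqrt(n) with a constant design effect; this is a simplification for illustration, not the actual variance estimation method.

```python
# Back-of-envelope check: with RSE assumed to scale as 1/sqrt(n), the larger
# combined sample should push the male past-12-month rape estimate below an
# assumed 0.30 reliability cutoff. Figures are taken from the text above.
import math

rse_2011_2012 = 0.32    # reported RSE for 2011-2012 combined
n_2011_2012 = 24_667    # analyzable records, 2011 + 2012
n_new_design = 32_500   # maximum anticipated for base + first option period

projected_rse = rse_2011_2012 * math.sqrt(n_2011_2012 / n_new_design)
print(round(projected_rse, 3))  # ~0.279, under the assumed 0.30 cutoff
```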


The challenges with this approach are as follows:

  • More interviewers and, therefore, more high-level interviewer monitors are needed, which may increase costs

  • A gap in data collection may result in the loss of interviewers and, therefore, the need to hire and train additional interviewers, which may impact interview quality

Regarding the first concern, while there may be some increase in cost due to the compressed time frame, the same number of interviewing and monitoring hours will be in effect in the compressed period as would have been in effect for an entire calendar year under the old design. Regarding the second concern, we will compare some of the health outcomes, as well as prevalence estimates for some of the more common outcomes, between the base period (when interviewers are becoming familiar with the survey) and the first option period to gauge the impact that a gap may have on data quality. We will then conduct a similar comparison of estimates from option period 1 and option period 2. While new interviewers take some time to become familiar with the instrument, it is possible that at least some of the interviewers assigned to the base and first option periods will still be on staff and available to conduct interviews (after a 6-month gap) beginning with option period 2. Therefore, we would expect smaller differences, if any, to be observed relative to the first comparison.


The changes being implemented in this package (implementation of a new sampling cycle, increases in the sample size and thus the precision, and changes to the questionnaire) may lead to a break in series. We will explore changes in prevalence rates by comparing national and state-level data from the 2010, 2011, 2012, and 2015 NISVS data years with data collected during the 2016-2017 data collection year, with regard to (1) estimates of the overall prevalence of sexual violence, stalking, and intimate partner violence victimization by gender; (2) racial/ethnic variation in prevalence; (3) how types of perpetrators vary by violence type; and (4) the age at which victimization typically begins. CDC will ensure that changes in NISVS methodology, issues regarding comparability with previous data, and the findings of the above-described analyses are thoroughly discussed in subsequent OMB documentation and described in subsequent NISVS reports, publications, and presentations.


Target Population 2

In collaboration with the Department of Defense (DoD), NISVS (using the same newly revised survey described above) will collect information regarding the experiences of IPV, SV, and stalking among active duty women and men in the military and wives of active duty men. This data collection will take place during the first six months of data collection. The targeted sample size for this data collection is 10,800.


The target population for the DoD NISVS is English speaking men and women aged 18 and older who are active duty military members or wives of active duty male military members. Those who do not speak English are excluded because the instrument and survey materials for the military population are currently limited to English.


Based on findings from the National Health Interview Survey, 45.8% of adults were living in households with both a landline and a cell phone (Blumberg and Luke, 2015). There is limited evidence that respondents who have both types of service differ depending on whether they were reached and interviewed through the landline or the cell phone frame (see Kennedy, 2007, for more information regarding these differences). Dual-service users interviewed on landlines versus cell phones likely differ on which phone service they use most (Blumberg and Luke, 2015). Differences associated with this service preference provide motivation for selecting 1) adults with only cell phones and 2) adults who have both cell and landline phones but are selected through their cell phones. To maximize the coverage of the target population, a dual-frame design that randomly samples landline and cell phone numbers is used.

While there is an overlap between the cell phone frame and the landline frame, the gains in coverage achieved by sampling both frames and the efficiency from not excluding any adults with both types of phone service outweigh the resulting design effect due to inclusion of adults with multiple selection probabilities, who can be selected from either frame.

From January to June 2015, only 7.6% of adults were estimated to live in households with only landlines (Blumberg and Luke, 2015), down from 19.2% in January-June 2012. If this trend continues, a single-frame RDD survey of cell phone numbers will have near-complete coverage of the population. A single-frame design offers great efficiencies, as well as new challenges. This alternative will be evaluated in terms of potential bias and variance reduction, so that the survey design can be responsive to changes in the survey environment.



Table 5. Sampling Strata

Cell Phone Strata

Stratum No.    Stratum
1              Alabama
2              Alaska
...            ...
51             Wyoming

Landline Strata

Stratum No.    Stratum
52             Alabama
53             Alaska
...            ...
102            Wyoming



The national sample is allocated across 102 strata: 51 cell phone frame strata and 51 landline frame strata, as shown in Table 5. Each frame's sample is stratified by state, with one stratum for each of the 50 states and the District of Columbia.







Sampling Frames

For the landline strata, the recommended sampling frame is maintained by Marketing Systems Group (MSG) using Cell-WINS. A list-assisted RDD sample is selected and stratified by state. Exchanges that are known to be limited to cell phones are excluded from the landline strata. For the cell phone strata, also defined by state, an RDD sample of cell phone numbers is selected within exchanges assigned to wireless providers.



Multi-year data collection plan



As a surveillance system, NISVS plans to collect data on a biennial basis, in 6-month data collection periods, to track national and state-level estimates of IPV, SV, and stalking. During data collection periods, a new sample is selected every quarter, avoiding problems associated with an aging sample of telephone numbers. The quarterly sample design also allows any needed changes to be implemented in the following quarter as the need arises, rather than waiting until the following year. The main features of the sampling and data collection design for each survey period are similar to those used in prior NISVS data collection years. For the base data collection period, a minimum of 12,500 interviews will be collected. In each of the option periods, the sample size could be increased by as many as 7,500 interviews, depending on available resources.



Based on the study goal of producing national estimates, each sample is allocated to distribute the intended interviews across the national landline frame, and the cell phone frame. To address the study goal of achieving state-level estimates for selected prevalence rates through cumulating data across years, the samples are further stratified by state. Sample size within states is determined primarily by proportional allocation based on the relative size of individual state populations. However, to decrease the number of years required to generate stable state estimates among smaller states, a minimum target of 225 interviews per data collection period was set. This over-allocation to smaller states was done within both landline and cell phone frames.
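To make the allocation rule concrete, the following is a minimal illustrative sketch (not NISVS production code): interviews are distributed across states in proportion to adult population, subject to the 225-interview floor described above. The state populations in the example are placeholders.

```python
# Sketch of proportional allocation with a per-state floor of 225 interviews.
# States falling below the floor are pinned at 225 and the remaining total is
# re-allocated proportionally among the rest. Rounding may shift totals by a
# few interviews; a production allocation would reconcile this.

def allocate_interviews(populations: dict[str, int], total: int, floor: int = 225) -> dict[str, int]:
    alloc: dict[str, int] = {}
    remaining_states = dict(populations)
    remaining_total = total
    while True:
        pop_sum = sum(remaining_states.values())
        prop = {s: remaining_total * p / pop_sum for s, p in remaining_states.items()}
        below = {s for s, n in prop.items() if n < floor}
        if not below:
            alloc.update({s: round(n) for s, n in prop.items()})
            return alloc
        for s in below:           # pin small states at the floor, then
            alloc[s] = floor      # re-allocate the rest proportionally
            remaining_total -= floor
            del remaining_states[s]

# Hypothetical three-state example with 12,500 total interviews
print(allocate_interviews({"CA": 30_000_000, "WY": 450_000, "GA": 8_000_000}, 12_500))
# -> WY pinned at 225; CA and GA share the remaining 12,275 proportionally
```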

The most recent NHIS results based on data from January-June 2015 (Blumberg and Luke, 2015) estimate that 47.4% of adults live in households with only cell phones while only 7.6% live in households with only landlines (41.6% have both types of service and 3.4% no phone service). We use this information together with cost of interviewing data to optimize the sample allocation to the cell and landline frames (Brick et al., 2011). Since the cell phone only rate continues to increase and the landline only rate continues to decrease, NISVS uses projections to plan sample allocation and also evaluates the point at which a dual frame design becomes suboptimal compared to a single frame RDD design.
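The principle behind the cost-aware frame split can be illustrated with the classical cost-constrained allocation result for stratified designs. The sketch below ignores frame overlap and dual-user compositing, so it illustrates the principle rather than reproducing the Brick et al. (2011) procedure; all input values are hypothetical.

```python
# Simplified cost-aware allocation between frames: for a fixed budget, the
# classical optimum samples each stratum in proportion to
# (population share * outcome SD) / sqrt(unit cost). Overlap between frames
# is ignored here, so this is an illustration only.
import math

def frame_allocation(budget: float, frames: dict[str, dict]) -> dict[str, int]:
    # frames[name] = {"share": population share, "sd": outcome SD, "cost": cost per interview}
    k = budget / sum(v["share"] * v["sd"] * math.sqrt(v["cost"]) for v in frames.values())
    return {f: round(k * v["share"] * v["sd"] / math.sqrt(v["cost"]))
            for f, v in frames.items()}

# Hypothetical inputs: cell interviews cost more; cell-reachable share is larger
print(frame_allocation(1_000_000, {
    "cell":     {"share": 0.62, "sd": 0.12, "cost": 90.0},
    "landline": {"share": 0.38, "sd": 0.12, "cost": 60.0},
}))  # spends ~the full budget: n_cell ~7,400, n_landline ~5,600
```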


Landline and cell phone samples perform differently for many reasons. Examples include the inability to screen cell phone samples for known nonworking and business numbers, and the cell phone owner's responsibility for air time costs. The cost per interview in each sample is closely monitored, and the allocation between landline and cell phone numbers is adjusted during data collection to achieve a more nearly optimal allocation. Releasing the sample in replicates provides some control during the initial weeks of data collection, while more major changes can be made across quarters of data collection, if needed.

Males, particularly in the landline frame, tend to respond at a lower rate than females. While greater substantive interest may lie in victimization of females, the proportion of males and females is also monitored. The instrument provides the capacity to change the selection probability for males and females in households with adults of both sexes during data collection. If the percentage of respondents who are male drops below 40%, oversampling of males will be reconsidered. As with the allocation by telephone service, if substantial changes are needed, changes in allocation are made between quarters of data collection to avoid the creation of extreme weights (e.g., for males and females in such households interviewed late in the data collection period).
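A minimal sketch of this monitoring rule follows. The 40% trigger comes from the text above; the size and cap of the probability adjustment are illustrative assumptions, and any change would be applied only at a quarter boundary as described.

```python
# Sketch of the sex-balance monitoring rule: if the running share of male
# respondents falls below 40%, raise the within-household selection
# probability for males in mixed-sex households at the next quarter boundary.
# The +0.10 bump and 0.75 cap are illustrative assumptions, not NISVS values.

def male_selection_probability(completed_male: int, completed_total: int,
                               current_p_male: float = 0.5) -> float:
    share_male = completed_male / completed_total
    if share_male < 0.40:
        # Adjusted only between quarters to avoid creating extreme weights
        # for respondents interviewed late in the data collection period.
        return min(current_p_male + 0.10, 0.75)
    return current_p_male

print(male_selection_probability(1_450, 4_000))  # 36% male -> raised to 0.6
```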


Response rates are maximized, in part, by drawing on experience from previous surveillance efforts, such as the Behavioral Risk Factor Surveillance System (BRFSS) and the second Injury Control and Risk Survey (ICARIS-2), and by using sophisticated methodological techniques (for example, responsive design elements). Specifically, this is done by applying lessons learned from Black, Kresnow, Simon, Arias, and Shelley's 2006 article in Violence and Victims, "Telephone Survey Respondents' Reactions to Questions Regarding Interpersonal Violence," available at http://search.proquest.com/docview/208528108/fulltextPDF/F8E6BDA42C34C5FPQ/1?accountid=26724


Moreover, NISVS implements a responsive design (Laflamme & Karganis, 2010; Peytchev, 2010). To increase response rates, our contractor institutes responsive design features such as the following: (1) monitoring interview completion rates to determine whether to release additional replicates (how many and from which strata) to achieve sample size and precision targets; (2) monitoring interview completion to determine the subsampling rate for Phase 2, to maximize response rates and achieve the target number of interviews; and (3) using recently developed models to determine when to stop dialing specific unproductive numbers, increasing efficiency and response rates by directing interviewers to more productive cases.


For NISVS, the response rate is computed using the American Association for Public Opinion Research (AAPOR) Response Rate 4 formula (AAPOR, 2008). The AAPOR calculation is a standard developed by researchers and established as a requirement by a leading survey methodology journal (Public Opinion Quarterly). This particular formula is the most commonly implemented formula that 1) accounts for ineligibility among cases with unknown eligibility and 2) treats partial interviews (by respondents who have answered all pre-identified essential questions) as interviews. The response rate for the previous NCIPC ICARIS survey was 47.9% (Black et al., 2006). The response rate for NISVS in 2012 was 20.32% for landlines and 28.04% for cell phones; the overall response rate for 2012 was 25.20%. Preliminary estimates from the 2015 NISVS survey indicate similar frame-specific and overall response rates (AAPOR RR4, unweighted: 21.6% landline; 27.3% cell phone; 25.7% overall).
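For concreteness, the RR4 computation can be sketched as follows. The disposition counts in the example are hypothetical, and e, the proportion of unknown-eligibility cases assumed eligible, must itself be estimated from the sample (see AAPOR, 2008, for the disposition definitions).

```python
# AAPOR Response Rate 4: partial interviews count as interviews, and cases of
# unknown eligibility are discounted by an estimated eligibility rate e.

def aapor_rr4(I: int, P: int, R: int, NC: int, O: int,
              UH: int, UO: int, e: float) -> float:
    """
    I  = complete interviews           R  = refusals and break-offs
    P  = partial interviews            NC = non-contacts
    O  = other eligible non-interviews UH/UO = unknown household / other unknown
    e  = estimated proportion of unknown-eligibility cases that are eligible
    """
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))

# Hypothetical dispositions for one quarter of one frame
print(round(aapor_rr4(I=3_000, P=150, R=4_000, NC=3_500, O=350,
                      UH=8_000, UO=1_000, e=0.35), 3))  # -> 0.223
```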

The importance of increasing and maintaining response rates is well recognized. Even when evidence shows that various survey estimates do not suffer from nonresponse bias, the response rate remains the single number that is reported and used to gauge the representativeness of survey data. One way to increase overall response rates is through interviewer training, reducing the variation in response rates across interviewers by improving techniques among lower-performing interviewers (Groves and McGonagle, 2001). Promised incentives for completing the survey have been shown to increase response rates in RDD surveys (e.g., Cantor, Wang, and Abi-Habib, 2003), while also helping to reduce nonresponse bias in estimates related to the topic of the survey (e.g., Groves et al., 2006; Groves, Singer, and Corning, 2000). Implementing an effective incentive plan can, over the course of data collection, reduce overall costs and burden to respondents by reducing the need for additional calls to potential respondents. Furthermore, we have tried to improve the impact of incentives on increasing response rates and reducing nonresponse bias by implementing a phased design. The implementation of these methods in NISVS is described in section B.3.


B.2. Procedures for the Collection of Information


Using a letter to inform households about a forthcoming telephone call and to give them a general description of the survey has been shown to increase survey response rates. For the purpose of mailing advance letters of introduction, released telephone numbers are address-matched to the extent possible. The sample and address matches are obtained from Genesys Sampling, Inc.


Following the procedure used in the NISVS Pilot (OMB # 0920-0724), respondents with an address match are mailed an advance letter approximately 1-2 weeks prior to the first telephone contact (Attachments I1-I2). The letter, written in both English and Spanish, describes the purpose of the survey and: 1) informs sample members that their household has been randomly chosen to participate in the survey; 2) provides useful information regarding the survey; 3) includes a toll-free telephone number that respondents can call if they have questions; and 4) includes information regarding the incentive that is offered to eligible respondents who agree to participate.

The study identification number contains an indicator specifying whether or not the household was mailed a letter. The anonymity of households receiving the letter is protected because at no time is the address file linked to data collected during the telephone interview. In addition, upon completion of the study, the address file is destroyed to further prevent any matching.


To maximize human subject protection, the introductory letter has been carefully written to be very general and describe the study in broad terms (Attachments I1-I2). The lack of detailed study information in the advance letter is intentional for the protection of the prospective study participant. If the prospective study participant is in a relationship where IPV is present, a more general introductory letter is less likely to raise suspicion or incite potential perpetrators.



Interviewers are highly trained female staff. The decision to use only female interviewers is based on both the survey topics and the literature regarding gender and reporting. A study conducted by Pollner (1998) indicates that interviewer gender is significantly related to respondents' reports of psychiatric symptoms. Male and female respondents interviewed by women reported more symptoms of depression, substance abuse, and conduct disorders than respondents interviewed by men. These results suggest that female interviewers may create conditions more conducive to disclosure and be perceived as more sympathetic than male interviewers (Pollner, 1998). Furthermore, the sex of the respondent selected from a specific household is unknown until the respondent has been randomly selected. Thus, it is not feasible to match interviewer and respondent by sex.


A study of the relationship between interviewer characteristics and disclosure of physical and sexual abuse showed that matching clients and interviewers on sex, race, and age did not increase disclosures of either physical or sexual abuse. Rather, respondents were more likely to disclose sexual abuse to female interviewers than to male interviewers (Dailey and Claus, 2001). An earlier study showed that, in most cases, the socio-demographic characteristics of the interviewer did not affect the quality of participants' responses (Fowler and Mangione, 1990).


An additional consideration specifically related to interviews about IPV, SV, and stalking includes the fact that the majority of victims are female and the majority of the perpetrators are male. Thus, females may be less comfortable reporting IPV, SV, and stalking to a male interviewer. Based on the lack of evidence to suggest the need for matching interviewers and respondents by gender and because evidence suggests that female interviewers may create conditions more conducive to disclosure, only female interviewers conduct interviews for this study.


Similarly, female interviewers may be more comfortable asking these questions than would a male interviewer. It is essential that the interviewers be comfortable with the survey because their level of comfort, in turn, impacts the quality with which they administer the interview. During the hiring process, potential English and Spanish speaking interviewers are informed about the background and purpose of the study and carefully screened to ensure that they are comfortable conducting interviews about the topics included.


Interviewers receive a minimum of 12 hours of training. Only those who successfully complete all training sessions conduct interviews. Training topics include the purpose of the study, question-by-question review of the instrument, ways to engage respondents, role-playing, and techniques to foster cooperation and completed surveys. Interviewers are briefed on the potential challenges of administering a survey on IPV, SV, and stalking.


Interviewers are trained to follow specific interviewing procedures that have been proven in previous studies, including the art of administering questions about IPV, SV, and stalking. For example, interviewers learn about respondent reactions to similar surveys conducted by CDC (as described in Section A.11). They learn about the need for explicit language and are coached on being matter-of-fact in their delivery. Interviewers also learn about the resource information provided to participants who are coping with traumatic and violent events.


A detailed written training manual specific to this study has been developed. The content of the training focuses on the study background, project specific protocols, confidentiality procedures, questionnaire content, refusal avoidance and well-defined conversion protocols. The information is presented using a variety of methods, including lecture, demonstration, round-robin practice, paired-practice, and group and paired mock interviews. Due to the nature of the study, particular attention is paid to the distressed respondent protocol for this study.



Respondent safety is a primary concern for any data collection asking about violence, particularly IPV, SV, and stalking. The distress protocol addresses how telephone interviewers should respond to and record emotional, physical, or unknown sources of distress throughout the interview process, and it is covered extensively during interviewer training. Any information entered into CATI regarding distress cases is reviewed by project staff, including the staff clinical psychologist. Project staff forward information regarding distressed respondents to the contractor's IRB and include information regarding these cases in the weekly report to CDC. Further, to ensure the safety of respondents, we provide them with a code word that they can use to end the interview at any time they feel concerned for their safety.


A clinical psychologist with prior experience working with victims of interpersonal violence participates in the training and in ongoing monitoring and supervision of interviewers. Only interviewers whose work has been reviewed and certified by the project team are permitted to conduct actual interviews. The certification process involves observation during training, completion of practice interviews, completion of paired mock interviews, successful completion of written and oral quizzes, signing of a confidentiality agreement, and demonstration of proficient NISVS survey administration with correct disposition coding. Interviewers are not allowed to begin working until all certification steps are completed and documented.


While participation in surveys is typically not distressful, it is important for researchers to anticipate potential respondent reactions to the questions being asked and to minimize any adverse impact to the fullest extent possible. Although distress is unlikely, both telephone interviewers and supervisors are trained in the distress protocol appropriate for this study.

The distress protocol includes step-by-step instructions on handling different types of distress. Interviewers are properly trained with well-established contingency plans, including early termination of the interview if the respondent becomes distressed or concerned for their safety. The protocol includes instructions on steps to follow for different types of distress: physical, emotional, and unknown.


If a respondent does display distress, either verbally or non-verbally (e.g., crying), the interviewer immediately offers to finish the interview at another time and offers the respondent the telephone numbers for the National Domestic Violence Hotline and the Rape, Abuse, and Incest National Network so that the respondent may obtain services to help alleviate their emotional distress. Similarly, in the unlikely event that a respondent expresses thoughts or intentions of suicide, the interviewer stops the interview and encourages the respondent to call the National Suicide Hotline.


In surveys conducted by NCIPC or by the contractor, there have been no instances in which interviewers actually had to transfer respondents to 911. In the extremely unlikely event that a respondent is in immediate physical danger, the interviewer will advise the respondent to hang up and dial 911 for immediate police assistance. If the respondent specifically asks the interviewer to call 911, the call will be transferred directly and the interviewer will then hang up. The supervisor will then record the details of the event and relay them to a project staff member as soon as possible. The project staff member will evaluate any events as they are reported and relay them to the project director and CDC/NCIPC staff as soon as possible.


Resource information will also be provided for participants to access for assistance in coping with traumatic and violent events. These measures have been recommended in the literature (Gondolf & Heckert, 2003; Johnson, 1996; Tjaden and Thoennes, 2000; Sullivan & Cain, 2004; Weisz et al., 2000) and have been consistently used in NCIPC’s previous studies, including ICARIS-2 and the SIPV Pilot Survey.


Throughout data collection, interviewers are monitored to check the quality of their work and to identify areas needing more training or clarification. Silent audio and video monitoring of interviewers takes place throughout data collection, and approximately 10% of all interviewing time is observed. Interviewers are scored on their performance during these sessions, which are not announced to the interviewer at the time of administration, and are given written and verbal feedback on their performance. This process allows the identification of individual interviewer performance issues, as well as larger issues that might affect the data collection. The information obtained is then used as a teaching tool for other interviewers, as appropriate.


Because of the prevalence of IPV, SV, and stalking, it can be anticipated that potential or selected interviewers may have personal experience with the topics being addressed during the interview. Although disclosure of this private information is not requested, it is important for the interviewers to have support available, as needed, and opportunities to debrief (regardless of their personal history) on a regular basis during the conduct of this study. In addition to participating in the interviewer training and ongoing monitoring and supervision of interviewers, interviewers attend weekly meetings with members of project staff. The purpose of these meetings, which occur throughout data collection, is typically to discuss progress in data collection, problems in interviewing, and survey instrument changes. These meetings allow the interviewers to discuss specific experiences as well as their responses to difficult situations. The clinical psychologist is available to provide this support during regularly scheduled meetings with the interviewers.



Households are contacted by telephone approximately one week after the introductory letter has been sent. Interviewers introduce themselves, state (when applicable) "You may have received a letter from us" (Attachment I), inform the potential participant about the study, select a respondent, and proceed with the introductory script. In households with multiple residents aged 18 or older, the respondent is selected using the most recent birthday method.
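The most recent birthday method can be sketched as follows; this is an illustration of the selection rule, not the actual CATI implementation.

```python
# Sketch of the most-recent-birthday method: among the eligible adults in the
# household, the one whose birthday occurred most recently before the call
# date becomes the designated respondent.
from datetime import date

def most_recent_birthday(birthdays: dict[str, tuple[int, int]], today: date) -> str:
    """birthdays maps each adult to a (month, day) pair; returns the selected adult."""
    def days_since(md: tuple[int, int]) -> int:
        this_year = date(today.year, md[0], md[1])
        last = this_year if this_year <= today else date(today.year - 1, md[0], md[1])
        return (today - last).days
    return min(birthdays, key=lambda adult: days_since(birthdays[adult]))

# Hypothetical two-adult household contacted on 2016-09-15
print(most_recent_birthday({"adult_a": (3, 2), "adult_b": (8, 30)}, date(2016, 9, 15)))
# -> adult_b, whose birthday (Aug 30) is the more recent
```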


The letter of introduction and the survey have been translated into Spanish. To ensure the accuracy and usability of the Spanish versions of the introductory letter and survey instrument, several steps have been taken. A translator translates the documents into Spanish, and another translator translates them back into English to ensure that all study materials were properly translated and that the meaning of the questions has been preserved. Both the letter and survey are written in commonly understood language; to ensure that people of different Hispanic backgrounds can understand the Spanish versions, a third translator has reviewed the study instruments.

If it is determined that the respondent speaks Spanish and not English, a bilingual interviewer will continue with the introductory script, respondent selection, oral consent, and survey administration.


All estimates are weighted to account for the stratified dual-frame sample design, multiple phases, and additional post-survey adjustments for coverage and nonresponse. The latest National Health Interview Survey data and reported estimates are used to adjust selection weights in order to combine the landline and cell phone samples, informing the relative size of each sampling frame and the demographic composition within each frame. Census population totals are used to adjust the combined sample to the U.S. adult population.
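A highly simplified sketch of these two weighting steps follows. Actual NISVS weighting involves additional adjustments (e.g., nonresponse and within-household selection), and the compositing factor and category codes used here are placeholders.

```python
# Sketch of (1) compositing landline and cell samples and (2) raking the
# combined weights to population margins via iterative proportional fitting.
import numpy as np

def composite(base_weight: np.ndarray, dual_user: np.ndarray, from_cell: np.ndarray,
              lam: float = 0.5) -> np.ndarray:
    """Down-weight dual-service adults (selectable from either frame) by lam
    if interviewed from the cell frame, 1-lam if from the landline frame;
    single-service adults keep their base weight. lam = 0.5 is a placeholder."""
    factor = np.where(dual_user, np.where(from_cell, lam, 1.0 - lam), 1.0)
    return base_weight * factor

def rake(weights: np.ndarray, groups: list[np.ndarray], targets: list[np.ndarray],
         iters: int = 25) -> np.ndarray:
    """groups[k][i] is respondent i's category code (0..C-1) on margin k;
    targets[k][c] is the population total for category c. Assumes every
    category is observed in the sample."""
    w = weights.astype(float).copy()
    for _ in range(iters):
        for g, t in zip(groups, targets):
            totals = np.bincount(g, weights=w, minlength=len(t))
            w *= (t / totals)[g]   # scale each respondent's weight to hit the margin
    return w
```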


The variances of survey estimates are computed using statistical software designed for survey data analysis (e.g., SAS and SUDAAN). These procedures, such as CROSSTAB in SUDAAN, take into account the complex survey design and unequal weighting; Taylor series linearization is used for estimating the variances of proportions.
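The linearization approach can be illustrated as follows. This sketch mimics the kind of computation SUDAAN's CROSSTAB performs for a weighted proportion under a stratified, with-replacement design approximation; it is an illustration, not SUDAAN itself, and it treats each respondent as its own PSU, as is typical for RDD telephone samples.

```python
# Taylor-series linearization variance for a weighted proportion under a
# stratified design (with-replacement approximation, respondent = PSU).
import numpy as np

def taylor_var_proportion(y: np.ndarray, w: np.ndarray, stratum: np.ndarray):
    p = np.sum(w * y) / np.sum(w)        # weighted proportion (ratio estimator)
    z = w * (y - p) / np.sum(w)          # linearized values
    var = 0.0
    for h in np.unique(stratum):
        zh = z[stratum == h]
        nh = len(zh)
        if nh > 1:                        # stratum contribution to the variance
            var += nh / (nh - 1) * np.sum((zh - zh.mean()) ** 2)
    return p, var

# Tiny synthetic example: two strata, unequal weights
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.2, size=200)       # 0/1 victimization indicator
w = rng.uniform(0.5, 3.0, size=200)      # analysis weights
s = np.repeat([1, 2], 100)               # stratum codes
p, var = taylor_var_proportion(y, w, s)
print(f"p = {p:.3f}, SE = {var ** 0.5:.3f}, RSE = {var ** 0.5 / p:.3f}")
```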

B.3. Methods to Maximize Response Rates and Deal with Nonresponse



As discussed in Part A, NISVS uses an incentive plan structure that has been approved for several previous years (2010, 2011, 2012, and 2015) of information collection requests (OMB# 0920-0822). The data collection is divided into 4 quarters. In each quarter, Phase 1 data collection is carried out for approximately 12 weeks. During Phase 1, all respondents are offered a $10 incentive to complete the survey.




Response rates vary greatly across interviewers (e.g., O’Muircheartaigh and Campanelli 1999). Improving interviewer training has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle 2001). For this reason, extensive interviewer training is a key aspect of the success of this data collection effort. The following interviewing procedures, all of which have been proven in the NISVS Pilot and other previous surveys, are used to maximize response rates:

  1. Interviewers are briefed on the potential challenges of administering a survey on IPV, SV, and stalking. Well-defined conversion procedures have been established.

  2. If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion staff are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. The main purpose of this contact is to ensure that the potential respondent understands the importance of the survey and to determine if anything can be done to make the survey process easier (e.g., schedule a convenient call-back time). At no time do staff pressure or coerce a potential respondent to change their mind about their participation in the survey, and this is carefully monitored throughout survey administration to ensure that no undue pressure is placed on potential respondents.

  3. Should a respondent interrupt an interview for reasons such as needing to tend to a household matter, the respondent is given two options: (1) the interviewer will reschedule the interview for completion at a later time, or (2) the respondent will be given a toll-free number, designated specifically for this project, to call back and complete the interview at their convenience.

  4. Fielding of the survey takes place on an ongoing basis.

  5. Conversion staff is able to provide a reluctant respondent with the name and telephone number of the contractor’s project manager who can provide respondents with additional information regarding the importance of their participation.

  6. The contractor has established a toll-free number, dedicated to the project, so potential respondents may call to confirm the study’s legitimacy.


Special attention has been given to scheduling call backs and refusal procedures. The contractor works closely with CDC/NCIPC to set up these rules and procedures. Examples include:

  • A detailed definition of when a refusal is considered final

  • Monitoring of hang-ups, when they occur during the interview, and finalization of the case once the maximum number of allowed hang-ups is reached

  • Calling occurs only during weekdays from 9am to 9pm, Saturdays from 9am to 6pm, and Sundays from noon to 9pm (respondent’s time).

  • Calling occurs across all days of the week and times of the day (up to 9pm). 


During the early period of fielding the survey, supervisors, monitors, and project staff observe interviewers to evaluate their effectiveness in dealing with respondent objections and overcoming barriers to participation. They select a team of refusal avoidance specialists from among the interviewers who demonstrate special talents for obtaining cooperation and avoiding initial refusals. These interviewers are given additional training in specific techniques tailored to the interview, with an emphasis on gaining cooperation, overcoming objections, addressing concerns of gatekeepers, and encouraging participation. If a respondent refuses to be interviewed or terminates an interview in progress, interviewers attempt to determine the reason(s) for refusing to participate by asking the following question: "Could you please tell me why you do not wish to participate in the study?" The interviewer then codes the response and any other relevant information. Particular categories of interest include "Don't have the time," "Inconvenient now," "Not interested," "Don't participate in any surveys," and "Opposed to government intrusiveness into my privacy."



A nonresponse phase is introduced toward the end of each data collection period. The primary objective of this design is to reduce nonresponse bias while minimizing the impact on cost. There are several components to the implementation of the nonresponse phase:


  1. Indicators of cost and survey measures are monitored separately throughout data collection for the landline and cell phone samples.

  2. When phase capacity is reached, that is, when the cost indicators start to change (e.g., an increasing number of interviewing hours per completed interview) and survey measures stabilize (e.g., sexual victimization rates do not change with additional interviews), the nonresponse phase is initiated (a monitoring sketch follows this list). This typically occurs about two-thirds of the way into each data collection period, but is informed by the above indicators.

  3. A stratified sample of non-respondents to the initial phase is selected. Stratification variables include sampling frame (landline/cell phone) and state.

  4. In Phase 2 of the two-stage design, the incentive is increased to $40 for the subsample of nonrespondents. An answering machine or voice mail message about the new contact attempt and higher incentive is left for each number in the Phase 2 subsample.
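The phase-capacity rule in item 2 can be sketched as follows; the window sizes and tolerance are illustrative placeholders, not operational NISVS values.

```python
# Sketch of phase-capacity detection: trigger the nonresponse phase when the
# rolling prevalence estimate has stabilized and the cost per completed
# interview is rising. Window and tolerance values are illustrative.

def phase_capacity_reached(prevalence_by_week: list[float],
                           hours_per_complete_by_week: list[float],
                           window: int = 4, tol: float = 0.002) -> bool:
    if len(prevalence_by_week) < 2 * window:
        return False               # not enough history to judge stability
    recent = prevalence_by_week[-window:]
    prior = prevalence_by_week[-2 * window:-window]
    estimate_stable = abs(sum(recent) / window - sum(prior) / window) < tol
    cost_rising = (sum(hours_per_complete_by_week[-window:]) >
                   sum(hours_per_complete_by_week[-2 * window:-window]))
    return estimate_stable and cost_rising
```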


This approach is informed by a number of theoretical developments and past empirical research. Ideally, nonresponse bias is eliminated when the response rate reaches 100%. The double-sample approach allows the allocation of greater resources to a subsample in an attempt to substantially increase response rates, as originally proposed by Deming (1953). While a 100% response rate is still not achieved in an RDD survey, what matters is how the response rate is increased. Groves and colleagues (Groves et al., 2006; Groves, Singer, and Corning, 2000) have developed a leverage-salience theory of survey participation, postulating that individuals vary in the reasons for which their cooperation can be gained. In particular, their experiments show that while individuals with greater interest or involvement in the survey topic are more likely to respond, which can bias survey estimates, incentives can offset such selection bias because they are disproportionately effective in gaining cooperation from those less interested in the topic.




B.4. Tests of Procedures or Methods to be Undertaken



To ensure that all skip patterns and data collection procedures are operating correctly, the first several months of data collection are closely monitored, and any necessary adjustments to the CATI instrument or survey protocols will be made during the initial weeks of data collection.


B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


Individuals who participated in designing the data collection:

CDC staff

Mikel Walters, Ph.D.

Sharon Smith, Ph.D.

Kathleen Basile, Ph.D.

Jieru Chen, Ph.D.

Melissa Merrick, Ph.D.

Jeffrey E. Hall, Ph.D.

Thomas Simon, Ph.D.

Kevin Webb

Marcie-jo Kresnow–Sedacca


RTI International Staff

Lisa Carley-Baxter, M.A.

Kim Aspinwall, M.A.

Andy Peytchev, Ph.D.

Lilia Filippenko, M.A.

Jessica Williams, M.A.

Christopher Krebs, Ph.D.


The following individuals from the contractor participate in the collection of data:

Lisa Carley-Baxter, M.A.

Kim Aspinwall, M.A.

Andy Peytchev, Ph.D.

Lilia Filippenko, M.A.

Jessica Williams, M.A.

Christopher Krebs, Ph.D.



The following individuals participate in data analysis:


CDC Staff

Jieru Chen, Ph.D.

Xinjian Zhang, Ph.D.

Marcie-jo Kresnow–Sedacca

Robert Thomas

Ann Smalls


RTI International Staff

Lisa Carley-Baxter, M.A.

Kim Aspinwall, M.A.

Andy Peytchev, Ph.D.

Jennifer Iriondo-Perez, M.A.

Christopher Krebs, Ph.D.










REFERENCES


American Association for Public Opinion Research (2008). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 5th edition. Lenexa, Kansas: AAPOR.

Armstrong, J.S. (1975). Monetary Incentives in Mail Surveys. Public Opinion Quarterly, 39, 111-116.


Bachar K, Koss MP. (2001). From prevalence to prevention: Closing the gap between what we know about rape and what we do. In: Renzetti C, Edleson J, Bergen RK, editors. Sourcebook on Violence Against Women. Thousand Oaks (CA): Sage Publications.


Basile KC, Black MC, Simon TR, Arias I, Brener ND & Saltzman LE. (2006). The Association between self reported lifetime history of forced sexual intercourse and recent health risk behaviors: findings from the 2003 National Youth Risk Behavior Survey. Journal of Adolescent Health, 39, 752.


Basile KC, Chen J, Black MC, & Saltzman LE. (2007). Prevalence and characteristics of sexual violence victimization among U.S. Adults 2001-2003. Violence and Victims, 22, 437-448.


Basile KC & Saltzman LE. (2002). Sexual violence surveillance: uniform definitions and recommended data elements. Version 1.0. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control.


Basile KC, Swahn MH, Chen J & Saltzman LE. (2006). Stalking in the United States: Recent National Prevalence Estimates. American Journal of Preventive Medicine, 31, 172-175.


Behavioral Risk Factor Surveillance System 2014 Summary Data Quality Report: http://www.cdc.gov/brfss/annual_data/2014/pdf/2014_dqr.pdf


Black MC & Black RS. (2007). A public health perspective on the ethics of asking and not asking about abuse. American Psychologist, 62, 328.


Black MC & Breiding, MJ. (2008) Adverse health conditions and health risk behaviors associated with intimate partner violence – United States, 2005. MMWR, 57, 113-117.


Black, MC, Basile, KC, Breiding, MJ, Smith, SG, Walters, ML, Merrick, MT, Chen, J, & Stevens, MR. (2011). The National Intimate Partner and Sexual Violence Survey (NISVS): 2010 Summary Report. Atlanta, GA: National Center for Injury Prevention and Control, Centers for Disease Control and Prevention.


Blumberg S J & Luke JV. (2008). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July-December 2007. Retrieved May 13, 2008, from http://www.cdc.gov/nchs/nhis.htm.


Blumberg, S. J., & Luke, J. V. (2012). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July-December 2011. Retrieved June 28, 2012, from http://www.cdc.gov/nchs/nhis.htm


Blumberg, S. J., & Luke, J. V. (2015). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, January-June 2015. Retrieved July 12, 2016, from http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201512.pdf


Bonomi AE, Thompson RS & Anderson Ml. (2006). Intimate partner violence and women’s physical, mental, and social functioning. Am J Prev Med, 30, 458-466


Breiding MJ, Black MC & Ryan GW. (2008). Prevalence and risk factors of intimate partner violence in Eighteen U.S. States/Territories, 2005. American Journal of Preventive Medicine, 34, 112-118.


Brick, J. M., Cervantes, I. F., Lee, S., & Norman, G. (2011). Nonsampling errors in dual frame telephone surveys. Survey Methodology, 37(1), 1-12.


Brush LD. (1990). Violent acts and injurious outcomes in married couples: methodological issues in the National Survey of Families and Households. Gender and Society, 4, 56-67.


Caetano R & Cunradi C. (2003). Intimate partner violence and depression among whites, blacks, and Hispanics. Annals of Epidemiology, 13, 661–5.


Campbell J, Sullivan CM & Davidson WD. (1995). Women who use domestic violence shelters: changes in depression over time. Psychology of Women Quarterly 19, 237-55.


Campbell JC. (2002). Health consequences of intimate partner violence. Lancet, 359, 1331–6.


Cantor D, O’Hare, BC & O’Connor KS. (2007). The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys. Pp. 471-498 in Advances in Telephone Survey Methodology, edited by J.M. Lepkowski, C. Tucker, J.M. Brick, E. de Leeuw, L. Japec, P.J. Lavrakas, M.W. Link, and R.L. Sangester. New York: Wiley.


Cantor D, Wang K & Abi-Habib N. (2003). Comparing Promised and Pre-Paid Incentives for an Extended Interview on a Random Digit Dial Survey. Proceedings of the Survey Research Methods Section of the ASA.


Centers for Disease Control and Prevention (CDC). (2009). Building data systems for monitoring and responding to violence against women: recommendations from a workshop. MMWR, 49 (No. RR-11).


Church AH. (1993). Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinion Quarterly, 57, 62-79.


Coker AL, Smith PH, Bethea L, King MR & McKeown RE. (2000). Physical health consequences of physical and psychological intimate partner violence. Archives of Family Medicine, 9, 451-7.


Corso PS, Mercy JA, Simon TR, Finkelstein EA & Miller TR. (2007). Medical Costs and Productivity Losses Due to Interpersonal and Self-Directed Violence in the United States. American Journal of Prevention Medicine, 32, 474-482.

Crowell NA, Burgess AW, eds. Understanding Violence Against Women. Washington, D.C.; National Academy Press; 1996.


Dailey R & Claus RE. (2001). The relationship between interviewer characteristics and physical and sexual abuse disclosures among substance users: A multilevel analysis. Journal of Drug Issues, 31, 867-88.


Defense Manpower Data Center. (2008). “August 2007 Status of Services Survey of Active Duty Members: Tabulations and Responses.” DMDC Report No. 2007–049.


Deming W E. (1953). On a Probability Mechanism to Attain an Economic Balance between the Resultant Error of Nonresponse and the Bias of Nonresponse. Journal of the American Statistical Association, 48, 743-772.


Dillman D. (2000) Mail and Internet Surveys. New York, NY: John Wiley & Sons, Inc.


Evans-Campbell T, Lindhorst T, Huang B & Walters KL. (2006). Interpersonal Violence in the Lives of Urban American Indian and Alaska Native Women: Implications for Health, Mental Health, and Help-Seeking. American Journal of Public Health, 96, 1416-1422.

Fahimi M, Kulp D, & Brick JM. (2008). Bias in List-Assisted 100-Series RDD Sampling. Survey Practice. September 2008.

Fisher BJ. (2004). Measuring Rape Against Women: The Significance of Survey Questions. U.S. Department of Justice.


Fowler Jr FJ & Mangione TW. (1990). Standardized Survey Interviewing. Newbury Park: Sage Publications.


Gelles RJ. (1997). Intimate Violence in Families. 3rd ed. Thousand Oaks (CA): Sage Publications.


Golding JM. (1996). Sexual assault history and limitations in physical functioning in two general population samples. Research in Nursing and Health, 9, 33-44.


Gondolf EW & Heckert DA. (2003). Determinants of women's perceptions of risk in battering relationships. Violence & Victims, 18, 371-386.


Grossman, S. F., & Lundy, M. (2003). Use of domestic violence services across race and ethnicity by women aged 55 and older. Violence Against Women, 9(12), 2003.


Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646-675.


Groves R M, Couper MP, Presser S, Singer E, Tourangeau R, Acosta GP & Nelson L. (2006). Experiments in Producing Nonresponse Bias. Public Opinion Quarterly 70, 720-736.


Groves RM & Heeringa S. (2006). Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. Journal of the Royal Statistical Society Series A: Statistics in Society 169, 439-457.


Groves R M & McGonagle KA. (2001). A Theory-Guided Interviewer Training Protocol Regarding Survey Participation. Journal of Official Statistics 17, 249-265.


Groves R M, Singer E & Corning A.(2000). Leverage-Saliency Theory of Survey Participation - Description and an Illustration. Public Opinion Quarterly, 64, 299-308.


Heyman RE, Schaffer R, Gimbel C & Kemer-Hoeg S. (1996). A Comparison of the Prevalence of Army and Civilian Spouse Violence. Prepared by Caliber Associates and Behavioral Science Associates for U.S. Army Community and Family Support Center, September, 1996.

Health Information National Trends Survey. (http://cancercontrol.cancer.gov/hints/docs/HINTS_refusal_incentive_abstract.pdf)

Johnson H. (1996). Dangerous Domains: Violence Against Women in Canada. Scarborough, ON: Nelson Canada.


Kaslow N, Thompson MP, Meadows L, Jacobs D, Chance S & Gibb B. (1998). Factors that mediate or moderate the link between partner abuse and suicidal behavior in African American women. Journal of Consulting and Clinical Psychology, 66, 533-40.


Kennedy C. (2007). Constructing Weights for Landline and Cell Phone RDD Surveys. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, May 17-20, Anaheim, CA.


Kessler RC, McGonagle KA, Zhao S, Nelson CB, Hughes M, & Eshleman S. (1994). Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Archives of General Psychiatry, 51, 8-19.


Kilpatrick DG, Edmunds CN, Seymour AK. (1992). Rape in America: A Report to the Nation. Arlington, VA: National Victim Center & Medical University of South Carolina.


Kish L. Survey Sampling. John Wiley and Sons, Inc. New York; 1965.


Koss MP, Bailey JA, Yuan NP, Herrera VM & Lichter EL. (2003). Depression and PTSD in survivors of male violence: research and training initiatives to facilitate recovery. Psychology of Women Quarterly, 27, 130–42.


Krug et al., eds. (2002). World Report on Violence and Health. Geneva, World Health Organization; 2002.


Lundy M & Grossman SF. (2004). Elder abuse: spouse/intimate partner abuse and family abuse among elders. Journal of Elder Abuse & Neglect, 16, 85-102.


Malcoe LH, Duran BM & Montgomery JM. (2004). Socioeconomic Disparities in Intimate Partner Violence Against Native American Women: A Cross-Sectional Study. BMC Medicine, 2, 20.

Marshall A, Panuzioa J & Taft CT. (2005). Intimate Partner Violence Among Military Veterans and Active Duty Servicemen. Clinical Psychology Review, 25, 862-876.

Martin SL, Gibbs DA, Johnson RE, Rentz ED, Clinton-Sherrod AM & Hardison J. (In Press). Spouse Abuse and Child Abuse by Army Soldiers. Journal of Family Violence.

Max W, Rice DP, Finkelstein E, Bardwell RA, Leadbetter S. The economic toll of intimate partner violence against women in the United States. Violence Vict. 2004;19(3):259-72.



McCarroll JE, Newby JH, Thayer LE, Norwood AE, Fullerton CS & Ursano RJ. (1999). Reports of Spouse Abuse in the U.S. Army Central Registry (1989-1997). Military Medicine, 164, 77–84.

McCarty C. (2003) Differences in Response Rates Using Most Recent Versus Final Dispositions in Telephone Surveys. Public Opinion Quarterly, 67, 396-406.

Mechanic MB, Uhlmansiek MH, Weaver TL & Resick PA. (2000). The impact of severe stalking experienced by acutely battered women: an examination of violence, psychological symptoms and strategic responding. Violence and Victims, 15, 443–58.

Merrill LL, Newell CE, Milner JS, Koss MP, Hervig LK, Gold SR, Rosswork SG & Thornton SR. (1998). Prevalence of premilitary adult sexual victimization and aggression in a Navy recruit sample. Military Medicine, 163, 209-212.

Mouton CP, Rovi S, Furniss K & Lasser NL. (1999). The associations between health and domestic violence in older women: results of a pilot study. Journal of Women’s Health & Gender-Based Medicine, 8, 1173-1179.


National Center for Injury Prevention and Control. (2008). CDC Injury Research Agenda, 2009–2018. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention. Available at: http://www.cdc.gov/ncipc.


National Center for Injury Prevention and Control (NCIPC). (2003). Costs of Intimate Partner Violence Against Women in the United States. Atlanta (GA): Centers for Disease Control and Prevention.


National Household Education Survey. (http://www.amstat.org/sections/srms/Proceedings/papers/1997_181.pdf).


National Research Council. (2003). Elder Mistreatment: Abuse, Neglect, and Exploitation in an Aging America. Panel to Review Risk and Prevalence of Elder Abuse and Neglect. Richard J. Bonnie and Robert B. Wallace, Editors. Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.



Oetzel J & Duran B. (2004). Intimate Partner Violence in American Indian and/or Alaska Native Communities: A Social Ecological Framework of Determinants and Interventions. American Indian and Alaska Native Mental Health Research, 11, 49-68.

O'Muircheartaigh C & Campanelli P. (1999). A Multilevel Exploration of the Role of Interviewers in Survey Non-Response. Journal of the Royal Statistical Society, 162, 437-446.

Peytchev, A., R. Baxter and L. R. Carley-Baxter (in press). Not All Survey Effort is Equal: Reduction of Nonresponse Bias and Nonresponse Error. Public Opinion Quarterly.

Pollner M. (1998). The effects of interviewer gender in mental health interviews. Journal of Nervous & Mental Disease, 186, 369-73.


Puzone CA, Saltzman LE, Kresnow MJ, Thompson MP & Mercy JA. (2000). National trends in intimate partner homicide. Violence Against Women, 6, 409–26.


Rennison C & Rand M. (2003). Non-lethal intimate partner violence: women age 55 or older. Violence Against Women, 12, 1417-1428.


Robin RW, Chester B, Rasmussen JK, Jaranson JM & Goldman JK. (1997). Prevalence and Characteristics of Trauma and Post-Traumatic Stress Disorder in a Southwestern American Indian Community. American Journal of Psychiatry, 154, 1582-1588.

Sadler AG, Booth BM & Doebbeling BN. (2005). Gang and Multiple Rapes During Military Service: Health Consequences and Health Care. Journal of the American Medical Women’s Association, 60, 33-41

Sahr, R. Consumer Price Index (CPI) Conversion Factors 1800 to Estimated 2015 to Convert Dollars of 2005. (Revised January, 18, 2006). Available: http://oregonstate.edu/Dept/pol_sci/fac/sahr/cv2005.xls (Accessibility Verified January 23, 2006).


Singer E. (2002). The Use of Incentives to Reduce Nonresponse in Household Surveys. Pp. 163-178 in Survey Nonresponse, edited by R.M. Groves, D.A. Dillman, J.L. Eltinge, and R. J.A. Little. New York: Wiley.


Singer E & Bossarte RM. (2006). Incentives for survey participation: when are they coercive? Am J Prev Med 31, 411-418.


Sullivan CM & Cain D. (2004). Ethical and safety considerations when obtaining information from or about battered women for research purposes. Journal of Interpersonal Violence, 19, 603-18.


Teaster, P.A. (2002). A response to the abuse of vulnerable adults: the 2000 survey of state adult protective services. Washington, D.C.: National Center on Elder Abuse.


Thompson M, Arias I, Basile KC, & Desai S. (2002). The Association Between Childhood Physical and Sexual Victimization and Health Problems in Adulthood in a Nationally Representative Sample of Women. Journal of Interpersonal Violence, 17, 1115-1129.


Thornberry O, Massey J. (1998). Trends in United States Telephone Coverage Across Time and Subgroups. In R.M. Groves, P.P. Biemer, L.E. Lyberg, J.T. Massey, W.L. Nicholls, II, & J. Wakesberg (Eds.), Telephone Survey Methodology. New York: Wiley.


Tjaden P & Thoennes N. (1998). Prevalence, Incidence, and Consequences of Violence against Women: Findings from the National Violence Against Women Survey. U.S. Department of Justice, Office of Justice Programs, Washington, DC, Report No. NCJ 172837.

Tjaden P & Thoennes N. (1998). Stalking in America: Findings from the National Violence Against Women Survey: research brief. U.S. Department of Justice; 1998.


Tjaden P & Thoennes N. (2000). Full Report on the Prevalence, Incidence, and Consequences of Violence Against Women. NCJ Publication # 183781, Washington, DC: National Institute of Justice.

Tjaden P & Thoennes N. (2006). Extent, Nature, and Consequences of Rape Victimization: Findings From the National Violence Against Women Survey. U.S. Department of Justice, Office of Justice Programs, Washington, DC, Report No. NCJ 210346.

Traugott MW, Groves RM & Lepkowski J. (1987). Using Dual Frame Designs to Reduce Nonresponse in Telephone Surveys. Public Opinion Quarterly, 51, 522-539.


Tucker C, Brick JM, Meekins B, Morganstein D. (2004). Household Telephone Service and Usage Patterns in the U.S. in 2004. Proceedings of the Section on Survey Research Methods, American Statistical Association, pp. 4528-4534.


U.S. Bureau of Labor Statistics. http://www.dol.gov/dol/topic/statistics/index.htm


U.S. Census. http://www.census.gov/popest/national/asrh/NC-EST2004/NC-EST2004-01.xls


U.S. Department of Health and Human Services (DHHS). Healthy People 2010. 2nd ed. With Understanding and Improving Health and Objectives for Improving Health 2 vols. Washington, DC: U.S. Government Printing Office; 2000.


U.S. Department of Health and Human Services. Report from the Secretaries Task Force on Elder Abuse. Feb 1992. http://aspe.hhs.gov/daltcp/reports/elderab.htm


Vos T, Astbury J, Piers LS, Magnus A, Heenan M, Stanley L, Walker L & Webster K. (2006). Measuring the Impact of Intimate Partner Violence on the Health of Women in Victoria, Australia. Bulletin of the World Health Organization, 84, 9.

Waksberg J (1978). Sampling Methods for Random Digit Dialing. Journal of the American Statistical Association, 73, 40-46.


Watts C, Heise L, Ellsberg M & Moreno, G. (2001). Putting women first: ethical and safety recommendations for research on domestic violence against women. (Document WHO/EIP/GPE/01.1). Geneva: World Health Organization, Global Programme on Evidence for Health Policy.


Yu J & Cooper H. (1983). Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research, 20, 36-44.





