National Crime Victimization Survey (NCVS) Companion Survey (CS) Field Test

OMB: 1121-0351

NCVS CS Field Test - OMB Submission – Statement A


A. Justification


A1. Necessity of Information


The Bureau of Justice Statistics (BJS), of the U.S. Department of Justice, requests clearance to conduct a large-scale field test for activities related to the National Crime Victimization Survey Redesign Research (NCVS-RR) program. BJS, in consultation with Westat under a cooperative agreement (Award 2010-NV-CX-K077, National Crime Victimization Survey on Sub-National Estimates), has planned methodological research to develop a companion survey to the NCVS data collection in order to support lower-cost, sub-national estimates. This activity falls under the authority of the Omnibus Crime Control and Safe Streets Act of 1968, under which BJS is charged to “conduct or support research relating to methods of gathering or analyzing justice statistics” (Section 302(c)(12)). The next phase of research tests an address-based sampling (ABS) design using mail-based surveys only to collect data for reliable local area estimates of criminal victimization.


Overview


Since 2008, BJS has initiated numerous research projects to assess and improve upon the core NCVS methodology. During 2009, BJS met with various stakeholders, including the Federal Committee on Statistical Methodology and representatives from state statistical analysis centers, state and local law enforcement agencies, the Office of Management and Budget, and Congressional staff to discuss the role of the NCVS. The discussions included the need for sub-national estimates and the challenges and potential methodologies for providing these estimates. The purpose of the current research is to develop and evaluate a cost effective sub-national companion survey of victimization.


In the first phase of research on the NCVS Companion Survey (CS), BJS attempted to use the existing NCVS instruments adapted to a computer-assisted telephone interview (CATI) environment with an address-based sample (ABS). Based on the results of this research, we concluded that it is extremely difficult, if not impossible, to replicate NCVS estimates of victimization rates using a low-cost data collection approach. The NCVS is a large and complex survey, with many potential sources of relative bias compared with low-cost alternative data collection approaches, including nonresponse, mode effects, and house effects in data collection and processing. It does not seem feasible to control for all of these differences in a low-cost vehicle, regardless of the sample design or data collection mode(s). NCVS estimates of victimization rates are very sensitive to many of these factors, so estimates may change substantially when even small deviations occur in the survey process. Truman and Planty (2012) describe how victimization estimates in the core NCVS changed when the sample size increased and new interviewers were needed; Rand (2008) reviews some effects observed when the sampled geographic areas changed and the data collection software was revised.


In the next phase of research, we have begun developing a low-cost, self-administered approach that can support sub-national estimates of crime victimization, using a less complex instrument than the current NCVS. The goal of this Companion Survey Field Test is to generate a survey that could parallel NCVS and Uniform Crime Report (UCR) estimates over time, rather than replicate either of them, and could be used to assess whether local initiatives are correlated with changes in crime rates. A secondary goal is to assess change over time: the Field Test will be administered over two years, with a cross-sectional ABS sample in 2015 and a second ABS sample in 2016 (in 2016 we will select an independent cross-sectional sample and will also subsample some 2015 addresses to receive a follow-up survey; see the sketch below). The rationale for collecting data in two years is to assess the ability of the instruments to detect change over time. An additional feature of the surveys being tested is a set of questions on perceptions of neighborhood safety, fear of crime, and police effectiveness, which would allow the survey to be used to assess changes in these perceptions as well. This information is not currently available from the NCVS. Supporting Statement B provides more detail on the proposed methodology and a more in-depth discussion of the research goals of the CS Field Test.
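For illustration only, the following Python sketch shows the two-part structure of the 2016 sample described above: a fresh cross-sectional draw from the address frame plus a follow-up subsample of 2015 addresses. The 25-percent follow-up fraction comes from the design note in Table 1 (Section A12); the frame and sample sizes are hypothetical, not the actual ABS procedure.

```python
import random

def draw_year2_sample(frame, year1_sample, n_fresh, followup_fraction=0.25):
    """Illustrative 2016 sample: an independent cross-sectional draw from
    the address frame plus a follow-up subsample of the 2015 addresses.

    The 25% follow-up fraction is taken from the Table 1 design note;
    everything else is a hypothetical stand-in for the real ABS draw.
    """
    fresh = random.sample(frame, n_fresh)  # independent cross-sectional sample
    n_followup = int(followup_fraction * len(year1_sample))
    followup = random.sample(year1_sample, n_followup)  # 2015 overlap subsample
    return fresh, followup

# Toy usage with a fake address frame.
frame = [f"address-{i}" for i in range(1000)]
year1 = random.sample(frame, 200)
fresh, followup = draw_year2_sample(frame, year1, n_fresh=200)
print(len(fresh), len(followup))  # 200 50
```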


A2. Needs and Uses


The NCVS provides the largest national forum for victims to describe the characteristics and consequences of victimization. Since the inception of the NCVS, there has been demand for data at a sub-national level. The three major reviews of the NCVS program (Penick and Owens, 1976; Biderman et al., 1986; Groves and Cork, 2008) all point to the demand among local criminal justice administrators for empirical information to shape policy. In the early years, a series of surveys was conducted in cities (e.g., Hindelang et al., 1978). This included surveys in eight “Impact” cities to assist in evaluating crime prevention programs in those areas. These surveys were conducted outside of the regular NCVS data collection system with designs that differed from the national survey. For example, they used a 12-month reference period and were conducted as one-time surveys over relatively short periods of time. These surveys were not continued, partly because variation in implementation across cities seemed to confound analysts' ability to compare rates across areas (Penick and Owens, 1976).


Since these early years, the demand for local area estimates has remained strong. A number of states have conducted their own surveys using a mailed paper questionnaire or telephone interviews (e.g., Giblin, 2003; Haddon and Christenson, 2005). BJS has tried to meet the demand for information in several different ways. One was to provide both technical assistance and software for conducting victimization surveys. A second was to publish MSA-level estimates for 20 locations using the current sample design (e.g., Lauritsen and Schaum, 2005). In both cases, these efforts found interested audiences; however, they have not been widely used to shape local policy. The current Request for Applications (RFA) presents an opportunity to significantly advance the NCVS program's capability for producing sub-national data by developing and evaluating “a cost effective sub-national companion survey of victimization” (RFA, p. 4). The idea is to administer a companion survey (CS) in areas that include NCVS data collection (e.g., MSAs). The CS and the NCVS would be blended together to produce estimates with acceptable levels of precision for key statistics. Ideally, this design would form the basis for developing and generating small area (or synthetic) estimates for regions that are not covered by a CS. As mentioned in the RFA, an alternative approach, expanding the NCVS sample in targeted areas, is also being investigated by the Census Bureau. Relative to expanding the sample, an important advantage of the CS methodology is its flexibility: the CS can be scaled and administered to fit the funding environment, as well as shaped to meet user needs without altering the core NCVS. This flexibility has been proposed as a highly desirable feature of a redesigned NCVS (Groves and Cork, 2008; Cantor and Lynch, 2007).


If the Field Test concludes that a low-cost approach is viable for producing sub-national estimates, BJS, along with State and local criminal justice agencies, can use the approach to assess change at the State or local level.


A3. Use of Technology


Since the goal of the CS study is to generate a low-cost option for assessing change at the local level, we have opted to reduce the reliance on technology in the design of the Field Test. The self-administered survey will be conducted using mailed paper instruments. The rationale is that we want a design that can be easily implemented by local jurisdictions. One option for incorporating technology into the CS administration would be to offer a concurrent web-based survey option. However, this would increase costs and complexity (both operational and analytic), and offering a concurrent web option has been found to depress response rates compared with a mail-based survey alone (Millar and Dillman, 2011).


At this time, we plan to place automated call prompts using Interactive Voice Response (IVR) technology to non-responding households for which a telephone number is available. IVR technology is generally inexpensive, and this would be a realistic option for local jurisdictions opting to launch their own surveys. We also plan to use scannable forms and scanning technology for capturing the data. This may not be an option for local jurisdictions, so final instruments will be made available in both PDF and Word formats so that jurisdictions can adapt them for their own use.


A4. Efforts to Identify Duplication


To date, we have identified no studies that duplicate the Companion Survey goal of creating a standardized, low-cost method of estimating victimization at the sub-national level. The NCVS is currently the only tool for collecting national victimization data on crimes not reported to the police. Two other databases, the Uniform Crime Reports (UCR) and the National Incident-Based Reporting System (NIBRS), maintain data on crimes reported to the police. The UCR is an FBI-maintained database tracking murder, robbery, rape, aggravated assault, burglary, theft, vehicle theft, and arson; local law enforcement agencies report UCR data to the FBI voluntarily. There is variability across localities, however, in the percentage of crimes that are reported to the police, and also in how those crimes are classified by the law enforcement agencies (Rosenfeld, 2007). The NIBRS was developed to replace the UCR, expanding the types of crimes tracked and addressing some of the limitations of the UCR approach; however, implementation of the NIBRS has been slow. Neither the UCR nor the NIBRS is able to provide overall estimates of victimization because these databases only include incidents reported to the police and uploaded to the databases. Findings from the NCVS suggest that only about half of violent victimizations are reported to the police (Langton et al., 2012), which means that the UCR data are not ideal for generating estimates of overall victimization in the U.S.


BJS has investigated other methods for producing victimization estimates at the sub-national level in the past. These have included local area oversampling in the NCVS as well as modeling efforts to support small area estimation. Oversampling within the core NCVS is very expensive, and while BJS has hope for small-area estimation (SAE) in the future, there are many sub-national areas that are not well covered by the NCVS and so would not be candidates for SAE modeling.


In addition to the BJS investigations, some States and localities have made their own efforts to produce victimization rates at the local level. A recent review conducted for BJS by the Justice Research and Statistics Association (JRSA; Orchowsky, Trask, and Stabile, 2014) identified 25 data collection efforts conducted by 14 States over the past dozen years. Some of these surveys were conducted using BJS grant funding, while others were funded at the local level. All of the reviewed instruments differ, their data collection methods vary, and it is unclear what testing was completed to develop the instruments and methodologies. The goal of the NCVS Companion Survey is to provide a standard data collection strategy and instrumentation that would include space for questions tailored to the locality. Currently no such instruments are available.


A5. Efforts to Minimize Burden


All materials that are provided to the respondent have been designed to be easy to use and to read. The written materials (e.g., advance letters) have been written to be as short and direct as possible. Cognitive interviews and pretest debriefing interviews have been completed to ensure questions are easily understood by respondents and can be answered with a minimal amount of effort. In addition, the instruments include skip patterns so that respondents only provide detailed information if a victimization has occurred.


A6. Consequences of Less Frequent Collection


The Companion Survey Field Test is planned as two collections, one in 2015 and one in 2016. The results will be used to make recommendations on whether the proposed methodology might be used by BJS and local jurisdictions to support local area estimates.


A7. Special Circumstances Influencing Collection


These data will be collected in a manner consistent with the guidelines in 5 CFR 1320.6.


A8. Federal Register Publication and Outside Consultation


The research under this clearance is consistent with the guidelines in 5 CFR 1320.6. The 60- and 30-day notices for public comment were published in the Federal Register; no comments were received. Various methods and content experts have been consulted, including:


Dr. Mike Brick – Director, Westat Survey Methods Unit

[email protected]

301-294-2004


Dr. Sharon Lohr – Senior Statistician

[email protected]

301-738-3512


Dr. David Cantor – Senior Methodologist

[email protected]

301-294-2080


A9. Payment or Gift to Respondents


As has been documented elsewhere (e.g., Brick and Williams, 2013; Curtin et al., 2005), it is increasingly difficult to achieve high response rates in surveys. In some instances, incentives have been found to be cost neutral, as the price of the incentive is offset by the reduction in field time and contact attempts necessary to garner participation (RTI, 2002; U.S. Department of Health and Human Services, 2010).


Maximizing statistical power and coverage will be critical for the project, since fewer than 4 percent of respondents are expected to report a violent crime in the past 12 months. Young people and minorities, who consistently exhibit high nonresponse rates in household surveys, are at higher risk of victimization, which makes the risk of nonresponse bias relatively high for the critical estimates of this research. Several studies have found that incentives are particularly effective for minority and low-income groups (Singer, 2002), the same groups that are subject to higher risk of violent crime.
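To illustrate why precision is a concern for an outcome this rare, the short Python calculation below shows the approximate 95 percent confidence interval around a 4 percent victimization rate under simple random sampling. The sample sizes are illustrative, and the calculation ignores design effects from clustering and weighting, which would widen the intervals.

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.04  # roughly the expected rate of reporting a violent crime
for n in (1_000, 10_000, 50_000):
    print(f"n={n:>6,}: estimate = {p:.2f} +/- {ci_halfwidth(p, n):.4f}")
# n= 1,000: estimate = 0.04 +/- 0.0121
# n=10,000: estimate = 0.04 +/- 0.0038
# n=50,000: estimate = 0.04 +/- 0.0017
```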


An important goal of the Field Test is to inform design decisions for the final Companion Survey. Maximizing response rates will be necessary for comparing the results of the CS to the current NCVS, which has response rates in the high 80-percent range. Maximizing response rates for the proposed study will reduce the extent to which observed differences with the NCVS are due to nonresponse bias.


We propose providing a pre-paid $2 incentive in the initial mailing to all sampled addresses. In a meta-analysis of recent studies, Mercer et al. (2014) found that a $2 prepaid incentive can increase mail survey response rates by approximately 10 percentage points.
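As a rough, hypothetical illustration of the cost-neutrality argument in the first paragraph of this section, the sketch below compares cost per completed survey with and without a $2 prepaid incentive when nonrespondents receive one follow-up mailing. The package and follow-up costs and the 30 percent baseline response rate are invented for the example; only the 10-point lift is the Mercer et al. (2014) estimate.

```python
def cost_per_complete(n_mailed, response_rate, package_cost,
                      followup_cost, incentive=0.0):
    """Cost per completed survey when every nonrespondent gets one
    follow-up mailing. All dollar figures are hypothetical."""
    initial = n_mailed * (package_cost + incentive)
    followups = n_mailed * (1 - response_rate) * followup_cost
    return (initial + followups) / (n_mailed * response_rate)

# Invented inputs: $4 initial package, $4 follow-up, 30% baseline response.
# The 10-percentage-point lift is the Mercer et al. (2014) estimate.
print(cost_per_complete(10_000, 0.30, 4.00, 4.00))                  # ~22.67
print(cost_per_complete(10_000, 0.40, 4.00, 4.00, incentive=2.00))  # 21.00
```

Under these assumed numbers, the incentive roughly pays for itself through fewer follow-up mailings and more completed surveys; the actual tradeoff depends on the real cost structure.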


A10. Assurance of Confidentiality


All respondents will be given assurance that all responses will be protected as required under Title 42, United States Code, Section 3732. All respondents who participate in the survey will be presented with this information in the cover letter accompanying the questionnaire. BJS and Westat hold in confidence any information that could identify an individual according to Title 42, United States Code, Sections 3735 and 3789g. Victimization rates will be published in aggregate form.


As required under Title 42, United States Code, Section 3789g, BJS and its data collection agents will take all necessary steps to mask the identity of survey respondents, including suppression of demographic characteristics and other potentially identifying information, especially in situations in which cell sizes are small.
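As a minimal sketch of the kind of small-cell suppression described above, the Python function below masks any estimate based on fewer respondents than a disclosure threshold. The threshold of 10 is a hypothetical value for illustration, not a BJS rule.

```python
MIN_CELL_SIZE = 10  # hypothetical disclosure threshold, not a BJS standard

def suppress_small_cells(table):
    """Replace estimates based on too few respondents with a suppression flag.

    `table` maps a cell label (e.g., a demographic subgroup) to a tuple of
    (number of respondents, estimated victimization rate).
    """
    return {
        cell: (n, rate if n >= MIN_CELL_SIZE else "(suppressed)")
        for cell, (n, rate) in table.items()
    }

example = {"age 18-24": (212, 0.051), "age 65+": (7, 0.014)}
print(suppress_small_cells(example))
# {'age 18-24': (212, 0.051), 'age 65+': (7, '(suppressed)')}
```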


A11. Justification for Sensitive Questions


The Companion Survey asks about experiences that may be sensitive for some respondents (e.g., rape and sexual assault). Given the objective of the CS, which is to estimate the amount of victimization in the nation, such questions are unavoidable. All respondents have the option of refusing to answer the survey and may skip any question they are uncomfortable answering.


A12. Estimate of Hour Burden


The total estimated burden is 37,575 hours.


Burden hours reflect the relative rarity of crime and the need to screen a large number of people to obtain sufficient numbers of victims in each primary sampling unit (PSU) to enable comparisons across all conditions, across PSUs, and to the NCVS. The calculation of burden hours for the NCVS-CS Field Test is provided in Table 1.



Table 1. Burden hour calculations

Mail survey – Incident-Level version

  Sample               Responses per     Completed       Completed       Avg. mins.
                       sample address    surveys, 2015   surveys, 2016   per response
  Independent Sample   1                 37,575          37,575          10.5
  Overlap Sample       2                 12,525          12,525          10.5

  Version burden hours: Year 1 = 8,767.5; Year 2 = 8,767.5; total = 17,535
  Avg. minutes per non-victimized household: 9.5
  Avg. minutes per victimized household: 15.1

Mail survey – Person-Level version

  Sample               Responses per     Completed       Completed       Avg. mins.
                       sample address    surveys, 2015   surveys, 2016   per response
  Independent Sample   1                 37,575          37,575          12.0
  Overlap Sample       2                 12,525          12,525          12.0

  Version burden hours: Year 1 = 10,020.0; Year 2 = 10,020.0; total = 20,040
  Avg. minutes per non-victimized household: 11.6
  Avg. minutes per victimized household: 13.8

Total burden, both versions: 37,575 hours

Note: In 2015, a cross-sectional sample will be selected. In 2016, 25 percent of the 2015 sample will be sent a follow-up survey; the remainder of the 2016 sample will be independent of the 2015 sample. The Year 1 and Year 2 hours are combined totals for the independent and overlap samples within each instrument version.
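The hour figures in Table 1 follow directly from completed surveys times average minutes per response. The short Python check below, using only the numbers in the table, reproduces the totals.

```python
MINUTES_PER_HOUR = 60
YEARS = 2
completes_per_year = 37_575 + 12_525  # independent + overlap samples

def version_burden(avg_minutes):
    """Two-year burden hours for one instrument version (from Table 1)."""
    return completes_per_year * avg_minutes / MINUTES_PER_HOUR * YEARS

incident = version_burden(10.5)  # 17,535.0 hours
person = version_burden(12.0)    # 20,040.0 hours
print(incident, person, incident + person)  # 17535.0 20040.0 37575.0
```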


A13. Estimate of Respondent Cost Burden


Other than their time, there is no cost to respondents for participating in the NCVS-CS Field Test.


A14. Estimated Cost to Federal Government


The total cost of the NCVS-CS Field Test is estimated at $6,989,700.


A15. Reasons for Change in Burden


There is no change in burden, since the NCVS-CS Field Test is a new data collection.


A16. Project Schedule and Publication Plan



Pending approval from the Office of Management and Budget (OMB), the NCVS-CS Field Test data collection is scheduled to begin in September 2015 (see Appendix C – CS Field Test Timeline). Year 1 collection will close in November 2015; Year 2 data collection will begin in September 2016 and close in November 2016. Data will be reviewed and cleaned on a flow basis as they are received, and file creation will be completed within 2 months after data collection ends. Analysis and reporting for Year 1 will be conducted from February through April 2016 (see Part B, Section 2.4 for more information on data collection procedures); Year 2 data analysis and reporting will be conducted from February through April 2017. A final report, including recommendations for a self-administered survey kit, will be prepared from May through July 2017.


At the conclusion of the Field Test, a final report, National Crime Victimization Survey Companion Study, will be made available to the public on the BJS website in 2017. Additional methodological research papers may be issued by BJS and Westat staff as resources permit. The types of statistics that will be available in these reports include response rates; measures of productivity, such as offenses reported per respondent or household; and counts and rates for the purpose of evaluating changes in estimates due to methodology (see Part B, Section 2.4 for details). Data collected under a generic clearance will not be used to calculate substantive results or estimates for release outside the agency.


A17. Expiration Date Approval


The OMB control number and expiration date will be published on all forms given to respondents.


A18. Exceptions to the Certification Statement


There are no exceptions to the Certification Statement.

References


Biderman, A.D., D. Cantor, J.P. Lynch, and E. Martin (1986). Final report of research and development for the redesign of the National Crime Survey. Prepared for the Bureau of Justice Statistics. Washington, DC: Bureau of Social Science Research, Inc.

Brick, J.M., and Williams, D. (2013). Explaining rising nonresponse rates in cross-sectional surveys. ANNALS of the American Academy of Political and Social Science, 645, 36-59.

Cantor, D. and Lynch, J. (2007). Addressing the challenge of costs and error in victimization surveys: The potential of new technologies and methods. Pp. 281-302, In Hough, M. and Maxfield, M. (Eds) Surveying Crime in the 21st Century. New York: Criminal Justice Press.

Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69, 87–98.

Giblin, M. (2003). Measuring adult criminal victimization: Findings from the Anchorage Adult Criminal Victimization Survey. Report for the Alaska Justice Statistical Analysis Center to the Bureau of Justice Statistics. JC 0109.021. Anchorage: Justice Center, University of Alaska Anchorage.

Groves, R.M. and D.L. Cork (2008). Surveying Victims: Options for Conducting the National Crime Victimization Survey. Panel to Review the Programs of the Bureau of Justice Statistics. National Research Council, Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

Haddon, M. and J. Christenson (2005). Shedding light: 2004 Utah Crime Victimization Survey. Salt Lake City: Utah Commission on Criminal and Juvenile Justice, Research and Data Unit.

Hindelang, M., M.R. Gottfredson and J. Garofalo (1978). Victims of Personal Crime: An Empirical Foundation for a Theory of Personal Victimization. Cambridge, MA: Ballinger.

Langton, L., Berzofsky, M., Krebs, C., and Smiley-McDonald, H. (2012). Victimizations Not Reported to the Police, 2006-2010. Bureau of Justice Statistics Special Report, NCJ 238536. Washington, DC: U.S. Department of Justice, Office of Justice Programs.

Lauritsen, J.L. and R.J. Schaum (2005). Crime and victimization in the three largest metropolitan areas, 1980-98. NCJ 208075.

Mercer, A., Caporaso, A., Cantor, D., and Townsend, R. (2014). Monetary incentives and response rates in household surveys: How much gets you how much? Accepted for publication, Public Opinion Quarterly.

Millar, M.M. and Dillman, D.A. (2011). Improving response rates to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269.

Orchowsky, S., Trask, J., and Stabile, R. (2014). Justice Research and Statistics Association Special Report: Report on Characteristics of SAC Victimization Surveys. July 2014.

Penick, B.K.E. and M.E.B Owens (1976). Surveying Crime. Panel for the Evaluation of Crime Surveys. Committee on National Statistics, Academy of Mathematical and Physical Sciences. Washington, DC: National Academy of Sciences.

Rand, M.R. (2008). Criminal victimization, 2007. Tech. Rep. NCJ 231327, Bureau of Justice Statistics, Washington, DC.

Rosenfeld, R. (2007). Explaining the divergence between UCR and NCVS aggravated assault trends. In J.P. Lynch and L.A. Addington (Eds.), Understanding Crime Statistics: Revisiting the Divergence of the NCVS and UCR (pp. 251-268). Cambridge: Cambridge University Press.

RTI (2002). Report on Incentive Experiments in the 2001 National Household Survey on Drug Abuse. July 2002.

Singer, E. (2002). The use of incentives to reduce nonresponse in household surveys. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp.163–177). Chichester: Wiley.

Truman, J.L. and Planty, M. (2012). Criminal Victimization, 2011. NCJ 239437. Washington, DC: Bureau of Justice Statistics, http://bjs.ojp.usdoj.gov/content/pub/pdf/cv11.pdf.

U.S. Department of Health and Human Services (2010). Report on a Respondent Payment Experiment in the Medical Expenditure Panel Survey. October 2010.




