


U.S. Department of Justice


Office of Justice Programs


Bureau of Justice Statistics

Washington, D.C. 20531


MEMORANDUM


To: Shelly Wilkie Martinez

Office of Statistical and Science Policy

Office of Management and Budget


Through: Lynn Murray

Clearance Officer

Justice Management Division


William J. Sabol, Ph.D.

Acting Director

Bureau of Justice Statistics


From: Michael Planty

Chief, Victimization Statistics


Date: March 19, 2014


Re: BJS Request for OMB Clearance for a Pretest of a Mail Survey Instrument under the National Crime Victimization Survey (NCVS) Redesign Generic Clearance, OMB Number 1121-0325.


The Bureau of Justice Statistics (BJS) requests clearance to conduct a pretest of mail instruments under the OMB generic clearance agreement (OMB Number 1121-0325) for activities related to the National Crime Victimization Survey Redesign Research (NCVS-RR) program. BJS, in consultation with Westat under a cooperative agreement (Award 2010-NV-CX-K077, National Crime Victimization Survey on Sub-National Estimates), is conducting the next phase of research to develop a methodology for a low-cost alternative to the NCVS. The next phase of research tests an address-based sampling (ABS) design with only mail-based surveys to collect data for reliable local area estimates of criminal victimization.


In accordance with the generic agreement, BJS is submitting to OMB for clearance the materials for the next phase of the companion survey research. BJS wishes to use the approved generic clearance for a pretest to assess questionnaire instruments for this collection. The overall goal of the pretest is to assess the proposed instruments to ensure that respondents are willing and able to use the instruments to report victimizations within their households, with enough detail to classify incidents accurately into relatively broad categories. This request will be followed, if the pretest is successful, by a request for a larger field test to be conducted in 2014-2015.

Overview


Since 2008, BJS has initiated numerous research projects to assess and improve upon the core NCVS methodology. During 2009 BJS met with various stakeholders, including the Federal Committee on Statistical Methodology and representatives from state statistical analysis centers, state and local law enforcement agencies, the Office of Management and Budget, and Congressional staff to discuss the role of the NCVS. The discussions included the need for sub-national estimates and the challenges and potential methodologies for providing these estimates. The purpose of the current research is to develop and evaluate a cost effective sub-national companion survey of victimization.


In the first phase of research of the NCVS Companion Survey (CS), BJS attempted to use the existing NCVS instruments adapted to a computer-assisted telephone interview (CATI) environment using an address-based sample (ABS). Based on the results of this research we have concluded that it is extremely difficult, if not impossible, to replicate NCVS estimates of victimization rates using a low-cost data collection approach. The NCVS is a large and complex survey, with many potential sources of relative bias compared with low-cost alternative data collection approaches, including nonresponse, mode effects, and house effects in data collection and processing. It does not seem feasible to control for all of these differences in a low-cost vehicle, regardless of the sample design or data collection mode(s). NCVS estimates of victimization rates are very sensitive to many of these factors, so estimates may change substantially when even small deviations occur in the survey process. Truman and Planty (2012) describe how the victimization estimates in the core NCVS changed when the sample size increased and new interviewers were needed; Rand (2008) reviews some effects when the sampled geographic areas changed and the data collection software was revised. Complete details on the initial phase of research are provided in Appendix A (the report of the Pilot research findings).


In the next phase of research we have begun developing a low-cost, self-administered approach that can support sub-national estimates of crime victimization, using a less complex instrument than the current NCVS. The goal would be to generate a survey that could parallel NCVS and Uniform Crime Report (UCR) estimates over time, rather than replicate either of them, and could be used to assess whether local initiatives are correlated with changes in crime rates. Appendix B presents the rationale for the design to be tested and summarizes the development and testing of the two mail self-administered instruments.


Analytic Goals of the Instruments


Draft instruments were developed using content from the current NCVS as well as content from extant state and local area crime surveys. Two versions are being tested – one that collects details at the incident level, and one that collects information at the person level. Copies of the initial draft instruments were provided to OMB in a submission for cognitive testing. Based upon the testing to date, we have revised the instruments for both approaches; these are included as Appendix C. Each of these instruments would rely on a household-level respondent to report for all adult household members:



  • Abbreviated Person-Level Instrument. This first questionnaire would be an abbreviated instrument that departs considerably from the core NCVS. The goal is to develop a self-administered instrument that focuses on one person within a household at a time, and is sufficiently brief to increase the likelihood of survey cooperation. We call this instrument the Person-Level approach because it asks questions about each adult in the household. The strengths of this instrument are its simplicity, brevity, and ease of administration; its weakness is that resulting estimates would be only person-based, not incident-based.


  • More Detailed Incident-Level Instrument. The second instrument asks about victimization incidents, and associates them with household members. In this approach we use a detailed set of questions to collect data on the “most recent” property and personal crimes. This instrument is more complex than the Person-level, but much less complex than the NCVS-1 and NCVS-2. If successful, it would yield estimates similar to those from the core NCVS. The strength of this instrument is that it should yield a level of detail that is closer, but not identical, to that of the core NCVS; its weakness is that it is complex and may be difficult for respondents to self-administer.



Person Level Instrument. Estimates derived from the Person-level Instrument will represent the number of persons victimized at least once for each of the enumerated violent crimes (see below), and the number of households victimized by broad categories of property crime. These would be comparable to the prevalence estimates recently published for the NCVS (Lauritsen and Rezey, 2013).1 This version of the CS will not be able to produce an estimate of the number of times a person/household was victimized or the number of crime incidents, which are the more common estimates provided by the NCVS. For purposes of tracking trends or developing local policies, it should be sufficient to understand how different people/households are touched by victimization.

The person-level instrument includes questions on perceptions of community safety and the police, placed at either the beginning or the end of the instrument. These questions are discussed in a subsequent section (see the section headed “non-crime questions”).

The remainder of the questionnaire collects information on household experiences with crime. An initial section includes questions about household break-ins and theft of household property, including motor vehicles. The survey then shifts to asking about crimes experienced by each adult in the household. The questionnaire asks whether an adult in the household has experienced different types of crimes at least once during the 12-month reference period. Data will be collected for up to four adults in the household2. At the beginning of the section, the respondent is asked to summarize, in his/her own words, up to three victimizations that occurred against that adult within the last 12 months. These items are intended to give respondents an opportunity to tell their story. These data will also be used as qualitative information about the incidents.

After completing the summaries, the respondent is asked a series of questions to characterize the violent crimes experienced by adults in the household, approximating the following NCVS Type of Crime categories for violent crimes (an illustrative coding sketch follows this list):

1. Assault. Items 18 and 19 ask about being attacked or threatened. Aggravated and simple assault are distinguished by whether the incidents involved a weapon (18a,b, 19a,b)3 and whether there was an injury (18c).

2. Robbery. This is distinguished by whether the attack or threat involved stealing something (18d, 19c).

3. Rape and Sexual Assault. The items on the type of attack and the type of sexual assault are asked in Items 20 and 21.

4. Domestic and Intimate Partner Violence. Items 18e, 19d, 20d, 21d ask about the relationship between victim and offender. These can be used to classify the event into one of these two categories.
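
As a rough illustration of how these items could support the classification just described, the following sketch maps one adult's responses to the broad categories above. It is illustrative only: the field names (e.g., q18_attacked) are hypothetical placeholders for the item responses, and the final coding specifications would be developed after the pretest. Consistent with the non-hierarchical tabulation discussed below, an adult can be counted in more than one category.

    def classify_person_level(resp):
        """Return the set of broad violent-crime categories an adult is counted in."""
        categories = set()
        attacked = resp.get("q18_attacked", False)          # Item 18: attacked
        threatened = resp.get("q19_threatened", False)      # Item 19: threatened
        if attacked or threatened:
            weapon = resp.get("q18ab_weapon", False) or resp.get("q19ab_weapon", False)
            injury = resp.get("q18c_injury", False)
            # Aggravated vs. simple assault turns on weapon involvement or injury.
            categories.add("aggravated assault" if (weapon or injury) else "simple assault")
            # Robbery: the attack or threat involved stealing something (Items 18d, 19c).
            if resp.get("q18d_theft", False) or resp.get("q19c_theft", False):
                categories.add("robbery")
        # Rape and sexual assault (Items 20 and 21).
        if resp.get("q20_sexual_attack", False) or resp.get("q21_sexual_assault", False):
            categories.add("rape/sexual assault")
        # Domestic/intimate partner violence uses the victim-offender relationship
        # items (18e, 19d, 20d, 21d) attached to any violent crime reported above.
        if categories and resp.get("offender_intimate_or_family", False):
            categories.add("domestic/intimate partner violence")
        return categories

    # Example: an adult who was attacked with a weapon and whose attacker also stole
    # something would be counted under both aggravated assault and robbery.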



Because the instrument is at a person level and we want to keep it relatively short, it is not possible to replicate more detailed NCVS type of crime criteria. The goal is to approximate the important distinctions. Appendix B presents more detail on the estimates that the two instruments would support.

Further, the CS will not impose an NCVS-like hierarchy when classifying crimes. For example, a respondent who was attacked and raped should answer both the assault (Item 18) and sexual assault (item 20) questions. The tabulation on the CS would include this person as a victim of both sexual assault and assault. On the NCVS, the sexual assault would take priority.4 The NCVS and CS rules are more equivalent when aggregating to broader categories such as violent and property crimes.

There are also items designed to collect data on property crimes, including Burglary (Item 9), Motor Vehicle Theft (Item 11), and Larceny (Item 12). Similar to the NCVS, the intent is to count these crimes for the entire household, rather than at a person level. Items 23 – 26 ask about identity theft and credit card fraud.

Two additional pieces of information will be collected. For property crimes, the value of the stolen items is collected (Items 9g, 11a, 12b, 22b) as a measure of severity. For example, the value of property stolen reported in the Adult 1 questions will be combined with the values reported in the questions for the other adults to estimate the total value for the household. For personal crimes, a question is included on whether any of the crimes were reported to the police (Items 18f, 19e, 20e, 21e, 22c). Local jurisdictions are particularly interested in incidents that may not be reported to the police. These data will be aggregated at a household level for all crimes, since the question does not distinguish between violent and property crimes (e.g., % of households that reported at least one crime to the police).



Detailed Incident-level Instrument. Estimates from this instrument will include counts of the number of persons/households that have been victims of different types of crimes, which may be compared with those from the person-level instrument. The incident-level instrument will also support estimates of the number of victimizations by type and victimization rates as defined by the NCVS. It should also be possible to compute a prevalence rate, as with the person/household-level instrument above.

This instrument begins by collecting information about the household and its members, and also includes the same “non-crime questions” as the person-level instrument. The victimization questions collect data on any incidents involving an adult household member. These items are divided into questions on Violent Crime and Thefts/Break-ins. The Violent Crime section starts with a series of screening questions asking if anyone in the household has been a victim. The next four sections ask about the details of the four most recent Violent Crime incidents that occurred against a household member. Each section asks for the details needed to classify and describe the incident, beginning with the month/year of the incident and a summary of what happened. The remaining items ask for the details needed to classify the incident into one of the following major violent crime categories (an illustrative coding sketch follows this list):

1. Rape and Sexual Assault. The victim was confronted (Item 9) and there was a sexual assault of some type (Items 16 – 19).

2. Robbery. The victim was confronted (Item 9), the perpetrator attacked, attempted to attack, or threatened the victim with harm (Items 13 – 15), and something was stolen (Items 25, 26).

3. Assault. The victim was confronted (Item 9) and the perpetrator attacked, attempted to attack, or threatened the victim with harm (Items 13 – 15). Simple assault is when there is no injury (Item 20) and no weapon was involved (Item 12); aggravated assault is when there is an injury or a weapon was involved.

4. Domestic and Intimate Partner Violence. This uses Item 11 to classify incidents into these groups. This can also be narrowed down to ‘Serious’ incidents, as defined by the NCVS.
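
The sketch below (illustrative only, with hypothetical field names) shows how a single incident report could be assigned one of these categories. The ordering shown, with rape/sexual assault taking precedence over robbery and robbery over assault, is an assumption consistent with the NCVS hierarchy described elsewhere in this memo; the final coding specifications would be developed after the pretest.

    def classify_incident(inc):
        """Return the single major violent-crime category for one incident, or None."""
        confronted = inc.get("q9_confronted", False)                   # Item 9
        attack_or_threat = inc.get("q13_15_attack_or_threat", False)   # Items 13 - 15
        sexual_assault = inc.get("q16_19_sexual_assault", False)       # Items 16 - 19
        something_stolen = inc.get("q25_26_theft", False)              # Items 25, 26
        weapon = inc.get("q12_weapon", False)                          # Item 12
        injury = inc.get("q20_injury", False)                          # Item 20
        if not confronted:
            return None
        if sexual_assault:
            return "rape/sexual assault"
        if attack_or_threat and something_stolen:
            return "robbery"
        if attack_or_threat:
            return "aggravated assault" if (weapon or injury) else "simple assault"
        return None

    # Item 11 (victim-offender relationship) would additionally flag an incident as
    # domestic or intimate partner violence, independent of the category above.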



Respondents are asked to provide these details for up to two incidents that occurred against members of the household. There is space to enter two more incidents by providing the month/year of occurrence and a detailed summary (e.g., Item 53). If there are more than four incidents, the respondent is asked to provide the number of additional incidents.

There are a few questions included in the Violent Crime section that will be used to describe the event in more detail: (1) whether the police were informed and, if so, what they did (Items 23, 24); (2) the dollar value of anything that was stolen; and (3) the location of the incident.

After the Violent Crime sections, the respondent is asked about thefts and break-ins, structured similarly to the Violent Crime Section. It begins with a series of screening questions asking about theft of property, break-ins and car thefts. The next four sections ask about the details of the four most recent incidents. Respondents are asked not to report details for a property crime if the incident was already reported in the violent crime section. This will allow imposing the same hierarchy used by the NCVS.5

Each section begins by asking for the month/year of the incident and for a summary of what happened. The remaining items ask about details needed to classify the incident into one of the major property crime categories of:

1. Burglary. The perpetrator broke into the home or tried to break in (Items 71, 72) and there was evidence of a break-in (Item 73).

2. Motor Vehicle Theft. A motor vehicle was stolen or someone tried to steal a motor vehicle (Items 76, 77).

3. Larceny. Something was stolen or someone tried to steal something (Items 74, 75).

There are a few questions included in these sections that will be used to describe the event in more detail, including whether the police were informed (Item 79) and the location of the crime (Item 70). This section provides space for the collection of all details for up to four incidents. At the end of the fourth incident, respondents are asked to provide the number of any additional incidents that may have occurred against the household.

The final section collects data on vandalism and identity theft/credit card fraud. These collect the total number of incidents that occurred for each type of crime. This section also collects data on household income.



“Non-crime” questions. Both questionnaire versions include questions on perceptions of nuisance crimes and disorder, fear and safety, and police performance and legitimacy. These “non-crime” indicators are independent from police statistics and provide a perspective from the community. Following is an overview of these indicators and examples of each:

Nuisance and disorder: “Neighborhood residents are concerned about a broad range of problems, including traffic enforcement, illegal dumping, building abandonment, and teenage loitering (Skogan and Hartnett, 1997). One aspect of this new and larger police agenda is an untidy bundle of problems that I have labeled “disorder.” For many purposes, it is useful to think of these problems as falling into two general classes: social and physical (Skogan 1996).”


  • On the whole, is this neighborhood a good place to live?

  • People around here are willing to help their neighbors.

  • This is a close knit neighborhood.

  • People in this neighborhood share the same values.

  • How much of a problem is litter, broken glass, or trash on sidewalks/streets?

  • Do you think public drinking is a problem in your neighborhood?

  • Do you think trash and junk on front lawns and public areas are problems in your neighborhood?

  • Do you think aggressive panhandling is a problem in your neighborhood?



Fear and safety: Research on fear of crime conceptualizes it in one of four ways. Three definitions are cognitive in nature, reflecting people’s concern about crime, their assessments of personal risk of victimization, and the perceived threat of crime in their environment. The fourth definition of fear is behavioral and defines fear by the things people do in response to crime, such as avoiding activities and areas, restricting behaviors, and increasing home and self-protection measures.


  • How much of a problem is crime in your neighborhood?

  • People in this neighborhood can be trusted.

  • People in this neighborhood generally get along with each other.

  • How fearful are you of being a victim?

  • To what extent are you fearful that someone will physically attack you?

  • To what extent are you fearful that someone will rob you?

  • To what extent are you fearful that someone will break into your home?

  • To what extent are you fearful that someone will damage your home or property?

  • To what extent are you fearful that someone will break into your car?

  • To what extent are you fearful that someone will hurt your kids?

  • % of homes with home alarms

  • % of homes with firearm/guns for protection

  • Active neighborhood watch program



Citizens’ perceptions of police performance and legitimacy: These indicators include measures of police performance, production, quality of police service, visibility of policing, police-citizen contacts, and satisfaction with police and police encounters.


  • % of crime reported to police (total, serious, firearm)

  • Number of arrests to incidents

  • Clearance rate

  • Do the police do a good job dealing with problems that concern people?

  • When you call 911, does help arrive quickly?

  • How effective is the police department in dealing with neighborhood problems?

  • Involuntary, police-initiated contact with the police?

    • Was the stop legitimate and did the police behave appropriately?

  • Voluntary contact with the police?

    • How satisfied were you with the police efforts?


We have included a selection of such questions for two reasons: (1) to respond to widespread interest, especially among local jurisdictions, in such measures in concert with victimization measures; and (2) to reduce the potential for “topic salience bias” in the mail questionnaires. Topic salience bias would occur if, in this instance, households experiencing a crime were more likely to return the survey than those that had not. Including questions salient to a wider audience should reduce the potential for this kind of bias.


A concern about including such questions is that they might have an unintended effect on reporting victimizations. To assess this threat, the pretest will include a split-ballot experiment; in each instrument, half of the surveys will place the non-crime items at the beginning and half at the end.


In the interests of burden and cost, we have limited the number of “non-crime” questions to one page on the mail instruments. If the pretest is successful, these questions could be tailored to the needs and interests of each local jurisdiction.



Instrument Pretest


Cognitive testing of the instruments began in July 2013 and concluded in December 2013. Appendix B summarizes the results of this testing. We plan now to conduct a small-scale national pretest (the current request for this generic clearance approval) in advance of a larger field test. The goals of the pretest are to:


  • assess the unit and item response rates;

  • determine what level of detail respondents provide when not prompted by an interviewer; and

  • investigate reasons for incomplete instruments or improperly completed questions.


We plan to select a simple random sample of 2,500 addresses in the Continental U.S. and mail half of the sample an incident-level instrument and half a person-level instrument. Each address will receive a Wave 1 questionnaire with a cover letter (Appendix D) and a $2 incentive, followed by a postcard reminder. Those not responding 20 days after the wave 1 questionnaire will be mailed a wave 2 questionnaire. A final Federal Express mailing will be sent to nonrespondents 10 days after wave 2. Based on this methodology, we are assuming that we will achieve a 50 percent response rate, and that 12 percent of addresses will be vacant or nonresidential. We are also estimating that about 20 percent of responding households will report a victimization. Based on these assumptions we project that 110 completed incident-level surveys and 110 person-level surveys will be received from households reporting a victimization.
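
These yield assumptions can be checked with simple arithmetic; the sketch below (illustrative only, using the rates stated above) reproduces the projected counts, including the 550 completed surveys per instrument version that underlie the burden estimates later in this memo.

    addresses = 2500
    vacant_or_nonresidential = 0.12
    response_rate = 0.50
    victimization_rate = 0.20        # responding households reporting a victimization

    eligible = addresses * (1 - vacant_or_nonresidential)       # about 2,200 occupied addresses
    completes = eligible * response_rate                        # about 1,100 completed questionnaires
    per_instrument = completes / 2                              # about 550 per instrument version
    with_victimization = per_instrument * victimization_rate    # about 110 per version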


In order to assess the impact of including the non-crime questions in the NCVS Companion Survey, we plan an experiment for the pretest. Half of the instruments will include the non-crime questions at the beginning, while the other half will present them at the end of the instrument. The goal will be to assess the effect of question placement on (1) the response rate, (2) the estimates of crime, and (3) the distribution of responses to the non-crime questions. Research questions include, for example:


  • Whether placing the non-crime questions at the beginning helps improve response rates by engaging respondents.

  • Whether placing non-crime questions at the end results in more negative responses regarding perceptions of safety and law enforcement.

  • Whether placing non-crime questions at the beginning affects reports of crime.



Survey Instruments


The draft instruments are included in Appendix C. There are two versions of each instrument. In one version the non-crime questions (about safety and police sentiment) are presented first, before questions about crime. In a second version the non-crime questions are presented toward the end of the survey, after the questions about crime. This experiment will allow us to assess the impact of the non-crime questions on response rate and also on estimates of crime.



Debriefing Interviews


We plan to conduct about 40 debriefing interviews with respondents to the mail pretest. As completed pretest surveys are returned, the research team will review the data to identify households to contact for a telephone debriefing. The purpose of the debriefing will be to explore the accuracy and completeness of responses captured by the instruments, as well as to explore further the usability of the instrument forms. Each household will receive $40 remuneration for their participation.


The goal of the debriefing interviews is to identify and correct features of the instruments that result in incomplete responses or insufficient detail to generate estimates. The debriefings will focus on how respondents completed the instruments independently and whether there were challenges in understanding or answering any of the questions without interviewer support. We will also explore whether the captured responses accurately portray victimization. The goal of this exploration will be to assess whether data generated by the instruments provide a valid picture of respondents’ experiences. Finally, we plan to revisit some of the same objectives of the cognitive interviews, since the experience of respondents in the field may differ from that of respondents in a laboratory setting.


The major objectives of the pretest debriefing interviews include:


  1. Validity of questions

    • Has the instrument been able to accurately portray the experiences of the respondent?

  2. Usability

    • Was the respondent able to easily navigate and understand the survey questions?

  3. Questions to support Type of Crime (TOC) coding

    • Is the respondent’s understanding of these questions congruent with what is needed for crime coding?

  4. Proxy response

    • In the Incident Approach, do respondents think about other household members or focus on themselves?

    • Does the Person Approach work better for focusing on others?

    • Are there certain types of crimes more likely to be missed?

  5. Distinguishing between violent crime and thefts/break-ins in the Incident Approach

    • Are respondents able to associate crimes with one broad category or the other?

  6. Open-ended questions

    • What do respondents think we are asking for in the incident descriptions? How could it be better focused or get them to write more?


Debriefing Protocol. Once the research team identifies completed surveys meriting follow-up, we will contact selected households by telephone to conduct the debriefing calls. Target households will be informed that they will receive $40 to complete a 45-minute telephone interview. A draft of the debriefing protocol is provided in Appendix E and an interview recruiting script is presented in Appendix F.



Burden Hours for Pretest and Debriefing Interviews


The estimated burden is indicated below.



Data collection effort | Number of responses per respondent | Anticipated number of respondents | Time per response | Total time across all respondents

Incident Level Mail Survey | 1 | 550 | 8.8 mins | 80.7 hours

Person Level Mail Survey | 1 | 550 | 14.3 mins | 131.1 hours

Debriefing interview | 1 | 40 | 45 mins | 30 hours

Total | | | | 241.8 hours
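
The totals in the table follow directly from the per-response times; the brief sketch below (illustrative only) reproduces the arithmetic, converting minutes per response to total hours.

    rows = {
        "Incident Level Mail Survey": (550, 8.8),
        "Person Level Mail Survey":   (550, 14.3),
        "Debriefing interview":       (40, 45.0),
    }
    for name, (n, minutes) in rows.items():
        print(name, round(n * minutes / 60, 1))            # 80.7, 131.1, and 30.0 hours
    total_minutes = sum(n * minutes for n, minutes in rows.values())
    print(round(total_minutes / 60, 1))                     # 241.8 total burden hours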



The NCVS generic clearance allocated a predetermined combination of sample cases and burden hours that could be used for NCVS redesign efforts. The current sample size and burden hours fall within the remaining allocation. Following an evaluation of the pretest described here, BJS will request a separate clearance to conduct a full test of the instruments in 2014-2016.



Field Test


If the pretest yields a reasonable response rate and it appears that respondents are able to report victimizations accurately enough to classify them at a relatively high level of aggregation, we plan a full-scale field test of the mail survey. This will require a separate, full-scale clearance request. Depending on the results of the pretest, we will test either one or both of the mail instruments. The field test will include two cross-sectional waves of mail questionnaires 12 months apart, following the methodology described for the pretest. The sample will be drawn from about 40 large MSAs, and each wave will comprise about 200,000 addresses; a proportion of the sample will be included in both waves, that is, re-surveyed in the second wave. We anticipate concentrating a disproportionate amount of sample in one MSA, allocated to geographic substrata to be determined.


Field test data will be used to develop MSA-level estimates of victimization. The estimates will be compared across instruments (if both are used) and with the core NCVS and crime report data. The goal is to determine whether cross-MSA variation in the estimates is similar to that observed in other data, rather than to compare victimization levels. Similarly, we will examine the trend across waves to see whether the magnitude and direction of trends are similar to those observed in other sources. We will also use the data from the oversampled MSA to demonstrate how the design can assess variation in victimization rates and citizen attitudes, in both levels and trends, within a jurisdiction.


If the field test proves successful, we will have a tested, relatively low-cost survey methodology that can be used by BJS, local jurisdictions, or others to assess trends in small areas or to compare victimization rates and citizen attitudes across jurisdictions.


Finally, we will explore whether data generated by such a survey can be combined with core NCVS data to produce blended small area estimates, or can inform models for producing such estimates when applied to core NCVS data.

1 Lauritsen, J.L. and M. L. Rezey (2013) Measuring the Prevalence of Crime with the National Crime Victimization Survey. US Department of Justice, Office of Justice Programs, Bureau of Justice Statistics, NCJ 241656.


2 Based on data from the American Community Survey we estimate that only 0.6 percent of households have 5 or more adults. This means that asking about 4 adults will cover the vast majority of households. Note that estimates on household size were used to help guide the calculations on average burden of the person level instrument.

3 Question numbers refer to Version 1 of the instrument.

4 It will be possible to derive estimates from the NCVS which relax the hierarchy and mimic the logic of the CS.

5 On the NCVS, all violent crimes take priority over property crimes.


