
National Crime Victimization Survey (NCVS) Companion Survey (CS) Field Test

OMB: 1121-0351


NCVS CS Field Test - OMB Submission – Statement B


B. Collections of Information Employing Statistical Methods


B1. Respondent Universe and Sampling Frame


The overall goal of the study is to test whether a lower-cost data collection procedure can produce reliable crime rate statistics at local levels, and to test which questionnaire is best suited for this purpose. Local areas in the proposed NCVS Companion Study include Core-Based Statistical Areas (CBSAs) (OMB, 2013)1 and areas within CBSAs. The population of interest for the Field Test consists of all households in the 40 largest CBSAs in the US, where CBSA size is measured by number of households. There will be particular focus on three of the largest CBSAs: Chicago, Los Angeles, and Philadelphia,2 and these three CBSAs will be sampled at a higher rate than the other 37 CBSAs. CBSAs were chosen as the geographical units so that comparisons can be made with CBSA crime rates from the existing NCVS, which is conducted by the Census Bureau for BJS. The sample will be an address-based sample (ABS). The sampling frame will be the USPS list of addresses in each of the 40 CBSAs and will be obtained from a vendor.

Stratification

The three large CBSAs will be stratified according to the availability of crime statistics for local areas. The major strata will be the central city and the remainder of the CBSA, as crime rate data are available from police departments for the central cities, but not necessarily for the entire CBSA. (Note that the central city here is not necessarily the same as in the OMB definition for the CBSA.) Within the central city, geographic subareas will be further stratified into low, medium, and high crime strata using police department data on crime rates. The geographic subareas will be defined by the level of geography for which crime statistics are available. For example, in Chicago crime rates from the Chicago Police Department are available for the 77 community areas comprising the central city, where community areas are defined by groups of census tracts. Using these data we can stratify the central city into Low, Medium, and High crime strata (as was done for the NCVS Pilot Study in Chicago) and treat the area outside the central city as a Remainder stratum. This allows control over the sample sizes for the subareas so they can be compared with adequate precision. For the Philadelphia and Los Angeles CBSAs, we would create Low, Medium, and High crime strata within the central city stratum using either available neighborhood crime statistics or raw crime data that can be geocoded to a Census geography, such as tract.

Within the remaining 37 CBSAs there will be no stratification, as we will only be producing estimates for the entire CBSA.

Sample Sizes

The target sample size is 7,500 completed interviews in each of the three large CBSAs chosen for subarea estimates and 2,100 completed interviews in each of the remaining 37 CBSAs, for a total of 100,200 completed interviews. The initial sample size of addresses needed to obtain this is approximately 225,170, assuming a response rate of 50% and a vacancy rate of 11%. The initial sample would be 16,854 addresses in each of the three large CBSAs and 4,720 addresses in each of the remaining CBSAs. The basis for our response rate assumption is the NCVS CS Pretest, which was based on a representative sample of 2,500 addresses across the US. The pretest obtained an overall response rate of 50.4% (54.5% using the CASRO method) using a mail questionnaire and no telephone follow-up for nonresponse.
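
The arithmetic behind these address counts can be checked with a short script. This is a minimal sketch using the response and vacancy rates stated above; exact figures in the text may differ slightly because of rounding conventions.

```python
# Yield arithmetic for the initial address sample: completes per address =
# (1 - vacancy rate) * response rate, so addresses = target / yield.

RESPONSE_RATE = 0.50
VACANCY_RATE = 0.11
yield_per_address = (1 - VACANCY_RATE) * RESPONSE_RATE  # 0.445 completes per address

for label, target, n_cbsas in [("large CBSA", 7_500, 3), ("other CBSA", 2_100, 37)]:
    addresses = target / yield_per_address
    print(f"{label}: {addresses:,.0f} addresses each; "
          f"{n_cbsas * round(addresses):,} across {n_cbsas} CBSAs")
```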

The larger sample size in the three large CBSAs allows adequate sample for producing subarea estimates. In the three largest CBSAs, the sample size would be allocated to the central city and Remainder strata. The Remainder stratum would be allocated the sample size it would receive under proportional allocation if 2,100 interviews were targeted for the entire CBSA; the central city stratum would receive a sample size equal to 7,500 minus this number. For example, in Chicago about 31% of households are located in the central city and 69% in the Remainder of the CBSA, based on Census 2010 data. Therefore about 1,440 completes (0.687 × 2,100) would be allocated to the Remainder stratum, and 6,060 (7,500 − 1,440) to the central city. This prevents the Remainder stratum from being undersampled and keeps the allocation to this stratum proportional, as it would be in the 37 CBSAs where the target is only 2,100 interviews and there is no stratification. The 6,060 would be equally allocated to the Low, Medium, and High crime rate strata, unless this results in such unequal sampling rates that the design effect due to unequal weights exceeds 2 for the central city. In that case, a compromise allocation would be determined with sample sizes as equal as possible in the three strata and a design effect of at most 2.
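
The allocation rule and the design-effect constraint can be expressed in a few lines. In this sketch the central-city stratum population shares are hypothetical placeholders, and the design effect due to unequal weights is approximated with the Kish formula, deff = n Σ n_h w_h² / (Σ n_h w_h)², with w_h ∝ N_h / n_h.

```python
# A minimal sketch of the Chicago allocation logic described above, with a
# Kish design-effect check. Central-city stratum shares are placeholders.

def kish_deff(pop_shares, samp_alloc):
    """deff = n * sum(n_h * w_h^2) / (sum(n_h * w_h))^2, w_h ~ N_h / n_h."""
    n = sum(samp_alloc)
    weights = [p / m for p, m in zip(pop_shares, samp_alloc)]
    num = n * sum(m * w * w for m, w in zip(samp_alloc, weights))
    den = sum(m * w for m, w in zip(samp_alloc, weights)) ** 2
    return num / den

TARGET_CBSA, TARGET_LARGE = 2_100, 7_500
remainder_share = 0.687                      # Chicago households outside the central city
remainder_n = round(remainder_share * TARGET_CBSA / 10) * 10   # ~1,440 completes
central_city_n = TARGET_LARGE - remainder_n                    # 6,060 completes

alloc = [central_city_n // 3] * 3            # equal split across Low/Medium/High strata
pop_shares = [0.45, 0.35, 0.20]              # hypothetical household shares of the strata
print(f"Remainder: {remainder_n}, central city: {alloc}, "
      f"deff = {kish_deff(pop_shares, alloc):.2f}")  # must stay <= 2
```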

Sample Selection

Addresses will be sampled with equal probability within each stratum in a CBSA. Prior to sampling, the address frame will be sorted geographically by state and zip code (the sampling program’s default sort) within each stratum. Note that the Remainder stratum in the Philadelphia and Chicago CBSAs will contain addresses in more than one state.

Expected Level of Precision

An important goal of the analysis is to test whether the correlation between victimization rates for the proposed Companion Study and the core NCVS rates and FBI Uniform Crime Report (UCR) rates is significantly different from zero. For the Field Test, victimization rates comparable to the existing NCVS rates cannot be calculated because the ILS and PLS questionnaires collect details on only the four most recent crimes, not all crimes (see the Analysis section), so we will instead calculate the proportion of households and persons reporting a crime (“touched by crime”). A high correlation between the NCVS CS rates and the core NCVS rates across the CBSAs would indicate that the CS is able to detect differences in victimization rates across localities. We will have 40 pairs of victimization rates with which to calculate a correlation coefficient for each data source and type of crime. For example, we will be able to calculate the correlation between the proposed NCVS CS rates and the existing NCVS rates for property crime, and also for violent crime. The analysis may also involve regressing the proposed NCVS CS rates on the existing NCVS rates across the set of 40 CBSAs. We can do a similar analysis using the FBI UCR crime rates.

To determine the likelihood of detecting a significant correlation, a power analysis was conducted under varying assumptions about the correlation and the number of completed interviews. The population mean and variance for the true crime rate distribution were estimated by first estimating property and violent crime rates for 38 of the largest CBSAs using the 2012 FBI UCR, then averaging across the 38 CBSA estimates for each crime type to estimate the population mean. The population standard deviation across CBSAs was estimated from the variability of the 38 CBSA estimates. (Two of the 40 CBSAs were omitted because they lacked UCR data for 2012.) The power analysis was then run separately for property and violent crime rates. The power analysis was repeated using 2009-2011 NCVS crime rates for the same CBSAs to estimate the population parameters. The population mean and standard deviation were calculated by pooling 2009-2011 NCVS data to increase the stability of the estimates. The power analysis accounted for the sampling error in the CS CBSA estimates by estimating the attenuation of the correlation coefficient that would occur because of the sampling error.
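
A minimal sketch of this type of calculation follows: a two-sided Fisher-z test of H0: rho = 0 across m = 40 CBSA pairs, after attenuating the correlation for sampling error in the CS estimates. The population mean and SD of the cross-CBSA crime-rate distribution used here are illustrative placeholders; the actual analysis derived them from UCR and pooled NCVS data as described above.

```python
# Fisher-z power for a correlation test, with attenuation for sampling error.

from math import atanh, sqrt
from statistics import NormalDist

def power_rho(true_rho, pop_sd, p_mean, n_completes, m_areas=40, alpha=0.05):
    samp_var = p_mean * (1 - p_mean) / n_completes       # sampling variance of one CBSA estimate
    reliability = pop_sd**2 / (pop_sd**2 + samp_var)     # true-score share of observed variance
    rho_obs = true_rho * sqrt(reliability)               # attenuated correlation
    nd = NormalDist()
    z = atanh(rho_obs) * sqrt(m_areas - 3)               # Fisher z statistic under H1
    zcrit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(z - zcrit) + nd.cdf(-z - zcrit)        # two-sided power

# e.g., true rho = 0.5, placeholder mean rate 15% and cross-CBSA SD of 3 points:
print(f"power ≈ {power_rho(0.5, 0.03, 0.15, 2000):.3f}")
```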

The power analysis showed that if the true correlation coefficient (rho) is at least .5 and the number of completed interviews for the CS crime rate estimates is 2,000, the power to detect a significant correlation is about .86 for property crime and .82 for violent crime. For the three large CBSAs, this assumes no design effect from oversampling the central city stratum. The results of the power analysis using the FBI UCR data to estimate the population parameters are given in Tables 1 and 2 below. The power analysis using the 2009-2011 existing NCVS data gave slightly higher power for the same assumptions.

Table 3 gives the power for testing differences between proportions for different subareas within the three large CBSAs. These calculations assume that the sample is allocated proportionally across strata. Power calculations are given for differences between two low proportions (such as estimated victimization rates) and for differences between two higher proportions (which might arise for some of the community questions). Table 4 gives 95% confidence interval half-widths for proportions based on a range of sample sizes. These tables show, for example, that in the Chicago CBSA we would be able to estimate proportions for the Low, Medium, and High crime areas with CI half-widths of no more than 2 to 3 percentage points and detect differences of 4 percentage points between areas with power of .77 or better (assuming an effective sample size of 1,000 to 2,000 completes per area after accounting for a design effect of 2).
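
Figures of the kind shown in Tables 3 and 4 (below) can be reproduced with standard normal-approximation formulas. This is a minimal sketch assuming simple random sampling within groups: the 95% CI half-width 1.96·sqrt(p(1−p)/n), and two-sided power for H0: P1 − P2 = 0.

```python
from math import sqrt
from statistics import NormalDist

ND = NormalDist()
Z = ND.inv_cdf(0.975)  # ~1.96

def half_width(p, n):
    return Z * sqrt(p * (1 - p) / n)

def power_two_props(p1, p2, n1, n2):
    pbar = (n1 * p1 + n2 * p2) / (n1 + n2)
    se0 = sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))    # SE under H0 (pooled)
    se1 = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # SE under H1
    d = abs(p1 - p2)
    return ND.cdf((d - Z * se0) / se1) + ND.cdf((-d - Z * se0) / se1)

print(f"half-width, p=10%, n=1,000: {100 * half_width(0.10, 1000):.2f}%")               # 1.86, as in Table 4
print(f"power, 10% vs 6%, n=1,000 each: {power_two_props(0.10, 0.06, 1000, 1000):.2f}")  # ~0.91, as in Table 3
```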

Estimates of Change

An important objective of the low-cost alternative is to provide estimates of change over time at a local level. The change estimates could be especially valuable for assessing the effects of programs or interventions introduced within a local area. The characteristics of interest may be estimates of changes in rates of victimization or attitudes as measured in non-crime items.

The design of the Field Test will include two administrations of the survey spaced one year apart to provide guidance on design decisions associated with measuring change. The sample of addresses will be partially overlapping, with 25 percent of the addresses sampled in the first year included again in the second year. The addresses sampled in the overlap portion will be retained for the second year even if they do not respond in the first year. The remaining 75 percent of the sample in each year will be sampled independently.

The purpose of the overlap is to provide information on the effectiveness of overlapping the sampled addresses. Statistically, we know that a completely overlapping design is most efficient for measuring change if the correlation over time is positive. However, sending surveys to the same addresses more than once could introduce effects that need to be considered in planning. For example, response rates may differ for those in the overlap (they may be higher or lower). Other types of effects such as conditioning error (time-in-sample effects) could also occur.
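
The efficiency argument can be made concrete with the usual variance formula for a difference of means under partial overlap. All inputs in this sketch are illustrative placeholders.

```python
# With overlap fraction f and address-level over-time correlation rho,
# Var(ybar2 - ybar1) ≈ (s1^2 + s2^2 - 2*f*rho*s1*s2) / n.

from math import sqrt

def se_change(s1, s2, n, f, rho):
    return sqrt((s1**2 + s2**2 - 2 * f * rho * s1 * s2) / n)

s = 0.35   # placeholder per-address SD of a touched-by-crime indicator
n = 2100   # completes per CBSA per year
for f in (0.0, 0.25, 1.0):
    print(f"overlap {f:.0%}: SE of change = {se_change(s, s, n, f, rho=0.4):.4f}")
```

Running the sketch shows the standard error of the change estimate shrinking as the overlap fraction grows, which is the statistical rationale stated above; the Field Test's 25 percent overlap trades some of that efficiency for the ability to measure response-rate and conditioning effects.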

The Field Test will enable us to examine some of these effects. Correlations over time from the same addresses will be computed from the overlap sample for victimization rates and for estimates from the non-crime items. The response rates (and costs) for the overlap and non-overlap samples will be computed to assess the effect of overlapping on the propensity to respond. Estimates of change in the proportion of households or persons “touched by crime” and estimates of non-crime items will also be compared to evaluate whether surveying the second year has effects on responses to these items. At the aggregate level across all 40 CBSAs, the sample sizes are sufficient for estimating all of these types of statistics with a high degree of precision.

While these objectives are important, it is also worth noting that the Field Test has limitations with regard to evaluating estimates of change. One limitation is that there are no accurate measures of change at the local level to compare against the outcomes of the survey. The core NCVS is not very precise at this level, the UCR has other quality issues, and changes in estimates over a one-year period are likely to be too small to measure well. Nevertheless, we can compare estimates of change in the proportion touched by crime across the aggregate of the 40 CBSAs from the Companion Survey and the core NCVS. This type of comparison can also be done separately for the overlapping and independent samples. While not at the local level, it does provide a quality check on the Companion Survey estimates of change.

Another limitation is that using this data collection approach to evaluate interventions at the local level would require more tailoring than is possible in the Field Test. For example, non-crime items specific to the intervention could be used, the sample could be targeted to the areas within the CBSA receiving the intervention, and the pre- and post-surveys could be timed to capture the full effect of the intervention. These are not feasible for the Field Test. The Field Test estimates also reflect a one-year interval between administrations; different schedules would lead to different estimates of correlations and response effects.

Table 1. Power for test of H0: Rho=0, based on FBI UCR Property Crime Rates

Rho     n=1000    n=1500    n=2000    n=2500
0.2     0.1897    0.2008    0.2070    0.2110
0.3     0.3737    0.3981    0.4117    0.4203
0.4     0.6056    0.6401    0.6586    0.6701
0.5     0.8166    0.8478    0.8633    0.8726
0.6     0.9456    0.9618    0.9689    0.9729
0.7     0.9919    0.9958    0.9972    0.9978
0.8     0.9996    0.9999    1.0000    1.0000
0.9     1.0000    1.0000    1.0000    1.0000


Table 2. Power for test of H0: Rho=0, based on FBI UCR Violent Crime Rates

Rho     n=1000    n=1500    n=2000    n=2500
0.2     0.1672    0.1825    0.1918    0.1980
0.3     0.3229    0.3576    0.3783    0.3919
0.4     0.5292    0.5821    0.6121    0.6314
0.5     0.7389    0.7940    0.8227    0.8402
0.6     0.8951    0.9323    0.9490    0.9581
0.7     0.9737    0.9880    0.9928    0.9950
0.8     0.9968    0.9992    0.9997    0.9998
0.9     0.9999    1.0000    1.0000    1.0000


Table 3. Power for Two-sided Test of H0: P1-P2=0, alpha=.05

P1      P2      n1=n2=600   n1=n2=800   n1=n2=1,000   n1=n2=1,500   n1=n2=2,000
10%     5%      0.92        0.97        0.99          1.00          1.00
10%     6%      0.73        0.84        0.91          0.98          1.00
10%     7%      0.46        0.58        0.68          0.84          0.93
10%     8%      0.23        0.29        0.35          0.48          0.60
50%     45%     0.41        0.52        0.61          0.78          0.89
50%     44%     0.55        0.67        0.77          0.91          0.97
50%     43%     0.68        0.80        0.88          0.97          0.99
50%     42%     0.79        0.90        0.95          0.99          1.00




Table 4. 95% Confidence Interval Half-Widths (in percentage points) for a Proportion based on Sample Size n

Sample            Proportion
Size n    10%     20%     30%     40%     50%     60%     70%     80%     90%
100       5.88    7.84    8.98    9.60    9.80    9.60    8.98    7.84    5.88
200       4.16    5.54    6.35    6.79    6.93    6.79    6.35    5.54    4.16
300       3.39    4.53    5.19    5.54    5.66    5.54    5.19    4.53    3.39
400       2.94    3.92    4.49    4.80    4.90    4.80    4.49    3.92    2.94
500       2.63    3.51    4.02    4.29    4.38    4.29    4.02    3.51    2.63
600       2.40    3.20    3.67    3.92    4.00    3.92    3.67    3.20    2.40
800       2.08    2.77    3.18    3.39    3.46    3.39    3.18    2.77    2.08
1,000     1.86    2.48    2.84    3.04    3.10    3.04    2.84    2.48    1.86
1,500     1.52    2.02    2.32    2.48    2.53    2.48    2.32    2.02    1.52
2,000     1.31    1.75    2.01    2.15    2.19    2.15    2.01    1.75    1.31


B2. Collection Procedures


B2.1 Instrumentation


Draft instruments were developed using content from the current NCVS as well as from extant state and local area crime surveys. Two versions are being tested: one that collects details at the incident level, and one that collects information at the person level. Copies of the initial draft instruments were provided to OMB in a submission for cognitive testing. Since that time, the instruments have been revised based on the results of the cognitive testing and the findings from a small-scale pretest. The revised instruments for both approaches are included in Appendix A. Each instrument relies on a household-level respondent to report for all adult household members:


  • Person-Level Survey. This first questionnaire departs from the core NCVS in that it focuses on victims rather than on incidents of crime. The Person-Level Survey (PLS) asks questions about each adult in the household. The strengths of this instrument are its simplicity and ease of administration; its weakness is that the resulting estimates are person-based, not incident-based, and so are not directly comparable with published NCVS crime rates. Pretest respondents had some difficulties with the PLS design; however, the problems were clear and easily remedied.


  • Incident-Level Survey. The second instrument asks about victimization incidents, and associates them with household members. In this approach we use a detailed set of questions to collect data on the “most recent” household property and personal/violent crimes. This instrument is more complex than the PLS, but much less complex than the NCVS. If successful, it would yield estimates similar to those from the core NCVS. The strength of this instrument is that it should yield a level of detail that is closer, but not identical, to that of the core NCVS. Based on findings from the small-scale pretest and debriefing interviews, the Incident Level Survey (ILS) generally performed well. Navigation or comprehension issues identified by the pretest have been addressed in the revised instruments for the Field Test.


Structure of the Person Level Survey


Estimates derived from the PLS will represent the number of persons victimized at least once by each of the enumerated violent crimes (see below), and the number of households victimized by broad categories of property crime. These would be comparable to the prevalence estimates recently published for the NCVS (Lauritsen and Rezey, 2013).3 This version of the CS will not be able to estimate the number of times a person or household was victimized or the number of crime incidents, which are the more common estimates provided by the NCVS. For purposes of tracking trends or developing local policies, it should be sufficient to understand how different people and households are touched by victimization.


Depending on the experimental condition, either the beginning or the end of the person-level instrument includes questions on perceptions of community safety and the police. These questions are discussed in a subsequent section (see the section headed "Non-crime" questions).


The remainder of the questionnaire collects information on household experiences with crime. An initial section includes questions about household break-ins and theft of household property, including motor vehicles. The survey then shifts to questions about crimes experienced by each adult in the household. The questionnaire asks whether an adult in the household has experienced each of several types of crime at least once during the 12-month reference period. Data will be collected for up to four adults in the household. If a respondent reports a crime, they are asked to summarize the victimizations that occurred against the subject within the last 12 months. The open-ended responses will be used as qualitative information about the incident and will also support data editing, as is done in the current NCVS.


The respondent is asked a series of questions to characterize the violent crimes experienced by adults in the household, with the goal of approximating the following NCVS Type of Crime categories for violent crimes:


  1. Assault. Items 30-48 ask Adult 1 about being attacked or threatened. Aggravated and simple assault are distinguished by whether the incidents involved a weapon (items 32 and 42) and whether there was an injury (item 34).

  2. Robbery. This is distinguished by whether the attack or threat involved stealing something (items 35 and 44).

  3. Rape and Sexual Assault. Questions about unwanted sexual activity are asked in Items 49-66.

  4. Domestic and Intimate Partner Violence. Items 36, 45, 54, and 63 ask about the relationship between victim and offender. These can be used to classify the event into one of these two categories.


Because the instrument is at a person level and we want to keep it relatively short, it is not possible to replicate more detailed NCVS type of crime criteria. The goal is to approximate the important distinctions. Appendix B presents more detail on the estimates that the two instruments would support.


There are also items designed to collect data on property crimes, including Burglary (Item 10), Motor Vehicle Theft (Item 20), and Larceny (Items 21 and 67). As in the NCVS, the intent is to count these crimes for the entire household, rather than at a person level. Items 70-72 ask about identity theft and credit card fraud. Finally, all survey sections ask whether any of the crimes were reported to the police (Items 16, 24, 37, 46, 55, 64, 69, and 73). Local jurisdictions are particularly interested in incidents that may not be reported to the police.


Structure of the Incident-Level Survey


This instrument will ask respondents to report and describe discrete victimization events experienced by any adults in the household. It will support estimates of the number of victimizations by type and victimization rates as defined by the NCVS. Since the respondent is asked to identify the victims of each incident, it should also be possible to compute a prevalence rate, as with the person/household-level instrument above.


This instrument begins with collecting information about the household and its members, and also includes the same “non-crime questions” as the person-level instrument. The victimization questions collect data on any incidents involving an enumerated adult household member. These items are divided into questions on violent crime and thefts/break-ins. The violent crime section starts with a series of screening questions asking if anyone in the household has been a victim. The next four sections ask for details of the four most recent violent crime incidents that occurred against an enumerated adult. Each section asks for details needed to classify and describe the incident. It begins by asking for the month/year of the incident and a summary of what happened. The remaining items ask for details needed to classify the incident into one of the major violent crime categories, including:


  1. Rape and Sexual Assault. Victim was confronted (Item 9) and there was a sexual assault of some type (Items 16 – 19).

  2. Robbery. Victim was confronted (Item 9) and the perpetrator attacked, attempted to attack or threatened victim with harm (Items 13 - 15) and something was stolen (items 25, 26).

  3. Assault. Victim was confronted (Item 9) and the perpetrator attacked, attempted to attack or threatened the victim with harm (Items 13 - 15). Simple assault is when there is no injury (item 20) and no weapon was involved (Item 12). Aggravated assault is when there is an injury or there is a weapon involved.

  4. Domestic and Intimate Partner Violence. This uses Item 11 to classify incidents into these groups. This can also be narrowed down to ‘Serious’ incidents, as defined by the NCVS.


Respondents are asked to provide these details for up to two incidents that occurred against members of the household. For two additional incidents, there is space to provide general information: the month/year of occurrence and a summary of what happened (e.g., Item 53). If there are more than four incidents, the respondent is asked to provide the number of additional incidents.


There are a few questions included in the violent crime section that will be used to describe the event in more detail: (1) whether the police were informed and, if so, what they did (Items 23, 24); (2) the dollar value of anything that was stolen; and (3) the location of the incident.


After the violent crime section, the respondent is asked about thefts and break-ins in a section structured similarly to the violent crime section. It begins with a series of screening questions asking about theft of property, break-ins, and car thefts. The next four sections ask about the details of the four most recent incidents. Respondents are asked not to report details for a property crime if the incident was already reported in the violent crime section. This will allow imposing the same hierarchy used by the NCVS.


Each section begins by asking for the month/year of the incident and for a summary of what happened. The remaining items ask about details needed to classify the incident into one of the major property crime categories of:

  1. Burglary. The perpetrator broke into the home or tried to break in (items 72 and 73) and there was evidence of a break-in (item 74).

  2. Motor Vehicle Theft. A motor vehicle was stolen or someone tried to steal a motor vehicle (Items 77 and 78).

  3. Larceny. Something was stolen or someone tried to steal something (Items 75 and 76).


There are a few questions included in these sections that will be used to describe the event in more detail: whether the police were informed (Item 80) and the location of the crime (Item 71). These sections provide space for the collection of all details for up to four incidents. At the end of the fourth incident, respondents are asked to provide the number of any additional incidents that may have occurred against the household.


The final section collects data on vandalism and identity theft/credit card fraud, recording the total number of incidents that occurred for each type of crime. This section also collects data on household income.


“Non-crime” questions


Both questionnaire versions include questions on perceptions of nuisance crimes and disorder, fear and safety, and police performance and legitimacy. These “non-crime” indicators are independent from police statistics and provide a perspective from the community. Following is an overview of these indicators and examples of each:


Nuisance and disorder: “Neighborhood residents are concerned about a broad range of problems, including traffic enforcement, illegal dumping, building abandonment, and teenage loitering (Skogan and Hartnett, 1997). One aspect of this new and larger police agenda is an untidy bundle of problems that can be considered as “disorder.” For many purposes, it is useful to think of these problems as falling into two general classes: social and physical (Skogan 1996).” Examples include:


  • On the whole, is this neighborhood a good place to live?

  • People around here are willing to help their neighbors.

  • How much of a problem is litter, broken glass, or trash on sidewalks and streets?

  • Do you think public drinking is a problem in your neighborhood?


Fear and safety: Research on fear of crime conceptualizes it in one of four ways. Three definitions are cognitive in nature, reflecting people’s concern about crime, their assessments of personal risk of victimization, and the perceived threat of crime in their environment. The fourth definition is behavioral, defining fear by the things people do in response to crime: avoiding activities and areas, restricting behaviors, and increasing home and self-protection measures. Examples include:


  • How much of a problem is crime in your neighborhood?

  • How fearful are you of being a victim?

  • To what extent are you fearful that someone will break into your home?

  • Percent of homes with home alarms

  • Presence of an active neighborhood watch program


Citizens’ perceptions of police performance and legitimacy: These indicators include measures of police performance, production, quality of police service, visibility of policing, police-citizen contacts, and satisfaction with police and police encounters. Examples include:


  • Do the police do a good job dealing with problems that concern people?

  • When you call 911, does help arrive quickly?

  • How effective is the police department in dealing with neighborhood problems?

  • How satisfied were you with the police efforts?


We have included a selection of such questions for two reasons: (1) in response to widespread interest, especially among local jurisdictions, in such measures in concert with victimization measures; and (2) to reduce the potential for “topic salience bias” in the mail questionnaires. Topic salience bias would occur if, in this instance, households experiencing a crime were more likely to return the survey than those that had not. Including questions salient to a wider audience should reduce the potential for this kind of bias.


A concern about including such questions is that they might have an unintended effect on reporting victimizations. To assess this threat, the Field Test will include a split-ballot experiment; in each instrument, half of the surveys will place the non-crime items at the beginning and half at the end.


In the interests of burden and cost, we have limited the number of “non-crime” questions to one page on the mail instruments. If the Field Test is successful, these questions could be tailored to the needs and interests of each local jurisdiction.


Instrument Testing


Cognitive testing of the ILS and PLS instruments began in July 2013 and concluded in December 2013. A small-scale pretest was conducted in June-August 2013, and the findings were used to revise the instruments in advance of the Field Test. The goals of the small-scale pretest were to:


  • assess the unit and item response rates;

  • determine what level of detail respondents provide when not prompted by an interviewer; and

  • investigate reasons for incomplete instruments or improperly completed questions.


We selected a simple random sample of 2,500 addresses in the Continental U.S. Half the sample received an ILS instrument and half a PLS instrument. Each address received a wave 1 questionnaire with a cover letter and a $2 incentive; a postcard reminder followed about a week later. Those not responding within 20 days of the wave 1 questionnaire were mailed a wave 2 questionnaire. A final Federal Express mailing was sent to nonrespondents about three weeks after wave 2. Using this methodology, we surpassed a 50 percent response rate. Based on the previous CS Pilot Study (which attempted to replicate the NCVS using standardized, computer-assisted telephone interviewing), we had estimated that about 20 percent of responding households would report some type of victimization. We found that the prevalence was lower in the pretest, with 16.8 percent of ILS respondents and 12.8 percent of PLS respondents reporting a crime. Note that due to small sample sizes, these pretest differences are not statistically significant.


In order to assess the impact of including the non-crime questions in the CS, we embedded an experiment where half of the instruments included non-crime questions at the beginning, and half at the end of the instrument. The goal was to assess the impact of placement on: (1) response rate, (2) the estimates of crime, and (3) the distribution of responses to the non-crime questions. Results from the pretest were inconclusive. There are indications that placement did not impact response rate, nor did placement have an impact on how the non-crime questions were answered. There may be an indication that placement affects crime reporting, with upfront placement potentially improving respondent recall, but the difference was not statistically significant. Since the pretest was inconclusive, we plan to continue this experiment into the Field Test.



B2.2 Data Collection


As indicated earlier, there are two main instruments to be tested in the CS Field Test (refer to Appendix A). These include an Incident Level Survey that asks household respondents to describe up to four household property crimes4 experienced in the past 12 months, as well as describe up to two violent crimes that the enumerated adults may have experienced in the past 12 months. The ILS data will be at the incident-level, with an observation for each crime reported and described. The second instrument is a Person Level Survey that asks household respondents to report whether the household has experienced certain property crimes in the past 12 months, and whether each adult (up to four) has experienced certain violent crimes in the past 12 months. The PLS data will be at the household and person-level, with an observation for the household overall, and for each adult.


Households selected for the CS Field Test will be asked to complete either the ILS or the PLS instrument. The data collection methodology includes:

  • Wave 1 Survey Packet. A survey packet will be mailed using USPS 1st class postage. The packet will include an introductory letter and a $2 incentive.

  • Thank you / Reminder. All sampled households will receive a thank you /reminder message.

    • Addresses without a telephone number from Directory Assistance will be mailed a postcard.

    • Addresses where a telephone number is available will receive either a postcard or an automated telephone call (we plan a split-ballot experiment where some addresses will receive a postcard and others will receive a telephone reminder).

  • Wave 2 Survey Packet. The wave 2 mailing will be sent only to nonresponding households. This survey will also be mailed using USPS 1st class postage, but it will not include a monetary incentive. The packet will include a cover letter that highlights the importance of response.

  • Thank you / Reminder. Note that the CS pretest did not include a thank you / reminder after the wave 2 survey, and instead went directly to the wave 3 mailing. However, depending on Field Test response, we may consider a targeted thank you / reminder. The goal would be to improve response in advance of the wave 3 mailing.

  • Wave 3 Survey Packet. This wave 3 mailing will be sent only to nonresponding households. In the CS pretest, the wave 3 packet was mailed to all nonrespondents using Federal Express delivery. We plan to do the same for the Field Test.

  • Optional wave 4 mailing. The CS pretest data collection efforts ended with the wave 3 mailing. Depending on response in the Field Test, one option may include adding a targeted wave 4 mailing for low response groups (such as high crime blocks).


The schedule for data collection assumes a Fall 2015 start. Appendix C includes the draft schedules. Appendix D includes English-language examples of the supplemental materials (e.g., letters, postcards); materials will also be available in Spanish.

B2.3 Estimation


We will employ statistical weighting adjustments to reflect the oversampling in the large CBSAs and to help compensate for differential survey response rates and undercoverage. A household base weight will be calculated as the inverse of the household’s probability of selection. The household base weights will be adjusted for nonresponse within cells formed from variables correlated with response propensity, as identified by a logistic regression model (see Section B3). Nonresponse adjustment factors are designed to reduce the potential bias caused by differences between the responding and nonresponding populations. The adjustment factors are calculated as the reciprocal of the weighted response rates for the adjustment cells.

Poststratification or raking of the nonresponse-adjusted weights to household control totals will be done by tenure (owner vs renter), household size (number of adults in the household), and location (central city vs remainder of CBSA) for each CBSA, using the household’s questionnaire responses to define poststratification cells or raking dimensions. Missing values for variables needed to form cells will be imputed. Control totals by location can only be used if the central city stratum was defined using Census geography variables, or if control totals for the central city stratum are available from some other source. The household weight would be used in household level estimates, such as the percent of households victimized by a property crime.

The two instruments collect information on adults in the household differently, but each allows the individual adults in the household to be identified along with their demographic characteristics. A person base weight equal to the final household weight will be assigned to each identified adult in the household, since there is no sampling of adults within the household. The person base weights will be raked by age, sex, race, and education within CBSA to compensate for (1) failure to collect data for all identified adults in the household, and (2) undercoverage due to failure to roster or list all the adults (up to 4) in the household. Item response rates for age, sex, race, and education will be considered before deciding on the final raking dimensions. The person weights would be used in calculating person-level estimates, such as the percent of persons victimized by a violent crime.
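
A compact sketch of the weighting pipeline described in this section follows: base weights, a cell-based nonresponse adjustment (the reciprocal of the weighted cell response rate), and raking to marginal control totals via iterative proportional fitting. The data, adjustment cells, and control totals are hypothetical; the production system would operate on the actual frame and questionnaire variables.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
prob_selection = np.full(n, 1 / 500.0)          # equal-probability within stratum
base_w = 1.0 / prob_selection                   # base weight = inverse selection probability

cells = rng.integers(0, 4, n)                   # nonresponse adjustment cells
responded = rng.random(n) < 0.5
w = base_w.copy()
for c in range(4):
    in_cell = cells == c
    rr = w[in_cell & responded].sum() / w[in_cell].sum()   # weighted cell response rate
    w[in_cell & responded] /= rr                           # adjustment factor = 1 / rr
w = w[responded]

# Rake respondent weights to marginal totals for two dimensions (e.g., sex, tenure)
sex = rng.integers(0, 2, w.size)
tenure = rng.integers(0, 2, w.size)
targets_sex = np.array([240_000, 260_000])
targets_tenure = np.array([300_000, 200_000])
for _ in range(25):                              # iterative proportional fitting
    for dim, targets in ((sex, targets_sex), (tenure, targets_tenure)):
        current = np.bincount(dim, weights=w, minlength=2)
        w *= (targets / current)[dim]

print(np.bincount(sex, weights=w), np.bincount(tenure, weights=w))  # margins hit targets
```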


B2.4 Analysis



The NCVS CS Study is intended to explore whether a low-cost self-administered victimization survey can provide valuable information for comparing victimization rates across local areas and assessing local area changes in victimization rates over time.

Two versions of the instrument are being tested: a person-level survey (PLS), and an incident-level survey (ILS). The PLS asks about victimization experiences of the adults in the household (for up to four adults), and the ILS first collects a household roster (for up to four adults) and then asks about victimization incidents occurring to adults in the household.

Both questionnaire versions include questions on fear of crime, police performance, and perceptions of safety—the “non-crime” questions. These questions have exactly the same wording on the PLS and the ILS. Half of the questionnaires in each of PLS and ILS have the non-crime questions at the beginning of the survey, and the other half have the non-crime questions at the end of the survey. There are thus four experimental groups: PLS with non-crime questions at the beginning, PLS with non-crime questions at the end, ILS with non-crime questions at the beginning, and ILS with non-crime questions at the end.

Objective 1: Compare the PLS and ILS versions of the instrument, and the placement of non-crime questions at the beginning or end of the survey.

The four experimental groups will be compared with respect to household response rates and demographics of rostered/listed adults. All comparisons will make use of the blocked design for the study, with CBSA as the blocking unit. Additionally, the item nonresponse patterns will be compared for the groups. A nonresponse bias analysis will be conducted for each instrument, following the protocol in Section B.3.

Because the PLS and ILS collect information on the adults in the household using different formats, the percentages of households with one, two, three, or four or more adults for each instrument will be compared with percentages of these quantities found for each CBSA from the American Community Survey. The percentages of the rostered/listed adults from each instrument in different age/race/sex/education categories will be compared with American Community Survey percentages for those categories.

We will also compare the demographic profiles of the household respondent (“person 1” on the questionnaire) for the four experimental groups.

The core NCVS and the UCR both collect a tally of crime incidents, and thus are able to estimate a victimization rate as the number of victimizations occurring per 1,000 households or per 1,000 persons. Neither the PLS nor the ILS is designed to calculate victimization rates under this definition. Instead, each is designed to measure the rate of persons or households “touched by crime”—that is, the percentage of persons or households victimized at least once in the past 12 months. The four experimental groups will be compared with respect to four “touched by crime” measures (a computational sketch follows the list):

A. Percent of households reporting at least one crime

B. Percent of households reporting at least one property crime

C. Percent of households reporting at least one violent crime

D. Percent of persons reporting at least one violent crime
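
As a minimal illustration, measures A through D could be computed from weighted household and person records as below; the field names, rates, and weights are hypothetical placeholders, not the actual CS file layout.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hh = 5000
hh_weight = rng.uniform(50, 150, n_hh)
property_crime = rng.random(n_hh) < 0.10        # household reported any property crime
violent_crime_hh = rng.random(n_hh) < 0.05      # any violent crime in the household
any_crime = property_crime | violent_crime_hh

def wpct(flag, w):
    return 100 * (w * flag).sum() / w.sum()     # weighted percent

print(f"A. any crime:        {wpct(any_crime, hh_weight):.1f}%")
print(f"B. property crime:   {wpct(property_crime, hh_weight):.1f}%")
print(f"C. violent crime:    {wpct(violent_crime_hh, hh_weight):.1f}%")

# Measure D uses person records and person weights
n_p = 9000
p_weight = rng.uniform(50, 150, n_p)
violent_crime_p = rng.random(n_p) < 0.03
print(f"D. persons, violent: {wpct(violent_crime_p, p_weight):.1f}%")
```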



In addition, the responses to the non-crime questions will be compared for the four experimental groups to ascertain whether they systematically differ in terms of the respondents’ perceptions of neighborhood safety or police effectiveness. The comparison will be made using both unweighted percentages and percentages that adjust for the possibly different demographic compositions of the household respondents.

Objective 2: Evaluate the ability of the CS to compare victimization rates across areas.

Although the PLS and ILS use a different measure of victimization than the NCVS and UCR, we would expect the percentages of persons reporting at least one violent crime on the CS surveys to be highly correlated with the violent crime victimization rates from the NCVS or UCR. We will compute the percentages for items A through D for each CBSA, then calculate the correlation between those percentages, across CBSAs, for the following pairs (see the sketch after this list):

  • PLS and ILS

  • PLS/ILS and UCR CBSA victimization rates

  • PLS/ILS and core NCVS CBSA victimization rates (calculated using multiple years)

  • PLS/ILS and core NCVS CBSA estimates of “touched by crime” in previous six months
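
A sketch of the cross-CBSA correlation and regression computation follows; the input arrays are placeholders standing in for the estimated rates.

```python
import numpy as np
from math import atanh, sqrt

rng = np.random.default_rng(3)
ncvs_rate = rng.uniform(0.10, 0.25, 40)                 # core NCVS rates, 40 CBSAs (placeholder)
cs_rate = 0.8 * ncvs_rate + rng.normal(0, 0.02, 40)     # CS "touched by crime" rates (placeholder)

r = np.corrcoef(cs_rate, ncvs_rate)[0, 1]               # Pearson correlation across CBSAs
slope, intercept = np.polyfit(ncvs_rate, cs_rate, 1)    # regression of CS on core NCVS

z = atanh(r) * sqrt(40 - 3)                             # Fisher z test of H0: rho = 0
print(f"r = {r:.3f}, slope = {slope:.3f}, Fisher z = {z:.2f} (reject H0 if |z| > 1.96)")
```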



Similar analyses will be performed within each of the oversampled CBSAs, comparing the “touched by crime” rates from the PLS/ILS for the high-, medium-, and low-crime areas as identified by statistics from the central city police department.

We will also examine the distributions of victimizations reported on the PLS/ILS by types of crime, and compare these with distributions from the core NCVS.

Objective 3: Evaluate the ability of the CS to detect changes in touched-by-crime rates over time.

The core NCVS does not have sufficient sample sizes in most CBSAs to be able to reliably estimate a one-year change in victimization or touched-by-crime rate within a CBSA. To look at change over one year, we will compare the direction of the average change in the touched-by-crime rate using the CS with the analogous statistic from the core NCVS, for each of the four experimental groups cross-classified by PLS/ILS and placement of non-crime questions. Additionally, we will compare the estimates of change in the touched-by-crime rates and in the non-crime questions, for the overlap sample in the CS (25% of the addresses within each CBSA) and the non-overlapping sample.

Objective 4: Evaluate “non-crime” questions.

There has been increasing interest in assessing police performance and perceptions of community safety. This type of information could be particularly valuable at the local level, providing information for local governments on public opinion about the community. This may be especially useful for localities interested in assessing the impact of any community-level safety or policing initiatives. The CS could be an effective tool for capturing such data, and could also provide a mechanism for localities to compare themselves to some benchmark or to similar cities/areas. The Field Test instruments include placeholder questions designed to test the impact of these non-crime questions on survey response and on crime reporting. Since the recent pretest was small and its results inconclusive, we plan to continue experimentation into the Field Test; as indicated earlier, half of the surveys will include these non-crime questions at the beginning of the instrument while the other half will include these questions at the end.

B3. Describe methods to maximize response rates and to deal with issues of non-response.


In preparation for the Field Test, BJS conducted a small-scale pretest to assess both the instruments and the data collection methodology. The pretest used an initial survey mailing, followed by a thank you/reminder postcard, and up to two additional survey mailings to nonrespondents. A $2 incentive was included in the initial mailing, and the third survey mailing was delivered via Federal Express. The pretest methodology achieved an overall response rate of 52.6 percent.


We found higher response from addresses located in Census blocks with fewer African Americans, fewer Hispanics, and more stable households (defined as those at the same address for a year or more). The methodology also worked better in regions outside the Northeast. Based on a study of response propensity in the pretest, we anticipate that households with a higher response propensity are more likely to be in lower-crime areas. In order to ensure that our Field Test estimates are reliable, we plan to monitor response rates based on the demographics most correlated with victimization; these include gender, age, race-ethnicity, educational attainment, and income. We will use a combination of survey data and block-level Census data to monitor response across groups known to have higher victimization rates.


Expected Response Rates

The pretest was based on a nationally representative sample of 2,500 mailing addresses and is the basis for our expected response rate for the proposed CS. For both the ILS and the PLS, a completed questionnaire was defined as a questionnaire with at least one completed item. The pretest obtained an overall minimum response rate of 50.4% using a mail questionnaire and mail follow-up (no telephone follow-up) for nonresponse. The CASRO response rate was 54.5%.

The data collection methodology for the pretest included a wave 1 survey mailed using first-class USPS. The package included a cover letter introducing the study and a $2 incentive. A follow-up postcard reminder was mailed about a week later to all sampled households. A wave 2 survey was mailed only to nonrespondents about 2 weeks after the postcard reminder. A final wave 3 survey was mailed to nonrespondents using Federal Express, about 3 weeks after the wave 2 mailing. Data collection was concluded 4 weeks after the final mailing. The final distribution of the sample was:

Completed questionnaires                     1,151
Nondeliverables (postal returns)               217
Refusals / blank questionnaire returned         34
No questionnaire returned                    1,098
Total                                        2,500


The response rate of 50.4% is calculated as 1,151/(2,500 − 217), where only the questionnaires returned as undeliverable by the USPS are assumed to be ineligible. The CASRO response rate assumes an eligibility rate of 84.5% for the 1,098 questionnaires not returned, based on the eligibility rate for the sampled addresses whose eligibility status is known, i.e., (1,151 + 34)/(1,151 + 34 + 217) = .845. The CASRO response rate is calculated as 1,151/(1,151 + 34 + .845 × 1,098) = .545. The questionnaires used in the proposed CS will be similar to the ILS and PLS questionnaires used in the pretest. The follow-up procedures for the proposed Field Test will involve at a minimum the level of effort expended for the pretest (see Data Collection Procedures). In the pretest, 92% of the sampled addresses were located in a CBSA. We therefore believe the pretest is a realistic predictor of the response rate we might expect for the proposed Field Test.
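
For transparency, the disposition arithmetic above can be reproduced directly; the counts come straight from the pretest disposition table.

```python
completes, postal_returns, refusals, no_return = 1151, 217, 34, 1098
total = completes + postal_returns + refusals + no_return              # 2,500

# Minimum response rate: only postal returns treated as ineligible
rr_min = completes / (total - postal_returns)                          # 0.504

# CASRO: unknown-eligibility cases weighted by the known eligibility rate
known = completes + refusals + postal_returns
elig_rate = (completes + refusals) / known                             # 0.845
rr_casro = completes / (completes + refusals + elig_rate * no_return)  # 0.545

print(f"minimum RR = {rr_min:.3f}, eligibility = {elig_rate:.3f}, CASRO RR = {rr_casro:.3f}")
```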

Addressing Final Non-Response

We will use three approaches to identify and adjust for nonresponse bias: (1) response propensity analysis to profile the nonrespondents, (2) comparison of respondents to known population data, and (3) statistical weighting adjustments (described above).

Response Propensity Analysis

We will use logistic regression to model response propensity as a function of variables that are available for both survey respondents and nonrespondents. These include the questionnaire type (ILS vs. PLS), whether the address had a matching landline phone number from the vendor, the household’s location (central city vs. remainder of the CBSA, crime stratum), and demographic characteristics of the block and block group containing the sampled address, using Census 2010 and 5-year American Community Survey (ACS) data. Demographic characteristics may include the percent of the population over age 50, percent Black non-Hispanic, percent Hispanic, and percent male, and the percent of households that are owner-occupied (as opposed to rented), one-person, below the poverty level, own a vehicle, or are living at the same address as one year ago.

In the pretest, the logistic regression model found that completed questionnaires were more likely to come from households with a landline phone number available from the vendor, households located in blocks with a lower percent Hispanic or Black population, and tracts with a higher percent of households at the same address one year ago.
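
A hedged sketch of such a propensity model on simulated data follows; the covariates are placeholders for the frame and Census/ACS variables listed above, and the pretest model was of course fit to actual case-level data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 4000
X = np.column_stack([
    rng.integers(0, 2, n),        # ILS (1) vs PLS (0)
    rng.integers(0, 2, n),        # landline number matched by vendor
    rng.uniform(0, 1, n),         # block percent Hispanic
    rng.uniform(0, 1, n),         # block-group percent at same address 1 year ago
])
# Simulated "true" response mechanism mirroring the pretest findings
logit = -0.3 + 0.6 * X[:, 1] - 0.8 * X[:, 2] + 0.9 * X[:, 3]
responded = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, responded)
print("coefficients:", np.round(model.coef_[0], 2))

# Predicted propensities could then define weighting cells (e.g., quintiles)
propensity = model.predict_proba(X)[:, 1]
cells = np.digitize(propensity, np.quantile(propensity, [0.2, 0.4, 0.6, 0.8]))
```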



Nonresponse Bias Analysis

The nonresponse bias analysis will consist of identifying factors associated with both response propensity and the probability of reporting a crime in the questionnaire. To the extent these are the same variables, the potential for nonresponse bias exists in the survey estimates. Logistic regression will be used to model the propensity to report a crime for the survey respondents as a function of the respondent’s demographic characteristics as reported on the questionnaire, and neighborhood block or block group characteristics from the 2010 Census or ACS.

For example, in the pretest the logistic regression model found that the percent of households below the poverty level in the block group and whether the household has a landline phone number available from the vendor were highly correlated with the probability of reporting a crime. The availability of a landline phone number was also highly correlated with response rates. Households without a matching landline phone number are more likely to be cell-phone only, and it is well-known that cell-phone only households are more likely to be lower income. Since poverty rates are also positively correlated with being victimized by crime, the possibility for nonresponse bias exists: the survey estimates may be underestimates of the true crime rates unless poverty status or household income is accounted for in the estimation.

We will also compare the set of NCVS CS household respondents to the Census 2010 or 5-year ACS household totals for the CBSA by household demographic characteristics, using the respondents’ self-reported characteristics from the questionnaire. Both unweighted and weighted comparisons will be made using the adjusted weights (see above) to see how effectively the weighting adjustments restore the sample distribution. Confidence intervals around the respondents’ estimated proportions for demographic categories will indicate whether the respondents differ significantly from the CBSA population.


B4. Describe any tests of procedures or methods to be undertaken.


The CS Field Test will include a number of tests, including:

  • Instrument approach. As described earlier, we will be testing two instrument approaches. One, the ILS, asks households to enumerate and describe recent incidents of victimization. The second instrument, the PLS, asks households to describe whether each adult in the household has experienced any types of victimization.

  • Influence of non-crime questions. A small series of questions is included about safety and the local police. The rationale for including these questions is that they are relevant to all respondents, and so should help reduce avidity bias (that is, the tendency for those with more experience with the survey topic to be more inclined to respond). In addition, research indicates that safety and police contact questions can help promote recall of victimization (Cowan et al., 1978).5 In order to test the impact of the non-crime questions on crime reporting and on response rates, half of the households will receive a questionnaire where the non-crime questions are asked on the initial page of the survey, and half will receive a questionnaire where the non-crime questions are asked on the final page.

  • Impact of bilingual materials on response. All addresses in Census tracts with a high percent of Spanish speakers will be sent survey packets containing materials in both English and Spanish. The same will be true for households where the last name is identified as likely Hispanic. The remaining households will be eligible for an experiment on the impact of bilingual materials on response; 2,000 such households will be selected to receive bilingual materials.

  • Use of automated telephone reminders. The sampled addresses will be sent to a directory-assistance vendor, which will append telephone numbers where available. Addresses where a telephone number is unknown will be mailed a postcard reminder. The remainder will be assigned to one of two treatments: half will receive a postcard reminder, and the other half will receive an automated telephone reminder.



B5. Contacts for Statistical Aspects and Data Collection


Westat will collect all the information for the CS. Dr. Mike Brick, Westat Vice President and Director of Westat’s Survey Methods Unit, and Dr. Sharon Lohr, Westat Vice President, provided consultation on the statistical and methodological aspects of the CS Field Test. Dr. Brick can be reached by telephone at 301-294-2004 or via email at [email protected]. Dr. Lohr can be reached at 301-738-3512 or [email protected].






1 OMB Bulletin No. 13-01, February 2013. https://www.whitehouse.gov/sites/default/files/omb/bulletins/2013/b-13-01.pdf

2 These are the Chicago-Naperville-Elgin, IL-IN-WI Metropolitan Statistical Area, the Los Angeles-Long Beach-Anaheim, CA Metropolitan Statistical Area, and the Philadelphia-Camden-Wilmington, PA-NJ-DE-MD Metropolitan Statistical Area.

3 Lauritsen, Janet L., and Maribeth L. Rezey (2013). “Measuring the Prevalence of Crime with the National Crime Victimization Survey.” Washington, DC: U.S. Department of Justice, Bureau of Justice Statistics. NCJ 241656.

4 Here the crime is based on the perception of the respondent and may not be considered an NCVS reportable crime.

5 Cowan, C.D., L.R. Murphy, and J. Wiener (1978). “Effects of Supplemental Questions on Victimization Estimates from the National Crime Survey.” In Proceedings of the Section on Survey Research Methods, American Statistical Association.


