


U.S. Department of Justice


Office of Justice Programs


Bureau of Justice Statistics

Washington, D.C. 20531


MEMORANDUM



To: Lynn Murray

Clearance Officer

Policy and Planning Staff

Justice Management Division


Through: William Sabol

Acting Director

From: Michael Planty

Chief, Victimization Statistics


Date: June 4, 2013


Re: BJS Request for OMB Clearance for Cognitive Testing under the National Crime Victimization Survey (NCVS) Redesign Generic Clearance, OMB Number 1121-0325.


The Bureau of Justice Statistics (BJS) requests clearance for cognitive interviewing tasks under the OMB generic clearance agreement (OMB Number 1121-0325) for activities related to the National Crime Victimization Survey Redesign Research (NCVS-RR) program. BJS, in consultation with Westat under a cooperative agreement (Award 2010-NV-CX-K077, National Crime Victimization Survey on Sub-National Estimates), is conducting the next phase of research to develop a methodology for a low-cost alternative to the NCVS. The next phase of research includes an address-based sampling (ABS) design and relies solely on mail-based surveys to collect local-area estimates of criminal victimization.


In accordance with the generic agreement, BJS is submitting to OMB for clearance the materials for the pretesting activities associated with the companion survey. BJS wishes to use the approved generic clearance for cognitive interviewing to develop and test questionnaires for this collection. The overall goal of the cognitive tests is to ensure that the instruments encourage sampled households to respond to the survey, allow respondents to navigate the instruments effectively, and accurately convey the intended meaning of questions about victimization and related topics. This initial clearance will be followed by a request for a field test to be conducted in 2014-2015, which we describe briefly later in this document.



Overview


Since 2008, BJS has initiated numerous research projects to assess and improve upon the core NCVS methodology. During 2009, BJS met with various stakeholders, including the Federal Committee on Statistical Methodology and representatives from state statistical analysis centers, state and local law enforcement agencies, the Office of Management and Budget, and Congressional staff, to discuss the role of the NCVS. The discussions included the need for sub-national estimates and the challenges and potential methodologies for providing these estimates. The purpose of the current research is to develop and evaluate a cost-effective sub-national companion survey of victimization.


In the first phase of research of the NCVS Companion Survey (CS), BJS attempted to use the existing NCVS instruments in a computer-assisted telephone interview (CATI) environment using an address-based sample (ABS). Based on the results of this research we have concluded that it is extremely difficult, if not impossible, to replicate NCVS estimates of victimization rates using a low-cost data collection approach. The NCVS is a large and complex survey, with many potential sources of relative bias compared with low-cost alternative data collection approaches, including nonresponse, mode effects, and house effects in data collection and processing. It does not seem feasible to control for all of these differences in a low-cost vehicle, regardless of the sample design or data collection mode(s). NCVS estimates of victimization rates are very sensitive to many of these factors, so estimates may change substantially when even small deviations occur in the survey process. Truman and Planty (2012) describe how the victimization estimates in the core NCVS changed when the sample size increased and new interviewers were needed; Rand (2008) reviews some effects when the sampled geographic areas changed and the data collection software was revised. Complete details on the initial phase of research are provided in Appendix A (the report of the Pilot research findings).


In the next phase of research we plan to develop a low-cost, self-administered approach that can support sub-national estimates of crime victimization, using a less complex instrument than the current NCVS. The goal would be to generate a survey that could parallel NCVS and Uniform Crime Report (UCR) estimates over time, rather than replicate either of them, and could be used to assess the impact of local initiatives. The first step in this phase of research is instrument design, including cognitive testing of draft instruments.


Instrument design


Based on the findings from the first phase, for the next phase of R&D in the NCVS Companion Survey (CS), BJS is investigating a low-cost option that BJS and local jurisdictions could use to assess changes in victimization over time or across jurisdictions. Given the unique complexities associated with the Census Bureau’s NCVS data collection and processing requirements, the goal is to provide BJS and jurisdictions with an instrument and a methodology that can be implemented at the local level. BJS or others could also use this methodology across jurisdictions for cross-sectional comparisons. CS-level estimates would not equate directly to those from the NCVS, and CS data would not be combined with NCVS data for blended estimates. Instead, the CS would support estimates of change over time within an area, or of cross-sectional differences across areas, comparable to similar comparisons using the core NCVS.


We are considering including questions that assess concepts beyond crime victimization, such as perceptions of individual and community crime and safety, fear of crime, interactions with the police, and perceptions of police performance or attitudes toward the police. Such questions serve two different purposes: (1) to engage respondents who have not been crime victims; and (2) to provide additional information of interest to BJS or local jurisdictions.


At this time we are considering two different instruments, each of which would rely on a household-level respondent to report for all household members:


  • Abbreviated Person-Level Instrument. This first questionnaire would be an abbreviated instrument that departs considerably from the core NCVS. The goal is to develop a self-administered instrument that focuses on one person within a household at a time and is sufficiently brief to increase the likelihood of survey cooperation. We call this instrument the Person-Level approach because it asks questions about each adult in the household. Appendix B includes the current draft of the person-level instrument. The strengths of this instrument are its simplicity, brevity, and ease of administration; its weakness is that the resulting estimates would be only person-based, not incident-based.


  • Detailed Incident-Level Instrument. We call the second instrument the Incident-Level approach because it asks about victimization incidents and associates them with household members. In this approach we use a detailed set of questions to collect data on the “most recent” property and personal crimes. This instrument is more complex, and thus may be difficult to self-administer. If successful, it would yield estimates similar to those from the core NCVS. Appendices C and C2 include the current draft of this instrument (one document contains the questionnaire, while the second contains a flap that would allow respondents to refer to the household matrix). The strength of this instrument is that it should yield a level of detail closer to that of the core NCVS; its weakness is that it is complex and may be difficult for respondents to self-administer.


Survey Content – Crime Victimization. Both instruments would rely on proxy reports within households. The person-level approach would identify whether each adult household member was a victim of a crime in the reference year, but it does not support estimating the number of incidents of victimization for the person. In addition to property crimes for the household, the instrument will collect the major categories of victimization. In the incident-level approach we ask a household informant to answer relatively detailed questions about the most recent violent crime and the most recent property crime that occurred to household members in the reference year, and an abbreviated set of questions about other recent incidents. We recognize that the use of proxy respondents could result in an underestimation of crime overall and of certain crime types.


Estimates from the person-level instrument would focus on victimization prevalence: the instrument produces estimates of prevalence rates, but it does not capture the number of incidents, so victimization rates cannot be estimated. For example, the instrument would support the following person-level estimates (either cross-sectional or change over time):


  • Percentage of adults reporting1 being attacked by an offender with a weapon in the past year;

  • Percentage of adults reporting being threatened with a weapon in the past year;

  • Percentage of adults reporting being attacked by an offender without a weapon in the past year;

  • Percentage of adults reporting something taken directly from them by force in the past year;

  • Percentage of adults reporting a sexual attack (including attempts) in the past year;

  • Percentage of adults reporting an attack by an intimate partner or household member in the past year;

  • The above estimates could be subset by gender, race/ethnicity, and age.


The person-level instrument also includes questions about property crime:


  • Percentage of households reporting a break in (including attempts) in the past year;

  • Percentage of households reporting property theft in the past year;

  • Percentage of households reporting vehicle theft in the past year;

  • The above estimates could be subset by tenure, length of residency, and household income.


The incident-level instrument could support estimates of the number of incidents and victimizations, and in that sense is more similar to the core NCVS. This instrument also includes general questions about vandalism, bank fraud, and identity theft. It collects information about the four most recent personal crimes that household members experienced, and asks the respondent to indicate which household members were victims of each crime. At this time we anticipate that the incident-level instrument will be able to produce the following incident-level and person-level estimates:


  • Number of serious violent crimes and percentage of adults experiencing a violent crime, in the past year;

  • Number of simple assaults and threats and percentage of adults experiencing them in the past year;

  • Number of property crimes and percentage of households experiencing a property crime in the past year.
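
To make the distinction above concrete, the following is a minimal sketch, in Python, of how a prevalence rate (supported by the person-level instrument) differs from a victimization rate (requiring the incident counts that only the incident-level instrument collects). The record layout and counts are hypothetical, not drawn from either draft instrument.

```python
# Hypothetical person records: each lists how many victimization
# incidents that person experienced in the reference year. This layout
# is invented for illustration only.
persons = [
    {"id": 1, "incidents": 0},  # not victimized
    {"id": 2, "incidents": 1},  # victimized once
    {"id": 3, "incidents": 3},  # repeat victim
    {"id": 4, "incidents": 0},  # not victimized
]

n = len(persons)

# Prevalence rate (person-level instrument): the share of adults
# victimized at least once. Repeat victimizations do not raise it.
prevalence = sum(1 for p in persons if p["incidents"] > 0) / n

# Victimization rate (incident-level instrument): incidents per 1,000
# adults. Repeat victimizations do raise it.
victimization_rate = 1000 * sum(p["incidents"] for p in persons) / n

print(f"Prevalence rate: {prevalence:.0%}")                   # 50%
print(f"Victimization rate: {victimization_rate:.0f}/1,000")  # 1000/1,000
```

Because the person-level instrument records only whether each adult was victimized, it can yield the first figure but not the second; respondent 3’s repeat victimizations are invisible to it.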


One of the issues to be explored in cognitive testing, and conditionally in the later field test, is the extent to which the incident-level instrument can collect information to support more detailed type-of-crime (TOC) classification.


Survey Content – Non-Crime Information. The person-level instrument currently includes questions not specifically about crime victimization. There are two main goals for these questions: (1) to gain the interest of non-victimized respondents, and (2) to provide additional information of interest to local jurisdictions.


Increasing Survey Response. We know from recent research that there can be a “topic salience bias” in surveys: those who have interest in and experience with the survey subject are more likely to respond to the survey than those who lack these attributes. An example comes from a national fishing survey, where anglers are much more likely to respond than non-anglers, resulting in overestimates of fishing prevalence. We speculate the same could be true for a survey to estimate victimization rates. The goal of including engaging questions about safety in the neighborhood is to have items at the beginning of the survey that are relevant to all respondents, increasing the likelihood that respondents who have not experienced a victimization will complete and return the survey.


Asking respondents about their fear of crime or attitudes toward the police may be an effective way of eliciting participation. Though the public may exaggerate the risk of serious criminal victimization (Warr, 2000), asking respondents about safety in their communities may resonate with their concerns and encourage them to respond. Moreover, survey data suggest that there is a gap between public expectations of the police and the police force’s ability to deliver services (Skogan, 1990). Asking respondents questions about their perceptions of the police may serve as a “hook” that encourages them to participate in the survey, and collecting such data may be valuable to local areas in and of itself.


Utility to Local Jurisdictions. The second goal for these questions is to generate estimates that would be relevant to a local jurisdiction. The current questions included in this section of the person-level survey are all adapted from subnational crime surveys (including the Minnesota Crime Survey, the Utah Crime Survey, and the Questionnaire on Crime and the Oswego, IL Police Department for Citizens). If this survey is implemented in specific jurisdictions, then the questions might be revised to be more policy-relevant to those jurisdictions.


The questions will be tested during the cognitive interviews to assess whether the selected questions are relevant to respondents. We will explore whether there are other issues respondents identify as more relevant to them. We will also explore their reactions to starting the survey with questions about crime as opposed to questions about safety.


If the person-level questionnaire performs well in cognitive testing, we would include it in the later field test after modifying it based on the cognitive lab results. In this field test we tentatively plan a split-ballot experiment to compare two versions of the instrument: (1) version 1 will include the engaging questions before asking about victimization, and (2) version 2 will exclude these questions and ask about victimization immediately. This design will allow us to assess whether including the questions changes the response rate, as well as whether the additional content affects victimization estimates.
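
As an illustration of how that response-rate comparison might be evaluated, here is a minimal sketch of a two-proportion z-test. The mailout and return counts below are invented; the actual analysis plan will be specified in the field test clearance request.

```python
import math

def two_proportion_z(returns_a, mailed_a, returns_b, mailed_b):
    """Two-sided z-test for a difference in return rates between two
    split-ballot versions, using the pooled-variance form."""
    p_a, p_b = returns_a / mailed_a, returns_b / mailed_b
    pooled = (returns_a + returns_b) / (mailed_a + mailed_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / mailed_a + 1 / mailed_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: version 1 (with engaging questions) vs. version 2.
z, p = two_proportion_z(420, 1000, 370, 1000)  # 42% vs. 37% returned
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.29, p = 0.022
```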


Survey Content – Informed Consent. At this time we recommend covering only the adult population with these instruments. The data will be maintained as confidential and will not be released by our contractor, except under court subpoena. The list of Frequently Asked Questions on the back of the survey cover letter will address handling of the data and confidentiality.




Field test mode and sample design


In Phase 1 of this research effort, we utilized a mail screener to obtain telephone contact information, then a standardized, computer-assisted telephone interview (CATI) to collect core NCVS data. The response rate from this approach was extremely low, consistent with the recent experience of many national telephone surveys. As the number of households with landlines plummets, as call-screening technology evolves, and as telemarketing calls continue to permeate U.S. households, there is a growing reluctance to answer calls from any unrecognized telephone number. Also, when residents do answer the phone, new social norms mean that many Americans simply hang up without allowing the interviewer to engage them. In Phase 1 of this research, 43 percent of usable telephone numbers resulted in a refusal to participate, while for another 25 percent, no one in the household ever answered the telephone.2


The proposed design of this next phase of research would be a mail-based survey using ABS over two time periods. One of the Phase 1 approaches used the mail to contact households and obtain information about recent events. This mail approach appeared to have merit in terms of the information collected and the response rate, and suggested households might be willing to respond to such a survey. The goal would be to determine whether a self-administered mail-based survey could reliably assess changes in local-area victimization over time and cross-sectional differences across local areas. BJS will select the largest 20-40 NCVS PSUs for this next phase of research. The rationale for selecting the largest PSUs is to ensure that we have comparable data from both the Uniform Crime Reporting (UCR) Program and the NCVS. If possible, we will consider choosing some PSUs that participate in the National Incident-Based Reporting System (NIBRS) so that we have detailed incident-based police data for comparison. We also want to include PSUs where we expect some variability in crime rates based on the UCR. The design will likely include two cross-sectional samples, but with some overlap between wave 1 and wave 2 of the survey (to support the measurement of change over time). If the approach works, then BJS will consider follow-up testing with smaller PSUs to ensure that the approach is generalizable nationally, not only to larger metropolitan areas.


Unlike the NCVS, and unlike the phase 1 research of the NCVS-CS study, this phase of research would not include an interviewer-administered survey for the purpose of generating estimates. All estimates would be based solely on responses to the mailed, self-administered questionnaires. The goals of this phase of research are to assess each of the approaches in terms of data quality and effectiveness at supporting local jurisdictions in assessing victimization change over time.


Assessing Reliability and Validity of the Instruments. The planned field test will include follow-up telephone interviews to assess the validity of the instruments and nonresponse bias. We also plan to assess unit response rates, item missingness, proper use of skip patterns, logical consistency within the instrument, and the effects of proxy response. We will also compare the victimization rates from the two instrument options with each other, with the NCVS, and with the UCR to assess whether one of the instruments is better at tracking these extant data sources.
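
As one example, skip-pattern checks on the returned questionnaires can be automated. The sketch below assumes a hypothetical item layout (the item names, the skip rule, and the file name are all invented) in which a “No” at a screener item means its follow-up items should be left blank:

```python
import csv

# Hypothetical skip rules: if the screener item has the listed answer,
# the dependent items should be blank. Item names are invented.
SKIPS = {
    "Q5_attacked": ("No", ["Q5a_weapon", "Q5b_injury"]),
}

def skip_errors(row):
    """Return the dependent items a respondent filled in despite a skip."""
    errors = []
    for screener, (skip_answer, dependents) in SKIPS.items():
        if row.get(screener) == skip_answer:
            errors.extend(d for d in dependents if row.get(d, "").strip())
    return errors

with open("returns.csv", newline="") as f:  # hypothetical data file
    for row in csv.DictReader(f):
        errs = skip_errors(row)
        if errs:
            print(row["case_id"], "filled skipped items:", errs)
```

Aggregating these error counts by item would indicate which skip instructions respondents miss most often.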


When we submit a request for the field test, we will describe this process in greater detail.


Cognitive Testing


For the current request, we are asking for clearance to conduct cognitive testing of the two instrument options. In a broad sense, the goal of the cognitive interviews is to identify and correct features of the instruments that seem to discourage overall response. The cognitive test will also assess the language used, the layout and design, and question ordering (e.g., “order effects”). Table 1 lists more detailed goals of the cognitive interviewing.


Table 1. Major Objectives of Cognitive Testing for the Mail-only NCVS-CS


  1. Proxy response

    1. In the Incident Approach, do respondents think about other people or focus on themselves?

    2. Does the Person Approach work better for focusing on others?

    3. How comfortable/confident are respondents about reporting for others, particularly for young adults and teens?

  2. Multiple incidents

    1. How do respondents think about the most recent or the most serious incident – how do they decide which one to report?

    2. In the Incident Approach, how well do respondents handle moving from one incident to another?

  3. Multiple eligible people in household

    1. In the Incident Approach, how well are respondents able to assign person numbers for the incidents they report?

    2. In the Person Approach, do respondents include all eligible people and fill out sections for them?

  4. Distinguishing between violent crime and thefts/break-ins in the Incident Approach

    1. Are respondents able to associate crimes with one broad category or the other?

  5. Effectiveness of probes

    1. Do the probes seem to capture all types of crimes that we intend to capture?

  6. Open-ended questions

    1. What do respondents think we are asking for in the incident descriptions? How could it be better focused or get them to write more?

  7. Questions to support type-of-crime (TOC) coding

    1. Is respondents’ understanding of these questions congruent with what is needed for coding? (We may need to write up intent statements for these questions.)

  8. Order effects

    1. Does asking for property crimes first (person approach) affect reporting of personal crimes, or vice versa for the incident approach?


Once BJS receives OMB approval, Westat will recruit no more than 50 participants for these cognitive interviews. The plan is to interview respondents locally (in the DC/Baltimore area) as well as in other metropolitan areas. The current schedule calls for conducting these cognitive interviews in August and September, assuming OMB approval in early July. So that we can thoroughly test the crime questions, we plan to over-recruit participants who were victims of crime in the past year.


Recruiting. BJS will recruit individuals from households where members have been recent crime victims, as well as from households where there are no recent crime victims. The former type of respondent will help us explore the crime questions, and the latter type of respondent will help us explore motivations for survey response among non-crime victims.


Recruiting will be conducted using online advertising outlets, such as Craigslist. We plan to conduct a small number of cognitive interviews (about 5-6 with each instrument), revise the instruments based on the findings, then conduct another set of interviews. This process would continue until no major issues are uncovered. We would not exceed 50 interviews, which is the maximum burden we are requesting. Based on input from OMB, we will use an incentive of $40.


Testing Protocol. After signing a consent form (Appendix D), each cognitive interview respondent will be assigned a primary instrument to review (either the incident-level or the person-level). The moderator will provide a full packet, as the respondent might receive it in the mail, including an outer envelope, cover letter (Appendix E), instrument, and business-reply envelope. The moderator will ask the respondent to review the letter and complete the instrument as he or she would if it had arrived in the mail. As they complete the survey, respondents will be invited to comment on anything that concerns them, including formatting, wording, and sensitive or confusing content. All cognitive interviews will be audio and video recorded for note-taking purposes only.


Recruits who reported no household crime experience will be given a scenario/vignette to use while answering the questions. Examples of scenarios include:


  • In January 2013, you were on a week-long business trip. When you arrived home, you saw that your front door had been broken open. When you walked in, a young man ran up to you, knocked you down, and ran out of the house. You discovered that about $5,000 worth of electronic equipment and $1,000 in cash were missing. You immediately reported it to the police. You did not suffer any injuries from being knocked down.


  • In January 2013, your next-door neighbor had a party that was very loud and disruptive. Another member of your household (e.g., spouse/ roommate) went to your neighbor to complain. Your neighbor called your [spouse/roommate] a whiner and punched him/her in the face. They were taken to the hospital and treated for a broken nose.


Once the respondent has completed the instrument, the moderator will walk through the answers with the respondent. The moderator will use general probes rather than directive probes, such as “What does this answer mean?” and “Can you say more about that?” If it is clear that the respondent made a mistake, such as missing a skip pattern or changing an answer, the moderator will probe to determine what happened and may investigate potential solutions. Similarly, the moderator will probe any questions or text where the respondent’s facial expressions suggested difficulty, confusion, or concern. Probes might include “You seemed to hesitate here at Q___. What were you thinking about?” For each survey item there may be item-specific probes, for example, probes about specific words, such as “force,” “threat,” “attack,” or “weapon.”


Once the moderator and respondent have walked through the individual questions in the instrument, the moderator will ask global questions about it. This might include asking the respondent to discuss who in the household should complete the questionnaire and mail it back, or asking whether any part of the instrument was inappropriate or a “show stopper” that would preclude them from completing the survey and/or mailing it back. The moderator will also investigate the issue of proxy reporting and how the respondent would answer for other members of the household.


Finally, if time permits, the respondent may be asked to skim the other instrument and compare it with the reviewed survey. The respondent may be asked to discuss impressions of the two and potentially express a preference.


Approach. BJS has asked Westat to conduct these interviews iteratively: when a significant flaw is identified, testing will be halted, the instrument revised, and then testing re-started. This method is common in usability testing. It conserves resources and usually leads to a series of flaws being discovered and corrected. Moreover, respondents in the cognitive lab tend to focus on the most obvious problems and not notice more subtle issues. The iterative approach allows the more obvious issues to be repaired early in the testing process so that more subtle and nuanced findings can be identified later in the process.


Burden Hours for Cognitive Testing


The estimated burden for the cognitive interviews is shown below. We anticipate that interviews will take no more than 90 minutes, and respondents will be provided with a $40 reimbursement as compensation for their time. We recommend this higher incentive because respondents will need to provide their own transportation to a test site and will be asked to spend one and a half hours on testing tasks.


Estimated Burden of the Cognitive Interview Task

  Maximum number of respondents:       50
  Number of responses per respondent:  1
  Time per response:                   1.5 hours
  Total time across all respondents:   75 hours
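
That is, the maximum total burden is 50 respondents × 1 response × 1.5 hours per response = 75 hours.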


The NCVS generic clearance allocated a predetermined combination of sample cases and burden hours that could be used for NCVS redesign efforts. The current sample size and burden hours for the cognitive testing fall within the remaining allocation. Following an evaluation of the instruments described here, BJS will request a separate clearance to conduct a full field test of the instruments in 2014-2015.


Appendices:

  • Appendix A: Report from the Pilot Phase of Research

  • Appendix B: Mail-based instrument, version 1 (person-level)

  • Appendix C/C2: Mail-based instrument, version 2 (incident-level)

  • Appendix D: Sample consent form

  • Appendix E: Cover letter

References


Rand, M.R. (2008). Criminal Victimization, 2007. NCJ 231327. Washington, DC: Bureau of Justice Statistics.

Skogan, W. (1992). Disorder and Decline: Crime and the Spiral of Decay in American Neighborhoods. University of California Press.

Truman, J.L., and Planty, M. (2012). Criminal Victimization, 2011. NCJ 239437. Washington, DC: Bureau of Justice Statistics. http://bjs.ojp.usdoj.gov/content/pub/pdf/cv11.pdf

Warr, M. (2000). “Fear of Crime in the United States: Avenues for Research and Policy.” In D. Duffee (ed.), Measurement and Analysis of Crime and Justice: Criminal Justice 2000, Volume 4. Washington, DC: National Institute of Justice.


1 Includes those who “reported” via a proxy.

2 These are unweighted percentages, and ineligible and unlocatable sample cases are removed from the denominator.


