Attachment 2. Report of SCV Cognitive Test Findings

Methodological Research to Support
the National Crime Victimization
Survey

Findings from the NCVS Mail Survey
Cognitive Testing Activities

August 2011

Submitted To
U.S. Department of Justice 
Office of Justice Programs 
Bureau of Justice Statistics 
 
Submitted By
RTI International 
P.O. Box 12194 
Research Triangle Park, NC 27709‐2194 
Telephone: (919) 541‐6000 

Table of Contents

1. Development of the Mail Survey Instrument
2. Preliminary Assessment of the Mail Survey Instrument
3. Phase 1 Cognitive Test Activities
   3.1 Round 1 Cognitive Testing
   3.2 Round 2 Cognitive Testing
4. Overall Findings and Conclusions from the Mail Survey Cognitive Testing Activities

Appendices

A. SCV Mail Survey Instrument
B. SCV Household Roster
C. SCV Mail Survey Cognitive Interview Guide

List of Exhibits

1. Summary of Mail Survey Revisions Resulting from Preliminary Cognitive Test Findings
2. Summary of Mail Survey Revisions Resulting from Round 1 Cognitive Test Findings

Findings from the NCVS Mail Survey Cognitive Testing Activities
This report summarizes the results of the cognitive testing of the mail survey instrument
developed for the National Crime Victimization Survey (NCVS) mode and incentive study being
conducted by RTI International under the title “Survey of Crime Victimization (SCV).” Section 1
provides a brief discussion of the development of the mail survey instrument. Section 2
describes the preliminary assessment of the draft mail survey instrument conducted prior to
OMB review, including findings from the assessment and areas targeted for further refinement
and testing. Section 3 describes the results of the Phase 1 cognitive testing activities, which
included two rounds of cognitive testing following OMB review and approval of the test protocol.
Finally, Section 4 provides overall findings and conclusions from the testing activities and
implications for the Phase 2 field test. Copies of the final version of the mail survey instrument,
the household roster, and interview guide used in the cognitive test are provided in Appendices
A, B, and C, respectively.
1. Development of the Mail Survey Instrument
To facilitate self-administration, RTI created a reformatted, single-instrument version of the
NCVS Screener and Crime Incident Report (CIR) for mail administration. This involved
reviewing each question and response set in the Screener and CIR, identifying with BJS the
items critical for crime classification, assessing the complexity of each item for self-administration via paper-and-pencil, and determining methods for simplifying the respondent
task by eliminating or revising complex skip patterns. Basic respondent demographic questions
from the NCVS Control Card were incorporated into the draft instrument, along with the
household roster items.
Preliminary assessment of the mail survey instrument (described in Section 2) identified target
areas for additional refinements. Additionally, to reduce survey length, questions about the
characteristics of the offender(s) (e.g., in a gang, drinking or on drugs), injuries or
hospitalizations resulting from the crime, steps taken to protect self or property during the crime,
and presence of others during the crime were removed from the instrument.
2. Preliminary Assessment of the Mail Survey Instrument
To inform refinements to the mail survey instrument, RTI conducted a preliminary cognitive
assessment of the instrument content and format. Cognitive interviews usually require a small
number of participants, typically fewer than 10. Ackerman and Blair (2006) note that the number
of cognitive interviews performed for any given project is generally somewhat small due to
budget and schedule constraints. Testing is generally done in an iterative fashion with
subsequent rounds of cognitive interviews testing the materials revised in response to findings
from the first round of testing.
The survey literature does not provide explicit guidance on the optimal number of cognitive
interviews or the number of pretest iterations. The current NCVS questions have been
cognitively tested, but reformatting these questions for a self-administered mail survey was
expected to present substantial challenges. Preliminary cognitive interviews were envisioned as
a method of identifying specific target areas on which to focus additional developmental work. A
small number of cognitive interviews were conducted to provide insight into the viability of
administering the questions in a self-administered format. The issues identified during the
preliminary testing indicated certain problem areas for mail administration. Additional testing
was undertaken to determine whether the mail instrument is a viable option for the field test.

Between December 2010 and early January 2011, 9 cognitive interviews were conducted at RTI
by survey methodologists experienced in cognitive interviewing methods. Participant recruitment
for the cognitive interviews was carried out by RTI using advertisements placed on Craigslist
for the Raleigh-Durham, NC, area and in RTI internal classifieds, and through postings at local
public health departments, domestic violence shelters, and other similar locations. Interested
candidates were first screened to determine their eligibility for the cognitive interview. The
screening script contained questions on crime experiences (similar to the Screener) as well as
questions on basic demographic characteristics in an effort to recruit a diverse mix of
participants.
RTI staff and their family members were not eligible to participate in the cognitive test.
Additionally, persons who had not experienced a crime in the past 6 months, were under age
18, or did not speak English were excluded. To ensure participants would be eligible to fill out
the majority of the SCV questionnaire, selected candidates had at least one crime experience
that was a focus of the survey instrument (e.g., theft, break-in, or attack of any kind). Additionally,
candidates with a variety of crime experiences were chosen in order to test as many different
questions and routing patterns in the mail survey instrument as possible. Cognitive interview
subjects were selected from the pool of screened, eligible candidates.
Cognitive interviews were conducted in person at RTI's main campus in North Carolina. All
participants signed a consent form, which was read to them by the interviewer, prior to
beginning the interview. A copy of the form was provided for the participant's records. The
consent form included a separate request to audio record the interview to facilitate note-taking,
with recordings to be destroyed shortly after the summary reports were prepared and analyzed.
All reports were written in a common summary shell and exported into Excel so that responses
to the same questions could be compared across participants, as illustrated in the sketch below.
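
As an illustration of that aggregation step, the sketch below shows one hypothetical way the per-participant summaries could be combined, assuming each summary shell was saved as a CSV file with "question" and "notes" columns in a summaries/ directory. The file names, columns, and output workbook are illustrative assumptions, not the actual RTI summary shell.

    # Minimal sketch (hypothetical file layout): combine per-participant
    # cognitive interview summaries into one wide table, with one row per
    # question and one column per participant, so that responses to the same
    # item can be reviewed side by side.
    import glob

    import pandas as pd

    frames = []
    for path in sorted(glob.glob("summaries/participant_*.csv")):
        # e.g., "summaries/participant_01.csv" -> participant ID "01"
        pid = path.split("participant_")[-1].removesuffix(".csv")
        df = pd.read_csv(path)  # assumed columns: "question", "notes"
        frames.append(df.assign(participant=pid))

    combined = pd.concat(frames, ignore_index=True)
    # Pivot so each participant's notes on the same question share a row.
    wide = combined.pivot(index="question", columns="participant", values="notes")
    wide.to_excel("cognitive_summaries.xlsx")  # Excel export requires openpyxl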
During the cognitive interview, participants were first asked to complete the hardcopy mail
survey instrument on their own. To maximize confidentiality during the interview, participants
were instructed to record only first and last initials when answering the household roster items
on the mail survey, and to enter “Xs” for their phone number. After completing the screening
portion of the survey, they participated in a guided think-aloud process with the interviewer in
which they were asked to discuss individual questions and response sets in the
instrument to gauge their ease or difficulty in completing the survey, their ability to successfully
navigate through the instrument (for example, following skip instructions and marking answer
choices), and their understanding of definitions and terminology in the survey.
Next, participants were asked to continue with the rest of the survey (first CIR, followed by
additional CIRs where applicable) and when finished, went through the same think-aloud
process, discussing any problems they encountered in completing the survey. The interviews
averaged 86 minutes and included a review of a number of questionnaire items, including some
that had been cognitively tested previously for the NCVS. This was done to look for any context
effects that may have been introduced by the removal of some items and to gauge how well
the items worked in a self-administered format. The Screener portion of the survey averaged 7
minutes, while the first CIR took 13 minutes to complete (the average length for the subsequent
CIRs was much less, about 7 minutes for the second and 8 minutes for the third CIR among
respondents who experienced more than one crime). All cognitive interview participants
received $40 cash as compensation for their time.
The results of the preliminary cognitive testing are summarized below:

• Respondents often made errors in filling in the household roster questions. They did not read or follow the provided instructions, and included themselves or other persons who should have been excluded from the roster.

• Respondents had difficulty following skip patterns on a number of items. Some questions could not be easily located when skipping, or respondents failed to see and follow provided skip instructions. In particular, respondents found it problematic when the skip patterns required them to turn multiple pages and locate a question that was somewhere other than the top left corner of the page, or when a question involved different skip patterns depending on the answer the respondent selected.

• Respondents did not understand the meaning of some of the question terminology, including "evidence," "incident," "dwelling," and "offender." Additionally, there was confusion about how to answer some CIR questions when the crime incident occurred somewhere other than the respondent's home (e.g., at work).

• Respondents had difficulty providing the age of household members in the roster and understanding that the income question was seeking annual income for the household.

• Respondents had difficulty keeping track of the specific crime incident they were being asked to provide details for in the CIR. In some cases, respondents combined multiple crime incidents into one CIR, or tried to split out crimes that occurred in the same incident across multiple CIRs. Additionally, the questions and skip instructions specific to crime series (multiple incidents of the same type of crime) were not easily understood.

• Overall, respondents expressed concern about the length and complexity of the hardcopy survey instrument, including the number of questions they were being asked to answer and the wordiness of some items.

• Finally, the test identified a number of items where consideration should be given to clarifying the intent of the question and/or expanding or refining the response options based on the information provided by the cognitive interview respondents.

In response to these preliminary test results, additional cognitive testing activities were planned.
Exhibit 1 summarizes the mail survey revisions resulting from the preliminary cognitive test
findings.
 

Exhibit 1. Summary of Mail Survey Revisions Resulting from Preliminary Cognitive Test Findings

Finding: Errors in filling out the household roster
Revision: The household roster and questions about the number of children in the household were removed from the mail survey. Enumeration of household members will be done in the CAPI and CATI interview with the household respondent. Only basic demographic information about the mail survey respondent was retained in the hardcopy form, including gender and age.

Finding: Navigation errors (e.g., difficulty in following skip instructions)
Revision: Skip patterns were simplified by the removal of some questions in the survey (this was also necessary to decrease survey length and minimize burden). Additional navigation arrows were inserted next to some answer choices to direct respondents' attention to skip instructions.

Finding: Comprehension problems with some survey terminology (e.g., offender, dwelling, evidence)
Revision: A definition for "offender" (the person who committed the crime) was inserted in several questions. "Dwelling" was replaced by "home." "Evidence" was avoided and a descriptive approach (e.g., "How could you tell?" instead of "What was the evidence?") was taken instead.

Finding: Difficulty reporting exact age of household members
Revision: A categorical variable with pre-coded response choices replaced the open-ended age variable. With the removal of the household roster, age was captured only for the mail survey respondent.

Finding: Difficulty reporting annual household income
Revision: To clarify that the question was seeking annual rather than weekly or monthly income, the first two response options (less than $4,999 and $5,000–$9,999) were combined into one category (less than $10,000). Also, the phrase "in the past 12 months" was underlined for emphasis.

Finding: Problems in keeping track of the specific crime incident being discussed
Revision: Questions related to crime "series" were modified to more closely mirror the wording and placement of those in the CAPI and CATI instruments. Questions about the number of each type of crime were added to follow each gate question in the Screener. Each individual page of the CIR was also labeled with "Incident 1," "Incident 1 (continued)," etc.

Finding: Overall length of survey instrument/number of questions
Revision: The length of the mail survey instrument was reduced by 5 pages as a result of the removal of the household roster from the Screener and a number of questions from the CIR, including detailed questions about the characteristics of the offender (e.g., in a gang, drinking or on drugs), injuries or hospitalizations resulting from the crime, steps taken to protect self or property during the crime, and presence of others during the crime.

Finding: Clarification/refinement of question text and/or response options
Revision: Response options were collapsed into fewer categories in some items. For example, "rape," "attempted rape," and "sexual assault" were combined into one response option, as were "purse" and "wallet." Three questions about the relationship of the offender to the respondent were collapsed into one item, as were three questions about contact with authority.

3. Phase 1 Cognitive Test Activities
The Phase 1 cognitive testing activities were iterative in nature, with refinements made to the
survey instrument based on respondent feedback and consultation with BJS, and retesting of
revised items occurring with new respondents. The goal of the testing was to evaluate:
1. Respondent reactions to, and effectiveness of, alternative wording and formatting of
some questions, including the household roster,¹ age, and crime series questions;
2. Respondent reactions to, and effectiveness of, simplified terminology and definitions for
problematic concepts like “dwelling” or “offender;”
3. Effectiveness of simplified skip patterns and instructions, including use of directional
arrows;
4. Respondent burden in completing a further streamlined and shortened instrument;
5. How respondents report on different kinds of crimes (e.g., theft, assault) that occurred at
the same time;
6. How respondents report on multiple incidents of the same kind of crime occurring on
different dates (e.g., 2 thefts); and
7. How respondents report on a series of crimes, that is, more than 5 crimes that are
similar in nature and cannot be recalled in enough detail to be distinguished from one
another (e.g., domestic abuse).
To achieve cognitive test goals 1–3, the structured interview guide for the cognitive test included
specific probes asking about the respondent’s understanding of select terms in the survey and
whether respondents noticed the instructions or the skip instructions. Additionally, the
interviewers collected observational data that indicated the frequency with which the respondent
skipped or missed a question that should have been answered, skipped to the wrong item on
the paper form, or hesitated or seemed confused by a particular question or instruction.
This information was used to identify specific questions or survey instructions that required
probing by the interviewer or further revision and testing. To assess respondent burden
(cognitive test goal 4), interviewers timed respondents on how long it took to complete each
section of the questionnaire, including the Screener and each CIR. The timing data were used to
consult with BJS on the necessity of further reductions to the mail survey length and complexity
to reduce burden for the Phase 2 field test.
To achieve cognitive test goals 5–7, the interviewers went over the crime reports and
specifically probed respondents on their understanding of how they should handle specific
scenarios, including: (1) several different types of crimes that occurred at the same time (e.g.,
robbery and assault); (2) multiple incidents of the same type of crime (e.g., 2 thefts) and how
they determined which one to discuss in each CIR; and (3) how to report on crimes that occur
frequently and cannot be distinguished from one another (e.g., a crime series, such as partner
violence). The information was used to determine if refinements or additions to survey
instructions were needed as the respondent moved from the Screener questions to the first CIR,
or from one CIR to the next.

¹ Even though the household roster was removed from the mail instrument and the household enumeration will occur during the CAPI/CATI interview with the household respondent, we tested the household roster as a separate instrument in case it needs to be implemented in subsequent waves of data collection when the focus is on self-administered modes.
An additional goal of the cognitive test was to assess how the improved household roster would
work in a self-administered environment and whether respondents would be willing to provide
their personal demographic information and that of other household members. Participants were
asked to complete the household roster as a separate form and probed on (1) their willingness
to provide such information in a mail questionnaire, and (2) any possible problems they
encountered when filling out the form. Interviewers kept a separate record of the time required
to complete the roster to assess burden.
As described above, the metrics used to evaluate the instrument consisted of direct
observations and the respondent's answers to the probing questions in the cognitive interview
guide. The observational data were captured at the question level and included:

• The time required to complete the Screener and each CIR (cognitive test goal 4)

• The items where the respondent hesitated or appeared to have trouble answering the question (cognitive test goals 1, 2, 3)

• The items where the respondent changed his/her answer (cognitive test goals 1, 2, 3)

• The items where the respondent struggled with navigation, such as following a skip instruction (cognitive test goal 3)

• The items left blank by the respondent that should have been answered (cognitive test goal 3; determined after interview completion by review of the completed paper survey; see the sketch after this list)
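
The commission/omission coding described above lends itself to a mechanical check once the completed form is in hand. The sketch below shows one hypothetical way to automate it in Python: the skip pattern is encoded as a mapping from (question, answer) pairs to the next expected item, and a completed form is walked against it to flag both error types. The question identifiers and routing rules are illustrative assumptions, not the actual SCV instrument logic.

    # Minimal sketch (hypothetical routing): flag skip-pattern errors on a
    # completed paper form. An item answered off the expected path is an
    # error of commission; an expected item left blank is an error of omission.
    QUESTION_ORDER = ["Q1", "Q2", "Q3", "Q4", "Q5"]  # illustrative order
    SKIP_RULES = {("Q1", "no"): "Q4"}  # e.g., a "no" to Q1 skips ahead to Q4

    def next_sequential(question):
        """Return the next item in questionnaire order, or None at the end."""
        i = QUESTION_ORDER.index(question) + 1
        return QUESTION_ORDER[i] if i < len(QUESTION_ORDER) else None

    def review_form(answers):
        """answers maps question IDs to recorded answers; blanks are absent."""
        expected, omissions = set(), []
        current = QUESTION_ORDER[0]
        while current is not None:
            expected.add(current)
            answer = answers.get(current)
            if answer is None:
                omissions.append(current)  # expected but left blank
                current = next_sequential(current)  # assume respondent read on
            else:
                # Follow an answer-specific skip if one exists, else read on.
                current = SKIP_RULES.get((current, answer)) or next_sequential(current)
        commissions = [q for q in answers if q not in expected]
        return {"commission": commissions, "omission": omissions}

    # A respondent answers "no" to Q1 but fills in Q2 and Q3 anyway, and
    # leaves Q5 blank:
    print(review_form({"Q1": "no", "Q2": "yes", "Q3": "yes", "Q4": "no"}))
    # -> {'commission': ['Q2', 'Q3'], 'omission': ['Q5']}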

During the cognitive interviews, the observational data were used by the interviewer to identify
which specific survey questions or instructions were problematic and should be probed in detail.
Information then obtained directly from the respondents, in response to the interviewer's
questions, was used to evaluate: (1) the effectiveness of specific revisions to the question or
response choice wording; (2) the decision-making process used in navigating from item to item,
including the visibility and understanding of instructions on the paper form; and (3) awareness
and understanding of the purpose of some design features, such as the header at the top of
each CIR page or the instruction boxes. Goals 5–7 were addressed through direct questioning of
the respondent about his/her experiences in the survey and cognitive thought processes.
Phase 1 testing involved two rounds of cognitive interviews, conducted between June 16, 2011
and August 4, 2011. As in the preliminary testing, interviews were conducted at RTI’s main
campus by survey methodologists experienced in cognitive interviewing methods. Participant
recruitment was carried out by RTI using advertisements placed on Craigslist for the Raleigh-Durham, NC, area and in RTI internal classifieds, and through postings at local public health
departments, domestic violence shelters, and other similar locations. Interested candidates
were screened to determine their eligibility for the cognitive interview and in an effort to recruit a
diverse mix of participants. RTI staff and their family members were not eligible to participate.

Additionally, persons who had not experienced a crime in the past 6 months, were under age
18, or did not speak English were excluded.
3.1 Round 1 Cognitive Testing
The Phase 1 cognitive test protocol mirrored that of the preliminary assessment, including
participant recruiting and screening procedures, informed consent procedures, and use of a
guided think-aloud process to gauge respondent reactions to specific elements of the revised
survey instrument. The first round of testing (Round 1) was conducted with 7 participants. The
interviews averaged 81 minutes and included a review of a number of questionnaire items,
including changes that were introduced after the preliminary cognitive assessment. The
Screener portion of the survey averaged 6 minutes, while the first CIR took 9 minutes to
complete. The average length for the subsequent CIRs was much less, about 3 minutes among
respondents who experienced more than one crime. However, it is important to note that this
average is based on only two respondents, who were confused about where to report their
crimes and tried to do so in CIR 1. All cognitive interview participants received $40 cash as
compensation for their time.
Overall, respondents were able to navigate through the revised instrument relatively well. Some
skip instructions (especially those associated with individual response options) were not as
obvious as others, and participants more often committed errors of commission (answering an
unnecessary item) than of omission (skipping an item in error). Terms that were found
problematic in the preliminary assessment and revised for Phase 1 testing were not found to be
problematic in this round of testing. Additionally, no one seemed to hesitate when answering the
survey questions, and those who took longer did so because they read every single word.
Respondents were found to have very good memories of their crime experiences: what
happened, when, who the offenders were, etc.
No response changes were observed during the self-administration; however, many
respondents wanted to change their answers to the “how many times” questions in the
Screener during the think-aloud part of the interview as the interviewers probed to determine the
number of unique incidents that had been experienced. In the CIR section of the instrument,
most respondents noticed the header added to each CIR, but were confused about which
crimes to describe in the first CIR and whether and how to fit all of their crime experiences into
the first CIR.
The major problems identified during Round 1 cognitive testing are summarized below:

• The Screener gate questions, asking if respondents had experienced a particular crime, were perceived as unrelated to the follow-up questions, asking how many times such crimes had occurred during the past 6 months.

• Overall, the idea of a crime "incident" was not well understood. Respondents double-counted crime experiences in the Screener. In fact, it seemed as if the Screener was perceived as a checklist, and participants did not notice the instructions to exclude crimes they had already reported in previous questions. Participants did not think in terms of "incidents," that is, considering all the types of crimes they might have experienced at one time during one incident. Instead, in a checklist fashion, they wanted to check every type of crime they had experienced, even if it took place at the same time as something they had already reported in a previous question.

• Participants felt mentally unprepared for the CIRs that were to come. Most who experienced more than one crime tried to report everything they had experienced in the first CIR, as they did not know another one was coming, or were confused about which crime they should be describing.

• Some Screener instructions that used the word "reported" were misinterpreted by respondents to mean "reported to the police."

• The question series from the CIR about a stolen purse or wallet was found confusing; participants had a hard time following the instruction boxes.

• Overall, respondents seemed to focus on only 2-3 key words in the Screener questions and then either assume they knew what the question was asking or "make up" the question in their heads. Not reading the entire question might explain why some crimes were double-counted and the reference period not considered.

• In Screener question 4a, the words "threats" and "thefts" were missed on several occasions because "attacks" was placed first.

• Most respondents were able to follow the skip logic, but not consistently. In general, skips placed in instruction boxes were evident to respondents; however, some skips routing off of individual survey answers were not apparent or followed.

We also tested the household roster with a subset of the cognitive interview participants. As
modifications to the SCV study design resulted in the household roster being offered only at
Wave 1 via CATI and CAPI, the roster items were removed from the draft mail survey
instrument. However, in light of issues identified during the preliminary assessment of the items,
we conducted a small test of a stand-alone version of the roster with 3 Round 1 interview
participants. The roster assessment was limited to a smaller number of participants as a result
of time constraints during the interview or participants' living situations.² Two of the three roster
respondents did not live with another adult and did not experience any problems in answering
the questions or providing information about themselves. Moreover, when probed hypothetically
as to whether they would be willing and able to provide the roster information about others living
in their household, both provided confirmatory responses. The roster respondent who did live
with another adult also did not have any concerns about providing information about that
person. All three respondents took about ½ minute to complete the roster.
In response to the Round 1 test findings, a second round of cognitive testing (Round 2) was
planned. Exhibit 2 summarizes the mail survey revisions resulting from the Round 1 experience.
 

                                                            
² One respondent reported he was living in a homeless shelter, so the household roster was not administered.

Exhibit 2. Summary of Mail Survey Revisions Resulting from Round 1 Cognitive Test Findings

Finding: Screener gate questions not perceived as related to follow-up count questions
Revision: Wording of the follow-up count questions was revised to exactly match the terminology used in the gate questions. Bracketed directional arrows were added in an effort to visibly link the gate and follow-up questions, with the follow-up questions slightly indented under their gate items.

Finding: Concept of crime "incident" not understood, leading to double counting of events within the Screener
Revision: The 6-month reference period was reintroduced in the follow-up questions ("How many times?"), as some participants reported on lifetime crimes. Capitalized italics were used for instructions that reminded respondents to exclude any crimes reported in previous questions.

Finding: Respondents not mentally prepared for CIRs, showing confusion about what to include in the first CIR
Revision: To help prepare respondents for each CIR, definitions of "crime incident" and "household," as well as a reminder about the 6-month reference period, were added to the first page of the questionnaire. Instructions were also added at the beginning of each CIR on how to think about "crime incidents." The response box where respondents describe what happened during the crime incident was moved from the end of the CIR to the very beginning in an effort to anchor responses.

Finding: Some Screener instructions that used the word "reported" were misinterpreted by respondents to mean "reported to the police"
Revision: Instructions were removed or reworded to eliminate use of the word "reported."

Finding: The CIR question series about a stolen purse or wallet was found confusing; respondents had a hard time following the instruction boxes
Revision: The instruction box between the CIR questions related to a stolen purse, cash, or wallet was removed to simplify the question flow, and the words "if any" were added to the question asking about the value of the stolen cash, purse, or wallet.

Finding: Respondents tended to focus on only 2-3 key words in the Screener questions and did not read the entire item, missing key information (e.g., the reference period) and potentially leading to double-counting of events
Revision: The 6-month reference period was reintroduced in the follow-up questions, as some participants reported on lifetime crimes. Capitalized italics were used for instructions that reminded respondents to exclude any crimes reported in previous questions.

Finding: In Screener question 4a, the words "threats" and "thefts" were missed on several occasions because "attacks" was placed first
Revision: The words "attack," "threat," and "theft" were underlined for better visibility.

Finding: Respondents could not consistently follow skip logic for all items
Revision: Skip instructions were reformatted for some items for greater consistency and increased visibility. Individual skip instructions were placed in white background ovals to make them consistent with overall question skip instructions.

3.2 Round 2 Cognitive Testing
The Round 2 cognitive test protocol mirrored that of Round 1, including participant recruitment
procedures, informed consent procedures, and use of a guided think-aloud process to
evaluate respondent reactions to the revised mail survey instrument. Eight participants took
part in Round 2 of the cognitive test. The interviews averaged 70 minutes and focused primarily
on changes necessitated after Round 1. The Screener portion of the survey averaged 8
minutes, while the first CIR took 10 minutes to complete (the average length for the subsequent
CIRs was 9 minutes for CIR 2 and 7 minutes for CIR 3 among respondents who experienced
more than one crime).
The major findings from the Round 2 cognitive test are summarized below:

• Despite the effort to graphically (and visually) convey the relationship between each Screener gate question and the associated follow-up (count) question, all but one participant failed to perceive the questions as related.

• Double-counting of crimes in the Screener continued to be an issue. Even though some respondents acknowledged reading the instructions not to count anything they had already included in previous questions, they still felt they wanted to put every crime on paper, even if multiple types of crimes happened in one incident. As in Round 1, respondents tended to treat the Screener items as a checklist, checking things off as they went along regardless of whether they had happened at the same time as something previously reported.

• Respondents found some Screener questions to be redundant (e.g., multiple items about theft) and suggested that some could be combined.

• Even though we administered the household questions to everyone, most respondents were thinking only of their own personal experiences when answering about crimes.

• As in Round 1, participants did not seem to read the Screener questions entirely; they commented that the questions were too long and complicated.

• In contrast, respondents found the CIR easy to fill out. Moving the crime description response box to the beginning of the CIR proved to be a good strategy to anchor respondents and help them keep track of the crime and time period about which they were answering. Improvements to the format of skip instructions were also effective. The change to the question series related to stolen cash, purse, or wallet seemed to work well; no one expressed confusion in answering these questions or navigating through the routing instructions.

• The overall length of the questionnaire was found intimidating. Furthermore, participants perceived the Screener as much harder than the CIR.

We also tested the household roster with 4 cognitive participants, 2 of whom lived with at least
one adult household member. None of the participants had difficulty completing the roster or
expressed concern about providing such information in a mail survey. Consistent with Round 1,
the household roster took about ½ minute to complete.
4. Overall Findings and Conclusions from the Mail Survey Cognitive Testing Activities
Careful consideration was given to the results of the preliminary cognitive assessment with 9
respondents and to the 15 cognitive interviews conducted across Rounds 1 and 2. A number of
improvement strategies were found to be effective during testing. For example, use of a
consistent style of directional arrows and white text ovals for skip instructions at both the
question- and response option-level decreased navigation errors during self-administration.
Additionally, changes in the wording of some items removed confusion about key terminology,
such as “offender” and “dwelling unit.” Collapsing of some response options and survey items
simplified the response and navigation task. Finally, allowing respondents to describe the crime
incident they were responding about at the beginning of each CIR rather than at the end

provided an effective means of anchoring the respondents and keeping them focused on the
incident being discussed.
In spite of these improvements, several critical challenges could not be easily overcome, even
after several iterations of instrument refinement and testing. Specifically, the Screener questions
were perceived to be too long, complex, and repetitive in all rounds of testing. Respondents did
not read the entire question and often missed key pieces of information, including the survey
reference period, instructions to exclude crimes previously mentioned, or nuances in the
question itself related to the location of the crime or the person responsible (e.g., crimes
committed by "someone you know"). In spite of additional formatting, rewording, and use of
indentation, very few respondents understood that the follow-up count questions in the Screener
were associated with the gate question. Instead, they viewed the count questions as a new or
different question entirely. Finally, in completing the Screener, respondents tended to view all of
the gate questions as a checklist rather than as a series of cues to facilitate recall of all crime
incidents experienced during the reference period. As such, even if the respondent experienced
only one crime incident involving multiple types of crime (e.g., a break-in that included the theft
of several items and some form of assault), they answered "yes" to each individual gate
question about "break-in," "theft," and "assault," leading to over-counting of incidents within the
Screener. This problem then led to confusion as the respondent moved from the Screener to
the first CIR. Respondents did not know which crime or crimes to describe in the first CIR and
often tried to cover every incident there. Only with the assistance of the interviewer did
respondents with a multi-crime incident or with multiple crime incidents work through this issue
and successfully proceed through the CIRs.
Based on these findings, we believe considerable reworking of the survey instrument, including
rewording and restructuring of items in the Screener and possibly the CIR, is needed to arrive at
a mail survey that can be effectively completed in a paper-and-pencil, self-administration format
for the NCVS. The current instrument requires interviewer (or, for Web, computer) intervention
and assistance to ensure respondents understand how to think about, count, and report on the
crime incidents they experienced during the reference period. Moreover, consideration should
be given to further reducing the length and complexity of the survey instrument as respondents
found it to be very long and intimidating. Because these issues cannot be resolved without
further, more extensive questionnaire redesign and testing, we recommend that BJS eliminate
the mail survey option from the SCV experimental design.

References

Ackerman, A. C., and J. Blair. 2006. "Efficient Respondent Selection for Cognitive Interviewing." Paper presented at the American Association for Public Opinion Research conference, Montreal, Canada.

Appendix A: SCV Mail Survey Instrument

Appendix B: SCV Household Roster

Appendix C: SCV Mail Survey Cognitive Interview Guide
