Research to support the National Crime Victimization Survey (NCVS)

OMB: 1121-0325

Response to OMB Questions on Request for Clearance under OMB No. 1121-0325

Bureau of Justice Statistics

Contact: Allen J. Beck and Shannan Catalano

12/18/2012



I just want to start by pointing out that it isn’t entirely clear whether this package belongs under the NCVS generic clearance. If BJS plans to submit subsequent phases under this OMB clearance number, it should be clearer about the connection to the NCVS redesign.

BJS is seeking approval under the generic clearance (OMB Number 1121-0325) to develop and test competing methodologies for collecting self-report data on rape and sexual assault, in an effort to determine the optimal design for measuring these crimes. BJS deems these activities to be related to the National Crime Victimization Survey Redesign Research (NCVS-RR) program because the current design of the NCVS prohibits the type of methodological research required to address questions of competing methodologies within the ongoing core NCVS. The findings from this research will be used to determine the optimal design for measuring rape and sexual assault and whether this design can be accommodated within the current NCVS program or whether an alternative collection is necessary. The objective of this research is to improve data collection methodology and measurement within the NCVS program.



Official estimates of rape and sexual assault differ for a variety of reasons, including:

  • Context of the survey;

  • Population covered;

  • Definition of target events;

  • Reference periods;

  • Focus and structure of screeners;

  • Identification and classification of events.



More generally, these differences reflect a public health versus a criminal justice approach to measuring rape and sexual assault. For example, as an omnibus crime survey, the NCVS is required by its mandate to collect victimization data on a wide range of crime types, within the context of a crime survey, from all individuals age 12 or older residing in a household.



This approach contrasts markedly with the public health approach (e.g., the NVAWS and NISVS), which generally incorporates explicit and behaviorally specific questions to elicit target events of sexual violence within the context of a health and safety survey. The current design of the NCVS limits such use of behaviorally specific cueing for rape and sexual assault for several reasons:



First, all eligible individuals residing in the household are administered the same NCVS interview. There is no established protocol barring the administration of interviews while other household members are present. For crimes like rape and sexual assault this interviewing procedure raises concerns for respondent safety and confidentiality. These concerns are particularly problematic due to the high likelihood of offender presence within the same household as the victim.



Second, the omnibus nature of the survey requires the NCVS to query households on a broad range of crime types ranging from petty theft to rape and sexual assault. This “wide net” capture of crimes differs in nature and content from the intensive and focused cueing strategy BJS will be testing under the rape and sexual assault research.



As indicated in the June 2012 request for the NCVS generic clearance, the work on rape and sexual assault was justified and included in the anticipated burden statement. This work will be conducted by Westat under a cooperative agreement (Award 2011-NV-CX-K074, Methodological Research to Support the National Crime Victimization Survey: Self-Report Data on Rape and Sexual Assault). The work will proceed in three stages:

  1. Cognitive and exploratory interviews;

  2. Feasibility test;

  3. Pilot test.



The clearance request process for this research follows the same approach BJS previously used for the subnational companion study. Clearance to conduct each stage of the project will be requested. Clearance for the developmental work, including the cognitive interviewing and the feasibility test, will be requested under the NCVS generic clearance. After OMB review and approval of the feasibility test, and after that test is complete, BJS will submit a separate request for full OMB review of the pilot test portion, the Self-Report Data on Rape and Sexual Assault – Pilot Test.


1. What specifically does BJS hope to learn from repeating the same interview to the same person two weeks apart?  Is this supposed to be a question reliability study?  Is it supposed to be a way to improve wording to increase reliability?  The probes don’t seem especially suited to the second, and the rigor doesn’t seem especially suited to the first.


The main study will conduct re-interviews with a subsample of respondents. The purpose of these re-interviews is to measure the reliability of the questionnaires. During the pilot test, the plan is to repeat the interview in the same mode two weeks after the initial interview. Analyses of the re-interview data will then estimate the reliability for each mode and compare across modes.
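
As a rough illustration of what such a reliability analysis could look like (the study's actual analysis plan is not specified here), the following Python sketch computes within-mode test-retest agreement and Cohen's kappa on hypothetical data; the mode labels, item, and responses are invented for the example:

from collections import Counter

def cohens_kappa(first, second):
    """Cohen's kappa for two parallel lists of categorical responses."""
    n = len(first)
    observed = sum(a == b for a, b in zip(first, second)) / n
    # Expected agreement if the two interviews were independent.
    c1, c2 = Counter(first), Counter(second)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(first) | set(second))
    return (observed - expected) / (1 - expected)

# Toy re-interview data: one screener item asked in interview 1 and again
# two weeks later in interview 2, kept in the same mode (all hypothetical).
by_mode = {
    "ACASI": (["yes", "no", "no", "yes", "no"], ["yes", "no", "yes", "yes", "no"]),
    "CATI":  (["no", "no", "yes", "no", "no"], ["no", "no", "yes", "yes", "no"]),
}
for mode, (t1, t2) in by_mode.items():
    agreement = sum(a == b for a, b in zip(t1, t2)) / len(t1)
    print(f"{mode}: agreement={agreement:.2f}, kappa={cohens_kappa(t1, t2):.2f}")

A markedly higher kappa in one mode than the other would be the kind of cross-mode difference the pilot test re-interview analysis is designed to detect.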


The objective for the re-interviews during the pilot test differs from the re-interview objectives for the cognitive interviewing:


The purpose of the re-interviews during the cognitive testing portion of development is to test the procedures proposed for the main study. Consequently, we do not intend to probe extensively in the cognitive test re-interviews. The goal is primarily to see how respondents react to and move through the re-interview procedure. However, we have built into the protocol a check between what was reported in the first and second interviews. If there is a difference, the reasons for the discrepancy will be probed, and this information will be used when revising the survey instrument.


Since we submitted the OMB package, we have received feedback from the DC Rape Crisis Center that a re-interview with their clients would not be advisable. They also advised us to keep the cognitive interview to 60 minutes. Taking this guidance into consideration, we have modified our plans and will not conduct re-interviews with respondents identified in advance as victims by the DC Rape Crisis Center. The revised request now includes re-interviews to test procedures with up to 70 respondents. The burden estimate has been modified to reflect these changes.



2. Two strategies are proposed for selecting women for inclusion in the cognitive interview sample: one is to recruit 10 women through the DC Rape Crisis Center, and the other is a more general recruitment effort (via flyers and advertisements) plus a short eligibility screening questionnaire to identify 70 women ages 18-40. Though the cover memo indicates that women who report having experienced a rape or sexual assault incident “will be given priority” in moving toward actual cognitive interviews, it would seem that there is a potential for the sample to be a mix of known rape and sexual assault victims and likely, but not confirmed, victims. What is ambiguous is whether the two contact-approach groups will be treated as homogeneous for purposes of assigning “treatments” (survey approach-and-question combinations), or whether there will be an effort to split the known victims evenly between the phone-based and personal interview-based questionnaire groups.


3. Likewise, the description of recruitment indicates that 60 interviews will be conducted in English and 20 in Spanish, but are the two language groups considered “identical” for distribution across the survey techniques?


4. Most fundamentally, the memo does not seem to address a basic question, perhaps because it seemed too “obvious,” but which would still be good to make clear: Does each cognitive interview respondent experience one of the survey modes (ACASI or telephone), and is there an even split between the modes (40 “see” ACASI, 40 “see” telephone)?


6. Beyond the distribution of known-vs-likely-victims and English-vs-Spanish described above, there is also a series of “halvings” described in the cover memo: half probed on the event history calendar and screener and the other half on the incident form; half experiencing questions roughly ordered from least to most serious and the others not; and half getting a series of variant wordings (outlined on pp. 5-6 of the memo) and half not. After all the divisions and halvings, is BJS confident that the cognitive interviews will provide sufficient information on the myriad “treatments” to provide good direction?


These questions all relate to the same issue: how will respondents be assigned to the different interviewing treatments? Before addressing this directly, we propose a change to the number of cognitive interviewing treatments. Based on feedback from individuals on the National Academy of Sciences panel[1] that is advising BJS on the design of the rape and sexual assault questions, we have eliminated the treatment that varied the order of the questions by the seriousness of the items. All versions of the screener will order the questions from most to least serious (as they currently appear in Appendix E1/E3). The advantage of moving from the most to the least serious event is that it may reduce duplication of responses across screener items, because events that involve rape may also involve many of the behaviors described in the less serious items (e.g., unwanted touching and alcohol-related assault). By asking about the most serious events first, the questionnaire lets the respondent report an event by what is likely to be its most salient feature before being asked about other characteristics of the event. When asked about subsequent, less serious, aspects of the event, respondents may then be less inclined to report the event as an additional occurrence.


The allocation of the subjects is provided in Figure 1. All allocations will be distributed equally across the treatments. For example, as shown in Figure 1, there will be an even split: 40 cognitive interview respondents will have the questionnaire administered by an interviewer, and 40 will complete it as self-administered. The two recruitment groups (DC Rape Crisis Center and general) will be allocated equally across the interviewer-based and self-administered questionnaire groups. As shown in Figure 1, there are four different combinations of mode and questionnaire version. The 10 women recruited from the DC Rape Crisis Center will be allocated equally across these four groups, i.e., 2 to 3 of these women in each group.


Similarly, the English and Spanish cognitive interview respondents will be allocated evenly across the four screener-version-by-mode groups: five Spanish-speaking respondents and 15 English-speaking respondents per group.


The final row of Figure 1 shows that, within each of the four groups, the interviewer will probe half of the respondents during the screener and the other half during the detailed incident form.


The strategy for the groupings is intended to provide qualitative information on any differences between the alternative questionnaire versions. As shown above, we will have approximately 10 probed interviews for each mode-by-version combination, and 20 interviews per combination once those that are not probed are included. This should provide adequate data to detect qualitative differences between the approaches.[2]
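
To make the allocation concrete, the Python sketch below reproduces the kind of balanced assignment Figure 1 describes. It is illustrative only: the cell labels, stratum keys, and round-robin logic are our shorthand for the design, not the study's actual assignment procedure.

import itertools
import random

MODES = ["interviewer-administered", "self-administered"]
VERSIONS = ["screener version 1", "screener version 2"]
CELLS = list(itertools.product(MODES, VERSIONS))  # the 4 mode x version groups

def assign(respondents):
    """Rotate each recruitment-by-language stratum across the four cells so
    every stratum is spread evenly, then split each cell roughly in half
    between the two probe conditions (probe during the screener vs. during
    the detailed incident form)."""
    strata = {}
    for r in respondents:
        strata.setdefault((r["source"], r["language"]), []).append(r)
    plan = []
    for members in strata.values():
        random.shuffle(members)  # randomize order within the stratum
        for i, r in enumerate(members):
            mode, version = CELLS[i % len(CELLS)]
            probe = ("screener", "incident form")[(i // len(CELLS)) % 2]
            plan.append({**r, "mode": mode, "version": version, "probe": probe})
    return plan

# Toy pool matching the counts in the text: 10 DC Rape Crisis Center
# referrals plus 70 general recruits, 20 of whom are Spanish speakers.
pool = ([{"id": i, "source": "DCRCC", "language": "English"} for i in range(10)] +
        [{"id": 10 + i, "source": "general",
          "language": "Spanish" if i < 20 else "English"} for i in range(70)])
plan = assign(pool)

Run on this toy pool, the rotation yields about 20 respondents per mode-by-version cell, with the DC Rape Crisis Center referrals split 2 to 3 per cell and five Spanish speakers per cell, matching the counts described above.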


5. Likewise, it is mentioned on p. 3 that the main study proposes to conduct re-interviews using the same mode as the initial administration, and p. 9 indicates that the cognitive interview re-interview/follow-up “will use the same mode and instrument.” Does this confirm that each respondent in the cognitive interviews sees only one survey mode across both the first and second interviews?


Yes. The first and second cognitive interviews will be conducted using the same mode.


7. Cover memo p. 4: The service where recruitment advertisements will be posted does business as craigslist, not Craig’s List.


We will make this correction in the memo.


8. Cover memo, p. 4, notes that “ACASI questions will be simulated using hardcopy versions of the screens for the respondent to review” and that “the interviewer will read the questions to the respondent, simulating the ACASI voice.” To be clear, is the vision for the main study for the “audio” of ACASI to be supplemental (an audio overlay on top of question text that will also appear on-screen, and that some respondents could presumably “skip” by answering quickly) or the primary mode of administration (the question text will only be administered on the recording)? Does this initial cognitive testing “lift the blind”, so to speak, by allowing the respondent to experience both text and audio?


The pilot study will be administered as an ACASI, with the respondent reading the questions on the screen while audio, delivered through headphones, provides a synchronized narration of the questions. Our goal for the cognitive interviews is to simulate this ACASI administration by presenting both the question text and the audio. For the cognitive interviews, the interviewer will read the questions aloud as a proxy for the audio component of ACASI in the lab. For purposes of the cognitive interviews, we will “lift the blind” so that respondents can experience both text and audio and respond as they would during an actual ACASI interview. Many of the functional aspects of ACASI will be examined during the feasibility testing scheduled for the summer of 2013.


9. Cover memo, p. 8: There is less specification on exactly how and in what numbers the various “vignettes” will be presented. Does the statement that the pretesting/cognitive testing will involve “3 to 4 of these scenarios” refer specifically to 3-4 combinations of relationship/intent/consent conditions that will be presented to every respondent, or will each individual respondent have a potentially different mix of conditions? The same applies to the 3x3 coercion/relationship “conditions”, of which “one or two” will be administered?


Each version-by-mode group of cognitive interview respondents will be presented 3 to 4 combinations of the alcohol conditions and one psychological coercion scenario. There are a total of 27 different vignettes for the alcohol-related scenario. A particular respondent will be assigned 3 to 4 of these 27 vignettes, with the goal of covering all 27 vignettes across all respondents. If the 27 vignettes are divided into 9 groups of 3 apiece, each vignette will be presented approximately four times for the telephone group (40/9 ≈ 4) and four times for the self-administered group. Respondents will additionally be presented one of the 9 psychological coercion scenarios. The purpose of the cognitive interviews is to assess respondents’ understanding of the task and the descriptions. This feedback will be used to modify the instructions for the task and the wording of the vignettes.
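
To make the arithmetic concrete, the following Python sketch rotates the nine groups of three alcohol vignettes, and the nine psychological coercion scenarios, across the 40 respondents in one mode. The vignette labels and grouping are hypothetical placeholders, not the study's actual materials.

from collections import Counter

alcohol_vignettes = [f"alcohol-{i:02d}" for i in range(27)]
groups = [alcohol_vignettes[i:i + 3] for i in range(0, 27, 3)]  # 9 groups of 3
coercion_scenarios = [f"coercion-{i}" for i in range(9)]

def vignette_plan(n_respondents=40):
    """Give each respondent one group of three alcohol vignettes plus one
    coercion scenario, cycling through both sets to keep coverage even."""
    return [{"respondent": r,
             "alcohol": groups[r % 9],
             "coercion": coercion_scenarios[r % 9]}
            for r in range(n_respondents)]

# With 40 respondents per mode, every vignette appears 4 or 5 times (40/9).
counts = Counter(v for p in vignette_plan() for v in p["alcohol"])
assert set(counts.values()) <= {4, 5}

Under this rotation each group of three vignettes is seen by four or five respondents per mode, so every one of the 27 vignettes is presented 4 to 5 times, consistent with the 40/9 ≈ 4 figure above.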


10. P. 10 indicates that BJS/Westat anticipates about 10 percent attrition between the two interviews. P. 4 of the memo suggests that participants will receive $40 for the first interview and $40 for the second. The contact materials for the DC Rape Crisis Center (the “known” victims) clearly indicate that “respondents will receive $40 for each interview as a token of appreciation” and that “you will be paid $40 for each interview.” But the recruitment screener for the more general approach strategy says that “we are offering $80 for participating in two interviews,” which could be “misread” in two ways (only receive $80 at the end of interview 2, or $80 per interview). Why is the wording different for the group that’s the largest portion of the recruit pool?


As noted in the response to question 1, we have modified the plan for the re-interviews by excluding victims referred by the DC Rape Crisis Center. Respondents from the DC Rape Crisis Center will receive $40 for their single interview. We have modified the wording on the flyer to reflect this change. For the general population, the wording on the recruitment screener has been updated to read: “To thank participants for their time and feedback, we are offering respondents $40 for the first interview and $40 for the second interview.”


11. Presumably, the “flyer” in Appendix A is purely a rough draft. But it seems odd that the text starts off by saying that the purpose is to “improve the way we measure rape and sexual assault” (emphasis added), even though the only “actor” then identified on the flyer, by way of a logo, is the DC Rape Crisis Center. Clearly, the “we” wording is meant to be conversational, but right now it is confusing. Also, in the block paragraph of text, “crisis” in the name of the Center should be initial-capitalized.


We have changed the text and corrected the error in the name of the DC Rape Crisis Center:


“The purpose of this study is to improve the way information about rape and sexual assault is collected. Westat, a social science research organization in Rockville, Maryland, is working with the U.S. Bureau of Justice Statistics (BJS) to develop and test these survey methods. Westat wishes to recruit women survivors of rape or sexual assault to help test the questions that are being considered for the survey. Beginning in February 2013, Westat would like to speak with women who are age 18 and older, English speaking, and previous victims of rape or sexual assault. These interviews will be completed in person, at the DC Rape Crisis Center, and will last about 60 minutes. Respondents will receive $40 for the interview as a token of appreciation for their time.”


12. Consent form, Appendix C: The signature block is labeled “Researcher’s Signature”; presumably you mean “Respondent Signature” or just “Signature”?


This is intended to read as is: “Researcher’s Signature.” We obtained a waiver of documentation of informed consent from the Westat IRB. This was done to reduce the risk of an accidental release of a respondent’s name. The “Researcher’s Signature” line gives the study an additional tracking measure to ensure that the consent form was administered.


13. There are a lot of typographical errors in the interview protocols, some of which are probably artifacts of copying and pasting. That may or may not be wildly relevant, except that it is unclear whether this is the same text that the respondent will see and react to (and so whether the survey could appear “sloppy” to them), or whether they will be seeing different screenshots. On pp. 18-19 (App E1) alone, for instance, the question numbers vary as to whether they are presented in boldface and with or without a period; the yes/no responses on IQ8 are lowercase, unlike just about everything else, while the yes/no responses in IQ10c are inexplicably all caps; question IQ22a presents a long run-on sentence and is missing a question mark; and the last few questions put a period after the label for response 4 but not the others. Further example: F11 in the detailed incident form seems to explicitly denote ex-boyfriend or girlfriend as a separate response category, but F23 (apparently through a typo) does not, making it look like another clause of response option 1.


We have made revisions (e.g., correcting typographical errors and improving consistency in the formatting) to the interview protocols.


14. BJS should use standard, previously approved, confidentiality language throughout. Please fix.


We will add the following text to the OMB clearance memo:

Assurance of Confidentiality

BJS’ pledge of confidentiality is based on its governing statutes, Title 42 USC, Sections 3735 and 3789g, which establish the allowable uses of data collected by BJS. Under these sections, data collected by BJS shall be used only for statistical or research purposes and shall be gathered in a manner that precludes their use for law enforcement or any purpose relating to a particular individual other than statistical or research purposes (Section 3735). BJS staff, other federal employees, and Westat staff (the data collection agent) shall not use or reveal any research or statistical information identifiable to any specific private person for any purpose other than the research and statistical purposes for which it was obtained. Pursuant to 42 U.S.C. Sec. 3789g, BJS will not publish any data identifiable to any specific private person (including respondents and decedents).

We will also add the following to the Consent Form (Appendix C):

  • All information obtained during this study will be treated as confidential and will be used only to develop and improve the questionnaire. The data collected are protected by federal statute (Title 42 USC, Sections 3735 and 3789g) and are protected from any request by law enforcement or any other agency, organization, or individual.

  • Your answers will be combined with responses from other study participants when writing reports and conducting analyses. Pursuant to 42 U.S.C. Sec. 3789g, neither BJS nor Westat will publish any data identifiable to any specific private person.


This language has been adapted to the current study and simplified to facilitate comprehension by the respondents.


15. Why isn’t the required PRA information provided to the respondent somewhere like on the consent form?


We propose putting the following statement at the bottom of the consent form:


NOTIFICATION TO RESPONDENT OF ESTIMATED BURDEN: Under the Paperwork Reduction Act, we cannot ask you to respond to a collection of information unless it displays a valid OMB control number. The burden for this collection of information is estimated to average 90 minutes per response, including the time for reviewing instructions. Send comments regarding this burden estimate, or any other aspect of this collection of information, including suggestions for reducing this burden, to the Director, Bureau of Justice Statistics, 810 Seventh Street, N.W., Washington, DC 20531.


The OMB clearance number will be placed on the consent form and on the first page of each instrument.


This statement will be modified for the DC Rape Crisis Center subjects to reflect a 60-minute burden estimate.


[1] This feedback was received on December 11, 2012.

[2] Willis, G.B. (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Sage, pp. 5-7.

