National Intimate Partner and Sexual Violence Survey (NISVS) Redesign
Task 5.1 Cognitive Testing Plan
Contract Number GS00F009DA
February 6, 2019
Prepared for:
Centers for Disease Control and Prevention
Prepared by:
Westat
An Employee-Owned Research Corporation®
1600 Research Boulevard
Rockville, Maryland 20850-3129
Westat will conduct a total of 120 cognitive interviews in April-June 2019 to support NISVS redesign efforts. Interviews will be conducted in two rounds of 60 interviews each: 20 per round testing the Web instrument (40 total), 20 per round testing the Paper instrument (40 total), and 20 per round testing the CATI instrument (40 total). Note that the number of CATI interviews may change (i.e., the number of Web/Paper interviews may increase accordingly) if CDC determines that saturation in comments has been reached before the 40 CATI interviews are complete. We anticipate each interview will take approximately one hour, and we will provide respondents with $40 to help defray the costs of participating, such as transportation or child care.
| Round 1 (n=60)                          | Round 2 (n=60)                          |
| Web (n=20) | Paper (n=20) | CATI (n=20) | Web (n=20) | Paper (n=20) | CATI (n=20) |
In each round, half of the CATI interviews (10 per round) will be conducted by telephone to gather an estimate of the instrument's administration time. This will provide an opportunity to go through the entire instrument with respondents uninterrupted and gather their feedback on the full experience (including the effects of placing the mental health questions at the beginning of the instrument). These interviews will be conducted by one to two trained senior Westat female telephone interviewers, will be recorded so that we can assess the flow and sensitivity of the new questions, and will include debriefing questions at the end of the interview to gather feedback on the experience.
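As a purely illustrative sketch (not part of the testing protocol), the timing estimate from these recorded run-throughs could be summarized as shown below; the durations and script are hypothetical placeholders:

```python
# Hypothetical sketch: summarizing administration time from the recorded
# CATI run-throughs. Durations (in minutes) are illustrative placeholders.
from statistics import mean, median

round1_durations = [52, 58, 61, 49, 55, 63, 57, 50, 60, 54]  # 10 timed interviews

print(f"Mean administration time:   {mean(round1_durations):.1f} minutes")
print(f"Median administration time: {median(round1_durations):.1f} minutes")
print(f"Range: {min(round1_durations)}-{max(round1_durations)} minutes")
```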
The other 50 interviews per round (20 Web, 20 Paper, 10 CATI) will be conducted in person so that cognitive interviewers can observe respondents as they work through the instrument. Interviewers will not take detailed notes during the interview so that they can focus on respondent reactions, non-verbal cues, and administering scripted and spontaneous probes. Interviews will be audio and video recorded. (Interview summaries are written after the interview using the interviewer's notes and the recordings, and may be prepared by the interviewer or by a trained note-taker. If someone other than the interviewer prepares the summary, the interviewer is required to review it prior to finalization.) A team of 6 experienced female cognitive interviewers and 6 trained note-takers will carry out the two rounds of research.
Interviewers will use a mix of concurrent and retrospective probing to gather feedback on both item comprehension and the usability of the instruments.
To test the Web instrument, Round 1 interviews will be conducted on a Westat-supplied laptop. The interviews will focus primarily on gathering cognitive feedback on respondents' experiences, but interviewers will also observe and probe on the usability of the web-based instrument. As respondents work through the survey, in addition to administering cognitive probes, interviewers will ask concurrent probes to follow up on any usability issues or problems encountered, including:
Navigation through the survey
Generation and understanding of error messages
Reasons the user required help
Changing answers
Finding necessary information
Debriefing topics at the end of the interview will include:
Overall reactions to the look, feel, and usability of the web survey
Additional undiscussed issues or problems
In Round 2, we will ask half of the respondents assigned to the Web mode (n=10 of 20) to bring their own device to the interview. In the recruitment screener, we will ask what type of device they typically use at home and will recruit for a mix of phone and tablet users with different operating systems. This will allow us to gather usability data on how the instrument performs on screens of different sizes and on different operating systems (e.g., Apple iOS, Android).
If we have difficulty recruiting some hard-to-reach groups (e.g., male victims), we may need to expand our recruiting areas to be more national in scope, in which case we will be prepared to conduct some of the Web and CATI testing using remote technology (such as WebEx, Google Hangouts, or Skype). Westat has successfully used these technologies for other federal cognitive testing studies (on behalf of the FDA and the Bureau of Justice Statistics). We typically give respondents a choice of which remote technology they would like to use and train our interviewers on all three systems. We will schedule a test run with the respondent prior to the interview to ensure they have proper access to the technology and will be able to launch it successfully at the time of the scheduled interview. In these situations, the interviewer will be able to see the respondent's expressions (and, for Web testing, the survey screen) as he/she responds to the questionnaire. (Note that this approach will not be feasible for Paper questionnaire testing.)
The distribution of interviews by mode for the second round of testing is tentative. As noted earlier, we will complete a total of 60 interviews, but may shift the number of interviews across modes depending on the findings from Round 1 and the number of changes for a given mode between rounds.
For all interviews, we will implement safeguards to protect respondents, including informed consent procedures, having a distress protocol in place, and providing all respondents with a list of resources at the end of the interview in case they are upset by the issues raised during the interview. Our cognitive interviewers have all been trained in conducting research on sensitive topics, and a large portion of the training will focus on reviewing these practices. For the telephone-based CATI interviews, we will employ further measures to protect respondents by informing them at the time of recruitment that they should be in a private location for the interview, and confirming that they are in a private space before the interview begins. If at any point we suspect someone might be listening in to the interview (other than the research team), we will instruct the respondent to end the call and contact us later for a convenient time to reschedule it.
We will conduct the Paper, Web, and in-person CATI interviews (n=50 per round) in three geographic locations: the Washington, DC area; Raleigh, NC; and San Francisco, CA (with roughly 16 to 17 completed interviews in each location per round). These locations provide geographic diversity while allowing us to capitalize on local interviewers. The full run-through CATI interviews will be conducted by telephone (n=10 per round), with no geographic limitations (participants will be recruited through Craigslist ads or a national panel). All interviews in each round will be completed within a three-week period.
Recruitment procedures will attempt to obtain a balance of respondents within mode across the following characteristics: victim status (victim/non-victim), gender (male/female), and age (over/under 40). (Age 40 has been selected as the recruiting cut-off because of the higher likelihood of identifying victims.) Each respondent will be classified on each of these three dichotomous characteristics. We have not defined specific combinations of the characteristics in order to provide some flexibility during recruiting. Also, because some of these characteristics may be harder to identify and recruit, we may not be able to achieve balance on all characteristics; we will work with CDC to set a minimum floor for characteristics of interest (e.g., male victims). Victims will be defined as those who have ever experienced physical violence by a spouse or romantic partner, stalking and harassing behavior, or unwanted sexual experiences. The draft recruitment screener captures data on all characteristics of interest as well as other basic demographic items such as Hispanic origin, race, and education level.
We will use two primary methods to recruit participants. First, for the interviews conducted in Raleigh and San Francisco, we will use established focus group facilities to manage the recruitment. They will primarily use their in-house databases, which typically contain tens of thousands of local area residents and are continually being refreshed. To avoid “professional” respondents, we will screen out anyone who has participated in a research study in the past 6 months. We will conduct a training session with both facilities to ensure they understand the recruiting requirements and will request daily updates from them during the recruitment period.
Second, for the interviews being conducted in the DC area and by our CATI interviewers, we will use a mix of Westat’s in-house database and advertisements on Craigslist to recruit participants. While Westat’s database is primarily focused on the Washington, DC metro area, our CATI respondents do not need to be restricted to this area. As such, we can post Craigslist ads in other metropolitan areas around the U.S., including New York City, Chicago, Atlanta, and Los Angeles. We have successfully used Craigslist on several other studies to recruit both male and female victims of unwanted sexual experiences and victims of violence for cognitive testing. All potential participants will be routed to an online screener to determine eligibility. Those who are eligible to participate will be contacted by a Westat recruiter who will schedule a convenient time for the interview. Daily updates will be provided to the cognitive testing task leader.
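For illustration only, the sketch below shows the kind of eligibility and quota logic the online screener routing could follow; the field names, quota cells, and the is_eligible function are hypothetical assumptions rather than content from the draft screener:

```python
# Hypothetical sketch of screener eligibility and quota routing.
# Field names, quota cells, and targets are illustrative assumptions,
# not the actual draft screener content.
from dataclasses import dataclass

@dataclass
class ScreenerResponse:
    age: int
    gender: str            # "male" or "female"
    victim: bool           # any qualifying victimization experience
    recent_study: bool     # participated in a research study in the past 6 months

def is_eligible(r: ScreenerResponse, quota_counts: dict, quota_target: int = 5) -> bool:
    """Return True if the respondent passes basic screening and their
    recruitment cell is not yet full."""
    if r.recent_study:     # screen out "professional" respondents
        return False
    cell = (r.gender,
            "victim" if r.victim else "non-victim",
            "under 40" if r.age < 40 else "40+")
    return quota_counts.get(cell, 0) < quota_target

# Example: a 35-year-old male victim with no recent study participation
counts: dict = {}
respondent = ScreenerResponse(age=35, gender="male", victim=True, recent_study=False)
if is_eligible(respondent, counts):
    cell = ("male", "victim", "under 40")
    counts[cell] = counts.get(cell, 0) + 1
    print("Eligible: route to a Westat recruiter for scheduling")
```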
We anticipate being able to achieve the recruitment goals using these traditional recruitment methods. However, if we are unable to achieve the target number of completed interviews in hard-to-reach groups such as male victims, we are prepared to use other recruitment approaches, including engaging a national opt-in panel to administer the screener on a larger scale or partnering with a local shelter or victim services organization to identify male victims.
Cognitive Interview Distribution by Round, Mode and Characteristic*

|            | Round 1     |                |     |       | Round 2     |                |     |       |
|            | CATI-Timing | CATI-In person | Web | Paper | CATI-Timing | CATI-In person | Web | Paper |
| Under 40   | 5           | 5              | 10  | 10    | 5           | 5              | 10  | 10    |
| 40+        | 5           | 5              | 10  | 10    | 5           | 5              | 10  | 10    |
| Men        | 5           | 5              | 10  | 10    | 5           | 5              | 10  | 10    |
| Women      | 5           | 5              | 10  | 10    | 5           | 5              | 10  | 10    |
| Victim     | 5           | 5              | 10  | 10    | 5           | 5              | 10  | 10    |
| Non-Victim | 5           | 5              | 10  | 10    | 5           | 5              | 10  | 10    |
| TOTAL      | 10          | 10             | 20  | 20    | 10          | 10             | 20  | 20    |

*Note that we have not defined specific combinations of the characteristics in order to provide some flexibility during recruiting. However, a minimum of 5 male victims and 5 female victims will be included in the testing of each mode in both Rounds 1 and 2.
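As an illustrative aid (not part of the plan itself), the short sketch below shows one way recruitment progress could be tracked against the per-mode targets in the table above; the running counts and the reporting loop are hypothetical:

```python
# Hypothetical sketch: tracking recruitment against the per-mode quota
# targets shown in the table above (Round 1, Web mode used as the example).
from collections import Counter

# Target counts for one mode in one round (from the distribution table).
targets = {
    "Under 40": 10, "40+": 10,
    "Men": 10, "Women": 10,
    "Victim": 10, "Non-Victim": 10,
}

# Illustrative running tally of recruited respondents (hypothetical counts).
recruited = Counter({"Under 40": 7, "40+": 6, "Men": 4, "Women": 9,
                     "Victim": 5, "Non-Victim": 8})

# Report remaining slots so recruiters know where to focus (e.g., male victims).
for characteristic, target in targets.items():
    remaining = target - recruited[characteristic]
    print(f"{characteristic}: {recruited[characteristic]}/{target} "
          f"({max(remaining, 0)} remaining)")
```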
In advance of Round 1, and no more than 1 week before interviews are scheduled to begin, Westat will hold a 1-day training session to orient the interviewing team to NISVS and the cognitive testing efforts; present the items for testing and associated probes along with all other interview materials and procedures; allow the opportunity to conduct one practice interview in each mode; and provide detailed instruction for using the usability issue log. We anticipate that CDC will attend this training (remotely) and provide some introductory context for the interviewers. (Training dates are shown in the timeline below.)
All six interviewers selected for this study have been previously trained in cognitive testing methods and have worked on multiple cognitive testing studies, including studies on sensitive topics. All interviewers will be required to attend this training, which has two overarching goals. The first is to standardize the interviewers' approach by establishing a shared understanding of the testing objectives and sharpening interviewers' abilities to probe similarly and competently on unanticipated issues that may arise during the course of an interview. The second is to train interviewers to recognize and respond to potential respondent distress during the interview. Westat has developed a special training module on this topic, and although most of our interviewers have already completed this distress training for other studies, we will repeat it for the entire team, with ample time for questions and answers.
For Round 2, a 2-hour refresher training session will be provided. This training will include a review of protocols and revisions to the instruments.
All trainings will be held at Westat’s Rockville headquarters. Interviewers who are local will attend in person; those who are not local will attend via WebEx. CDC is encouraged to attend and be actively involved in all trainings.
The cognitive interview itself will consist of administering informed consent, administering the NISVS instrument, recording usability issues (for Web and Paper) in the issue log, administering the cognitive probes, and asking spontaneous probes as needed.
The respondent-interviewer interaction begins with a brief introduction and consent. In-person respondents will be asked for their written consent to participate in the interview, and remote respondents will be asked for verbal consent. This includes consent to be audio and video recorded.
For the questions being tested in the self-administered modes (Web and Paper), interviewers will ask respondents to fill out the questionnaire as if they had received it at home, but will interrupt at designated points in the protocol to administer the scripted probes and any emergent or conditional probes. They will also record notes as needed about usability issues in the issue log (such as confusion about how to enter a response, moving backwards in the survey, or spontaneous comments about the look and feel of the question or screen).
For the interviewer administered CATI mode, the interviewer will administer a subset of the NISVS items orally, along with concurrent cognitive probes for the items being tested. For contextual support, we will include questions leading up to the items being tested, but will not probe on those questions unless the respondent raises a concern.
During the interviews, the cognitive interviewers may take notes as needed. However, the purpose of these notes is not to create data for analysis. Instead, interviewers will limit their note-taking to notes that aid them in administering the interview (e.g., noting a comment at one question that may affect a later question or that they intend to come back and probe on). This allows the interviewer's focus and full attention to remain on the respondent's reactions to and feedback about the questionnaire.
Part of the Web protocol involves gathering feedback on instructions for how to clear one's browser history, a task intended to protect respondents from a perpetrator being able to see what websites they have visited or what the survey contains. In Round 2, some of the respondents who test the Web mode will be asked to bring their own device to the interview, such as a mobile phone or tablet. In these interviews, the interviewer will not only gather feedback on the clarity of the instructions but will also give respondents the opportunity to actually clear the browser history on the device they have brought to the interview.
Before concluding the session, the interviewer will check in with the respondent one last time for any comments or input that was not addressed earlier in the interview. At the conclusion of the session, the interviewer will hand the respondent the list of counseling resources, give the respondent the incentive payment, and request a signed acknowledgement that the payment was received.
Finally, we will take precautions throughout the interviews to ensure respondent safety and to address emotional distress. These procedures are embedded within the different protocols and attachments and include administering the informed consent process, periodically checking in with the respondent to make sure s/he is alright to continue with the interview, activating the distress protocol on an as-needed basis, and providing resources at the end of the interview. For the telephone-based interviews (the full CATI run-through, n=10 per round), we will confirm with the respondent that no one is within earshot of the interview and will provide periodic reminders to ensure no one else can hear what is being said.
As soon as possible (and no more than 5 days) after each interview is completed, the interviewer (or a trained note-taker) will use a note-taking summary template to record the data collected. Summaries will omit all information that could be used to identify respondents, each of whom will be assigned a unique identifier. Note-takers also will record all usability issues from the issue log into a spreadsheet that identifies the interviewer name, unique respondent ID, question at which the issue arose, and a description of the issue.
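For illustration only, the sketch below shows one possible record structure for the usability issue spreadsheet described above, including the high/medium/low priority assigned during data reduction; the example values, file name, and helper code are hypothetical:

```python
# Hypothetical sketch of a usability issue log entry, mirroring the fields
# described above (interviewer name, respondent ID, question, description),
# plus the priority assigned during data reduction.
import csv
from dataclasses import dataclass, asdict

@dataclass
class UsabilityIssue:
    interviewer: str
    respondent_id: str      # unique ID; no identifying information
    question: str           # item at which the issue arose
    description: str
    priority: str           # "high", "medium", or "low"

issues = [
    UsabilityIssue("Interviewer A", "R1-W-001", "Q12",
                   "Respondent could not find the Next button on mobile", "high"),
]

# Write the log to a spreadsheet-readable CSV file.
with open("usability_issue_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(issues[0]).keys()))
    writer.writeheader()
    for issue in issues:
        writer.writerow(asdict(issue))
```

A flat, spreadsheet-readable file of this kind would be straightforward to merge across interviewers and share with CDC, though any spreadsheet format would serve the same purpose.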
Westat’s recommendations for revisions to any tested items will be grounded in the data. Therefore, our standard approach to writing up interview summaries is that the note-taker must watch or listen to the interview recording and base all summarized data on it. (If a respondent refuses to be recorded, the interviewer will be asked to take detailed notes during the interview; however, we will minimize the likelihood of recording refusals by informing respondents at the recruitment phase that we would like to record the interview.) In a special training for the note-taking team, interviewers and note-takers will be reminded of this approach and informed that they are required to watch or listen to each interview recording when writing up the summary. Further, their write-ups must summarize the data generated by the respondent; the summaries will describe what the respondent said and will use verbatim quotes liberally in support of the summarized data. Any opinions or observations the note-taker wishes to include, or is asked to include, will appear in a separate section of the write-up and must be fully supported with quotes or examples from the data. If someone other than the interviewer writes up the notes, the interviewer will be required to review them before finalization to ensure the note-taker captured everything that transpired during the interview.
The large volume of qualitative data the interviews will generate, combined with the quick turnaround time for analysis and reporting, necessitates a tightly structured approach to data reduction. Because of the tight schedule between Rounds 1 and 2, our primary approach to preparing for Round 2 will be a verbal team debriefing held within 1 to 2 days of the final Round 1 interviews being completed. Westat regularly conducts these types of debriefings, in which our interviewers compile notes from their interviews and we walk through the instruments item by item so that each interviewer can comment on any issues or problems that arose. In addition to this debriefing, interviewers will enter any usability issues into the issue log on a rolling basis, no later than 5 days after the final Round 1 interview is completed. Each item in the issue log will be assigned a priority of high, medium, or low.
After Round 2, we will follow the same procedures. Using notes from the verbal debriefings, interviewer summaries from each interview, and the usability logs, Westat will produce a final report that reflects key findings from both rounds of interviewing and provides final recommendations for each mode of the instrument. The recommendations report will be organized to include the following sections: executive summary; goals and objectives of the research; recruitment methods and an overview of the participants; methods for conducting the cognitive interviews; key findings; recommendations of the final wording for the questions and reasons for the recommendations; and lessons learned and conclusions.
The Westat team will maintain active communication (e.g., weekly conference calls) with the CDC team during both Round 1 and Round 2 and between the two rounds, updating them on the status of interviews, challenges encountered, and lessons learned to date. Any challenges and barriers to success in recruitment and interviewing will be discussed collaboratively so that adjustments may be made if necessary.
Timeline

2018
Nov 16 Draft plan submitted
TBD Final plans (10 days after comments)
Nov 23-Dec 24 Draft and submit IRB package
2019
Jan 24 Obtain IRB approval
Jan 24-March 24 Obtain OMB approval
March 25-April 7 Recruitment for Round 1
April 2 Round 1 interviewer training
April 3 Round 1 note-taker training
April 8-April 26 Round 1 interviews
May 3 All Round 1 notes complete
May 3-May 17 Analysis and interim briefing on Round 1
May 6-May 17 Recruitment for Round 2
May 13 Round 2 interviewer training
May 14 Round 2 note-taker training (if needed)
May 20-June 7 Round 2 interviews
June 14 All Round 2 notes complete
June 14-June 28 Analysis and combined Round 1/Round 2 report