Attachment L
NISVS Workgroup Summary
To comply with the OMB’s remaining terms of clearance for 2016, CDC collaborated with BJS in convening a workgroup to obtain expert feedback and input on how to enhance the NISVS survey methodology. The purpose of the workgroup was to obtain input from consultants with a diverse mix of highly specialized expertise and insights, to ensure that the NISVS survey remains state of the art and benefits from the most current advancements in survey-based data collection and research. This input is to be used as part of the ongoing improvement process for the NISVS system as CDC seeks to provide high-quality data to inform violence prevention efforts. The NISVS workgroup, convened through the National Center for Injury Prevention and Control’s Board of Scientific Counselors (BSC), was charged with discussing relevant issues and proposing options for the BSC to consider related to improving NISVS methodology.
The working group was charged with addressing key issues in the following areas:
Response Rates: How do we increase response rates in a dual-frame random-digit-dial (RDD) telephone survey? How can incentive structures be enhanced?
Non-response Bias and Other Sources of Error: What are some effective ways of dealing with non-response bias?
Sampling Frame: Is a dual-frame RDD telephone survey the best mode of administration for a survey, such as NISVS, that involves sensitive topics?
Survey Administration/Selected Methodological Issues: Are there ways to improve the call protocol to enhance respondent safety, comfort, and disclosure? How can NISVS continue to increase the reliability of state estimates?
Maximizing Opportunities for Federal Collaboration in Data Collection: How might federal surveys such as NISVS and NCVS be effectively positioned to operate interdependently? Are there other opportunities for collaboration?
Outlined below is a detailed synopsis of the discussion and recommendations provided by the NISVS Methodology Workgroup.
Topic: Inclusion of lifetime and 12-month estimates
A panelist echoed the case for including lifetime data, because in some cases there is concern about the precision of estimates. Looking at a short timeframe has some benefits, but many of the health and mental health effects are very persistent, and not asking about occurrences more than a year in the past misses a great deal of information. He emphasized that both are important.
Recommendation: Keep lifetime and 12-month estimates.
Sampling Description: Frame, Sample Size, Land/Cell
Topic: Cell Phone as a personal device – Should we add a question regarding shared cell phones?
Panelist 1: A cell phone (CP) should not be considered a personal device. Thinks older individuals may share CPs. In favor of a random selection procedure when CPs are shared. Concern: coercion/controlling relationships, and controlling the CP to control access to other people; concerned about someone checking the numbers. In terms of a shared phone, we should just select the person answering the phone.
Panelist 2: Treat a CP the same as a landline (LL). Perhaps a question could be added to the survey as to whether or not the CP is shared; this might provide insight into how large the problem is. In favor of this as opposed to waiting for the respondent (R) to volunteer that the phone is shared. Seemed to think all agreed – the issue is what to do with this information.
Panelist 3: Suggested that if only a small proportion of CPs were shared, e.g., 5-10%, the burden of random selection on interviewers might not be justified by such a small gain. If a larger proportion were shared, he might alter his opinion.
Panelist 4: It is not practical to try to re-contact the other individual via cell phone; this establishes yet another hurdle to reaching R, which could impact response rates.
Recommendation: Add a question to the survey to provide insight as to how often the cell phone is shared (understand the extent of the problem).
Topic: Cell Only Frame vs. Dual Frame
Panelist 1: Things are moving rapidly toward a CP-only frame design; the proportion of cell-phone-only households depends on the population, and it will not be long before it is a CP-only universe.
Panelist 2: As the population changes in terms of more CPs, more of the frame is allocated to CPs, making the selection proportionate. This is best in terms of the variance. Dropping the LL entirely will create a bias issue. However, it may be better in terms of mean square error to drop the LL frame, because the weight variation associated with a dual frame inflates variance. Single frame may be better for some estimates, and dual frame for others. This could vary by state (single frame for some, dual for others).
Panelist 3: Were it not for her concern regarding the restrictive access (in situations where controlling behavior is present), she would be in the CP-only camp. However, given this concern, she is afraid we might lose some important respondents.
Recommendation: While usage is rapidly changing with an increase in CP-only households, stick with the dual frame for the data collection beginning in March 2018. There is still some concern that we could be missing important respondents if we switch to an all-CP frame at this time.
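To make Panelist 2's mean-square-error trade-off concrete, the minimal Python sketch below compares a CP-only frame, which carries coverage bias but little weight variation, with a dual frame, which removes that bias but inflates variance through unequal weighting. All numbers are hypothetical placeholders, not NISVS figures.

import math

def unequal_weighting_deff(weights):
    """Kish's approximate design effect from weight variation: 1 + CV^2."""
    n = len(weights)
    mean_w = sum(weights) / n
    var_w = sum((w - mean_w) ** 2 for w in weights) / n
    return 1 + var_w / mean_w ** 2

def mse(bias, base_variance, deff=1.0):
    """Mean square error = bias^2 + design-effect-inflated variance."""
    return bias ** 2 + base_variance * deff

# A CP-only frame misses LL-only households (coverage bias) but has tight
# weights; the dual frame covers both but carries variable overlap weights.
dual_frame_weights = [1.0, 2.5, 0.8, 3.1, 1.2, 0.9, 2.0, 1.1]  # hypothetical
print("CP-only MSE:   ", mse(bias=0.010, base_variance=0.0001))
print("Dual-frame MSE:", mse(bias=0.0, base_variance=0.0001,
                             deff=unequal_weighting_deff(dual_frame_weights)))

Which design wins depends entirely on the size of the coverage bias relative to the weight variation, which is why the panelists note the answer may differ by estimate and by state.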
Topic: Phase Sampling
Consider changing how phase sampling is done. One suggestion: release a large amount of sample initially, with fewer calls, perhaps 5, and no incentive; phase-2 might then consist of selecting 50% of non-respondents for follow-up, with increased call efforts, perhaps 30 calls, and incentives that vary by demographic group.
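A minimal sketch of this two-phase flow, under assumed parameters: the call caps, the 50% subsampling rate, the incentive amounts, and the toy response model below are illustrative, not a fielded protocol.

import random

PHASE1_MAX_CALLS = 5         # phase-1: many cases, light effort, no incentive
PHASE2_MAX_CALLS = 30        # phase-2: intensive follow-up
PHASE2_SUBSAMPLE_RATE = 0.5  # half of phase-1 non-respondents followed up
INCENTIVE_BY_GROUP = {"18-29": 20, "30-49": 10, "50+": 10}  # assumed dollars

def attempt_interview(case, max_calls, incentive, rng):
    """Toy stand-in for fieldwork: more calls and a larger incentive
    raise the chance of a completed interview."""
    p = min(0.05 * max_calls + 0.005 * incentive, 0.9)
    return rng.random() < p

def two_phase(sample, rng=random.Random(0)):
    done, pending = [], []
    for case in sample:                       # phase-1: no incentive
        (done if attempt_interview(case, PHASE1_MAX_CALLS, 0, rng)
         else pending).append(case)
    k = int(len(pending) * PHASE2_SUBSAMPLE_RATE)
    for case in rng.sample(pending, k):       # phase-2 random subsample
        case["weight_factor"] = 1 / PHASE2_SUBSAMPLE_RATE  # represents all
        if attempt_interview(case, PHASE2_MAX_CALLS,
                             INCENTIVE_BY_GROUP[case["group"]], rng):
            done.append(case)
    return done

sample = [{"id": i, "group": g} for i, g in
          enumerate(["18-29", "30-49", "50+"] * 200)]
print(len(two_phase(sample)), "completes out of", len(sample))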
Topic: Adaptive Design to mitigate nonresponse bias
Panelist 1: If demographic groups are related to key survey outcomes, they can be used as proxies for nonresponse bias. If we have a list of numbers that contains selected demographic information such as race, age, and sex, the number of call attempts can be set such that response rates across different demographic groups are equal. This helps reduce bias; call attempts could vary by demographic group. It also makes weighting adjustments more efficient. The goal is to raise low response propensities to a higher level and cut back calls when response propensity is high. This may lower response rates overall, but there will be less nonresponse bias.
Panelist 2: Asked if adaptive design might involve different selection criteria, call protocols and incentive structures. Another panelist confirmed that this was the case.
Panelist 3: Liked the ideas proposed by another panelist from a theoretical standpoint. However, given that 70% of the target frame is CP, we will not have a list, and the demographics needed to employ such an approach would be unknown.
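As a concrete illustration of Panelist 1's proposal (and subject to Panelist 3's caveat that the CP frame lacks the demographic list this requires), a sketch with hypothetical response rates, target, and call caps:

TARGET_RATE = 0.30                       # assumed design target
BASE_CALLS, MIN_CALLS, MAX_CALLS = 8, 3, 30

def next_call_cap(current_rate):
    """Give more attempts to groups responding below the target rate and
    fewer to groups already above it, pushing group response rates toward
    equality rather than maximizing the overall rate."""
    if current_rate <= 0:
        return MAX_CALLS
    cap = round(BASE_CALLS * TARGET_RATE / current_rate)
    return max(MIN_CALLS, min(MAX_CALLS, cap))

# Hypothetical mid-field response rates by demographic group.
current_rates = {"male 18-29": 0.12, "female 18-29": 0.18,
                 "male 30+": 0.33, "female 30+": 0.41}
for group, rate in current_rates.items():
    print(group, "->", next_call_cap(rate), "max attempts")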
Topic: Testing what appears on caller ID and use of text exchanges
CDC: We can’t be sure what shows up in caller ID, as it is dependent on the carrier; currently, interviewers dial from a 1-800 number, and if they reach an answering machine, a message is left with a call-back number. Two other ideas were discussed in the meeting: (1) using a non-800 number, and (2) pushing text forward on the caller ID (e.g., Health/Injury Study).
Panelist 1: Would be surprised if people felt tricked by this change in phase-2. Also would be surprised if this yielded improvement.
Panelist 2: In favor of testing the non-800 number against the text. Does not think that young people listen to voicemails; they can see the number for a missed call and call back if they recognize it. Regarding the advance mailing, people don’t always open their mail, and in some areas mail is stolen.
CDC: Suggested a 3-arm experiment in phase-2 where call-backs are randomly assigned to one of three groups: the 1-800 number group, the non-800 number group, and the text message group.
CDC: Wondered about the ability of the person who missed the call to text the caller (interviewer) back; he thought young people would respond positively.
Panelist 3: Liked the idea of the 3-arm trial. Wondered if there was a way to work with carriers nationwide so that there was consistency in what was displayed. He also wondered if there could be a CP text message asking someone if they are available to speak, or even conceptualizing a text as an advanced letter.
Panelist 4: Wondered about the value of increasing the response rate by a small amount, say 2%, and whether our focus was on the wrong issue. We should be looking at non-response bias, as response rate is not a measure of bias (we should be looking at the R-Index) and, in that vein, work the sample to achieve balance. Also pointed out that while we can get around the non-contact problem on the LL frame by sending an advance letter, a text might help get around the non-contact problem when cell phones block calls. Also pointed out that there are some better ways of estimating the eligible among non-contacts (advances over what AAPOR uses); this would alter the denominator and increase response rates.
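For reference, the R-Index Panelist 4 mentions is commonly computed as the representativeness indicator R = 1 − 2·S(ρ), where S(ρ) is the standard deviation of estimated response propensities. A sketch with hypothetical propensities:

import math

def r_indicator(propensities):
    """R = 1 - 2 * S(rho): near 1 when propensities are uniform (a balanced
    respondent pool), lower as propensities spread out (more bias risk)."""
    n = len(propensities)
    mean_p = sum(propensities) / n
    s = math.sqrt(sum((p - mean_p) ** 2 for p in propensities) / (n - 1))
    return 1 - 2 * s

balanced = [0.31, 0.29, 0.30, 0.32, 0.28]   # similar propensities
uneven = [0.05, 0.55, 0.10, 0.60, 0.20]     # same average propensity...
print(round(r_indicator(balanced), 3))      # ~0.97: representative
print(round(r_indicator(uneven), 3))        # ~0.49: same response rate,
                                            # far less representative

This is the panelist's point: two designs with identical response rates can differ sharply in nonresponse-bias risk, so working the sample for balance matters more than chasing a 2% rate gain.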
Panelist 5: Pointed out that we do not currently ask whether or not the respondent received/read the advance letter.
All: Discussion about response rates and why we are focused on these. Political/practical necessity to report (credibility of data), potential to reduce nonresponse bias, some journals will reject outright if too low. Most doubted the high response rates reported by some studies, and were skeptical about an RDD attaining something higher than 33%.
Recommendation: There seemed to be some support for either a 2-arm or a 3-arm experiment in phase-2 for the data collection beginning in March 2018 – that is, call-backs using a 1-800 number, a non-800 number, and a text message.
Recommendation: There also seemed to be some interest in a text exchange between the interviewer and the cell phone respondent, either briefly describing this as a health and injury study and then asking if R can talk, or going one step further and using text for a CP the way we use advance letters for the LL. This would allow us to make them aware of the incentive. If yes, this could be implemented for the data collection beginning in March 2018.
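A minimal sketch of the random assignment behind the recommended experiment; arm labels are illustrative, and the shuffle-then-rotate scheme simply yields near-equal arm sizes.

import random

ARMS = ("1-800 callback", "non-800 callback", "text message")

def assign_arms(case_ids, seed=2018):
    """Shuffle the callback cases, then deal them round-robin so each
    experimental arm receives a near-equal share."""
    rng = random.Random(seed)       # fixed seed: reproducible assignment
    ids = list(case_ids)
    rng.shuffle(ids)
    return {cid: ARMS[i % len(ARMS)] for i, cid in enumerate(ids)}

print(assign_arms([f"case-{n:03d}" for n in range(1, 7)]))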
Understanding Non-Response Bias
Topic: Benchmarking
Panelist 1: Likes the idea of adding a very specific measure to another survey that would be more directly comparable than using existing measures. Also indicated that while there is not much to compare with domestically, we might be able to look at international studies in high-income countries that are the US’s closest counterparts; they have more detailed measures of intimate partner violence (IPV) and sexual violence (SV), with questions more comparable to those in NISVS. Another possible comparison: NCVS just wrapped up a stalking victimization supplement. BJS also did a study in 5 cities comparing Audio Computer-Assisted Self-Interview (ACASI) software and RDD. The questions are not comparable to NISVS, but a comparison could provide insight into differences in estimates from the 2 approaches.
Panelist 2: Indicated that the most critical items to benchmark would be measures of sexual assault and other types of IPV. Not much is out there. He did indicate that the World Health Survey (WHS) might provide a reasonable benchmark (did not include attempted sexual assault/rape; not sure about stalking. Not limited to intimates).
CDC: What about other sensitive topics for comparisons – e.g., HIV?
Panelist 3: Look into NCHS’s NSFG (National Survey of Family Growth) which deals with a lot of sexual issues [for possible benchmarks].
Recommendation: Identify other sources for benchmarking that include either questions similar to those asked in NISVS, or at least sensitive questions, and list variables that could be compared. We could also make comparisons of those benchmarks with the current data collection. The WHS and NCHS’s NSFG might be viable options. Information from the BJS 5-cities study on estimates using different modes of data collection, once released, might provide insights into why our estimates using RDD may differ from estimates generated using data collected via other modes.
Do the panelists have any other suggestions for specific surveys or types of questions?
Topic: Other Means to assess Non-response Bias
Panelist 1: Referenced the RAND military study, which used some innovative methods for reaching late responders (the initial attempt was through a web survey), but noted that they had information on all those invited to participate – most studies do not.
Panelist 2: Could compare the design-weighted (for selection probabilities) data with an external source such as the ACS on factors such as age distribution, race, sex, household size, etc. (Note: NISVS currently does this, but we may not be using that information to its fullest.) This would tell us who we are missing and might provide insight into non-response bias. Then consider other external totals in post-stratification to determine how much bias there is in the final weighted estimates. He noted that we are creating weight variation with the institution of Phase-2. Rather than just combining, he suggests getting a Phase-1 estimate, a combined Phase-1 and Phase-2 estimate, and then calibrating somehow to retain the advantages associated with variance reduction (Phase-1) and reduction in bias (Phase-1 and Phase-2, which includes late responders). Calibration techniques could then go beyond external totals and use questionnaire variables. The resulting estimates are then ready to compare with benchmarks. Another panelist will share references with us. In response to a question by one panelist, Panelist 2 indicated that for the LL frame, we might look into what variables predict response propensity; predictors of response propensity may then be useful in the adjustment process.
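A stripped-down sketch of the comparison-and-post-stratification step described above; the ACS control totals, age cells, and design weights are invented for illustration.

# External control totals, e.g., from the ACS (hypothetical numbers).
acs_totals = {"18-29": 52_000_000, "30-49": 83_000_000, "50+": 115_000_000}

# Toy respondent file: (age group, design weight for selection probability).
respondents = [("18-29", 900.0), ("18-29", 1100.0),
               ("30-49", 1000.0), ("30-49", 1050.0), ("30-49", 950.0),
               ("50+", 1200.0), ("50+", 800.0)]

# Sum design weights within each post-stratum to see whom we are missing.
weighted_sums = {}
for group, w in respondents:
    weighted_sums[group] = weighted_sums.get(group, 0.0) + w

# Post-stratification factor = external total / weighted sample total;
# a cell whose factor sits far above the others flags under-coverage there.
factors = {g: acs_totals[g] / weighted_sums[g] for g in acs_totals}
calibrated = [(g, w * factors[g]) for g, w in respondents]
for g in factors:
    print(g, "factor:", round(factors[g]))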
Panelist 2: Indicated that BJS did a NISVS-improved version of a CATI interview with an RDD, which included behaviorally specific screeners. They will be comparing results with an ACASI approach – results on different modes of collection will be out later this year.
Panelist 3: What about a small follow-up study of non-respondents with a few questions, say 4, capturing key information related to nonresponse, especially nonresponse that might be associated with our outcomes? This would be Phase-3 and would offer an increased (e.g., $50) incentive.
Panelist 4: Regarding “Phase-3”, indicated that RTI has done this without an increased incentive, and that this approach may offer insights into how much bias there is in our estimates. It can also be used in non-response adjustment.
CDC: Some states do a quick follow-back text survey where respondents answer with a button press. This may work for CPs.
Panelist 3: Wondered if sensitivity analyses had been performed on weight adjustments as a means of identifying non-ignorable bias. Omitting a variable and seeing its impact on the weights provides insight into non-ignorable non-response – non-response related to variables not being used in the response propensity models. It would be important to understand the response propensity for both victims and non-victims. Some work has been done in this area and he will provide a reference.
Panelist 4: Agreed that it is just as important to gain participation from non-victims as it is from victims.
Recommendation: Add an experimental Phase-3 to the NISVS call-back protocol for non-respondents, whereby additional efforts are made for a subset of non-respondents in phase-2, possibly at a higher incentive level, and a few basic questions asked that capture key information related to non-response, especially items associated with our outcome.
Topic: Do people know who we (CDC) are?
Panelists 1 and 2: Do people know who CDC is? Perhaps add “CDC is the nation’s federal, governmental arm for public health protection….” (Note: it was mentioned earlier that the use of CDC and then RTI in the script may confuse some people.)
Panelist 3: Stressed the importance of not biasing the introduction by enticing or scaring people off based on some characteristic.
Recommendation: Consider altering the introductory script to add a sentence explaining who we (CDC) are. Consider dropping reference to RTI.
Strategies Used to Promote Respondent Safety, Comfort, and Disclosure
Topic: Effectiveness and receipt of advance letter, using official stationery
Panelist 1: Asked whether any studies had been done to determine whether the advance letter is effective in terms of response rates.
CDC: Indicated that while they have not studied this for NISVS, they did for an earlier study, pre-dating NISVS, on the Washington, DC sniper shootings that occurred about 12 years ago. An advance mailing was done there, respondents were asked whether they received it, and there were benefits from it.
Panelist 2: Said he thought the literature supported that. Another thing to consider would be experimenting with pre-incentives: the chances that someone will open the letter and read it are increased if $1 is enclosed in the envelope. While that is not a large portion of the sample, it could be an important group and could help to reduce non-response bias.
CDC: Asked for suggestions about what to include on the cover of the letter to keep it from going in the junk pile immediately, but also that would not cause it to be stolen.
Panelist 2: Said the Census Bureau had done some experimentation with this using a variety of types of envelopes that ranged from flashy and colorful to mundane white with the Census Bureau seal. The winner was the mundane white envelope with the Census seal because that looks official; whereas, the glossy envelopes developed by the marketing people went right into the trash. He suggested having the CDC logo on the envelope and making it look as much like a government letter as possible. This may at least cause them to peek inside, see the dollar, and be enticed to read the letter.
Recommendations:
Experiment with pre-incentives to increase likelihood of opening and reading the advance letter (e.g., insert $1 in envelope)
Use official stationery on advance letter mailings (letterhead, envelope)
Add question about whether the letter was received and read.
Topic: Data quality, validity of responses
Panelist 1: Typically, well below 1% of people experience any distress. Questions can be incorporated into the end of the survey asking whether anything upset the person, whether they are still upset, whether they would like to speak with anyone about it, et cetera. Counselors are on call all of the time, but they never receive calls because it is not that big of an issue. Letting the interviewers know that they are not harming people by asking these questions is important.
Panelist 2: They [University of Michigan and others] ask questions at the end of the interview that provide some hints about whether people have been fully disclosing on sensitive topics. Questions might be posed such as, “Do you think you were able to be completely honest in this interview?” While it might be anticipated that people would always say “yes,” some people say they were not or that they felt uncomfortable. Those reactions can be recorded and the interviewer can code them. People get honest when answering these questions, and the answers can be constructive. Consideration might be given to other questions that might be asked at the end of the NISVS interview about issues of potential concern, such as: Did you feel safe in answering these questions? Were these questions upsetting to you?
Panelist 3: Indicated that BJS has experience with that from measuring rape and sexual assault in prisons and jails. They include a series of debriefing items and have found people to be remarkably honest in saying they were not honest. Some people do not wish to report everything or were not entirely truthful, and some felt the interview might have been too complex, confusing, or too long. The debriefing items are very useful in assessing data quality.
CDC: Liked the suggestion made earlier about including some questions at the end. Though he was not certain exactly how to word it, it could be something along the lines of: Were there any questions where you indicated you hadn’t experienced something, but you really had and didn’t feel comfortable talking about it? Some variation on that might provide insight into who is saying “yes” to that: older folks, younger folks, minority groups, et cetera.
Recommendation: Add question(s) at the end of the survey about honesty, discomfort disclosing, whether survey content was upsetting, and whether the respondent felt safe.
Topic: Increasing disclosure, participation
Panelist 1: Expressed her hope that people who answer these surveys or who do identify themselves as having been victimized do not carry with them the burden that they are damaged goods forever. This is the message that society gives them. Perhaps before the victimization questions, a line could be inserted that says, “People have these experiences. They are difficult. It takes time to heal, but people survive and can even thrive.” It is so hard to answer these questions, even at the very end, and it should not be that someone answers a question at the very end and the interviewer just says, “Here’s a number. Goodbye.”
CDC: Asked whether Panelist 1 was suggesting that including these types of statements before the actual questions or at the end would help with disclosure or just be accommodating. Panelist 1 thought it might be both.
Panelist 2: Suggested that another way to do this would be to highlight the importance of people disclosing this or getting information from everybody who has experienced something like this. It could have been anytime in someone’s life. Anybody could have done it. It might have had a big or little effect. It is important to get the range so as not to poll only people who are totally devastated. That has to be carefully worded. Many people are resilient or it might strike someone the wrong way because they are not resilient. Perhaps a statement could be made to the effect of, “The data we are getting are really important. We use these data to help other folks.” He personally believes that is one of the main reasons people decide to participate in these surveys and answer these types of questions.
Recommendations:
Add supportive language that emphasizes strength of victims. The language could be before the questions and/or at the end.
Add language about the importance of getting this information from everyone who has experienced it and how it’s used.
Possible Alternatives to the Current RDD Design, Research Questions to Guide Decisions, and Collaboration with BJS
Topic: Mode of data collection and Address-based sampling (ABS) – multiple issues related to these changes were discussed.
Panelist 1: Suggested thinking about it in terms of a multi-mode design – not just doing web, but maybe a LL/web/paper design. LL can be matched to the ABS frame, because there are telephone numbers in the ABS frame. They are not complete, but nevertheless, that is a possibility for the majority… That design keeps the address-based idea, but tries to maximize the telephone, uses the web to fill in, and uses paper when someone does not have web. … The literature on web/mail surveys says that giving sample members a choice of modes, such as mailing an introductory letter with an enclosed questionnaire and providing a web URL, somehow reduces response rates; however, it is not clear why – perhaps it is a confusion factor. Incentivizing respondents to use the web and telling them that it is saving money is the Choice Plus option: Choice means giving them an option of web or mail at the introduction of the survey; Plus is incentivizing them to go to the web.
Given the content of NISVS, CDC said they had some concerns about a mail survey. If a household receives a letter with a survey out of the blue, there is a risk that the perpetrator of violence would wonder who the victim had been talking to and why they are getting this in the mail, increasing the risk for violence.
Panelist 2: Acknowledged that the NISVS content is different from other content in the literature; those considerations have not been explored much. One possibility is just to use the web. Both the LL and the web have some problem with coverage, but perhaps the combined coverage is not so bad – perhaps it would even be better than cell only… Whatever process or options are considered will require a lot of testing.
Panelist 3 [on ABS]: ABS is a much better way to control the sample than an RDD, and it is a way to establish the location, certain characteristics of the sample, and things that can be adjusted for in terms of non-response. ABS has significant virtues and BJS is looking at that not only for its work in rape and sexual assault, but also with the Office of Juvenile Justice and Delinquency Prevention (OJJDP) on children exposed to violence and moving away from an RDD to an address-based collection in that regard.
Sensitive topics
Panelist 1: Thought there had been some large web surveys to collect data on sensitive topics; the RAND study was probably the biggest one. It did not cover all types of IPV, but it certainly covered sexual assault and sexual harassment. … It turned out to be feasible. People who complained about the language tended to be people who had not been victims; it was not that they were upset about the language – they thought other people would be upset about the language, which is an interesting phenomenon. At any rate, he thinks it is a viable data collection approach. CDC could easily program NISVS using a platform such that it would come up on a CP and scale to another type of computer with all the skip patterns. If NISVS averages 25 minutes, it probably would go faster on the web than it would if someone had to read all of the questions to people. There are literacy issues and other details that would have to be worked out.
Ensuring a representative sample
Panelist 2: Regarding moving to an ABS with a push to the web, she wondered about the ability to get the most at-risk women. That concern is predominant throughout this research. It is not clear what problems the push to web would pose, but that does have to be kept in mind. Literacy, limited access to the internet, and the need for up-to-date software must be considered.
Panelist 1 [on pilot testing]: If CDC has any discretionary budget at all, it might be possible to use a web panel. Some are probability-based, in which ABS or other methods are used to locate people who are then set up as a web panel that participates in a variety of research. The cost is typically quite low. If CDC wanted to know what modes of control people try to use in the general population, such as looking at one’s cell phone or watching what one does on the internet, they could probably conduct a separate study fairly inexpensively that is not designed to be a science paper; instead, it is designed to provide some information internally about anything that might be an issue.
Panelist 3: He also wondered about drawing a sample from an ABS frame, sending an introduction letter, giving them a URL, and also providing a telephone number option. It would be costly, but could be tested. A telephone facility would have to be staffed from 8:00 pm to 9:00 pm to deal with the West Coast, perhaps 5 days a week. The cost would depend upon how many people would have to be staffed. But if concerned about paper, why not give people an option to do it by phone? They would still be incentivized. This would be open to the people who do not have the web option, so maybe it would not get that many people.
Panelist 2: Pointed out that legal immigrants are not showing up to claim benefits that are theirs. People are becoming very suspicious in this context about giving information to anyone and the government.
Panelist 3: He stressed the importance of “thinking out of the box” for this particular subject matter. He suggested doing a Choice Plus design, trying to push people to the web by incentivizing that over paper, but including in the paper questionnaire only information that is somewhat sensitive, and then asking at the end of the survey whether the respondent would be willing to complete an interview about more sensitive topics, offering to incentivize them again, and getting their telephone number. The number of people who would be willing to do this would depend upon the incentive offered and their financial situation, but if they decide they do not want to do that, at least there will be some information on which to base non-response adjustments. It is possible that this also would reduce the total survey error (TSE).
Panelist 4: Agreed with Panelist 2 on her sobering assessment of what is occurring across the population. In the spirit of the formative evaluation he was hearing the experts conduct with respect to the design choices and challenges, he wondered whether some formative engagement with potential respondents would further illuminate the issues from a respondent perspective.
While Panelist 1 thought this was a really good idea, the caveat is that it should include some actual experience with different modes of data collection or different ideas about it, which would be easy enough to do… It might be good to develop brief surveys that exemplify collecting data on the web, being interviewed by telephone, et cetera, and then get responses from people. There is a lot of fear in the country, regardless of immigration status, that seems to be pervasive. Someone coming to the door saying they are from the government to collect information would not be any better than getting an invitation by phone or letter to do that.
Ensuring comfort and safety with web mode
Panelist 1: Just as people did with the telephone initially, consideration must be given to how to make this safe and convenient such that people feel comfortable disclosing things they ordinarily do not want to talk about. There are some examples of how people have done this with web-based methods, in terms of deleting the browsing history, quick-escape buttons, et cetera. It would be a good idea to review these, and pilot research in these areas, to determine what might be worth exploring on a larger scale. If only interested in the web-based mode part of that, it is possible to get samples of people who are already on the web with whom research can be done very cheaply. The sample will not necessarily be random, but it will be large, and a lot of experimentation could be done on it very cheaply.
CDC: Wondered whether there were some specific types of questions Panelist 1 wished he had had the resources to answer.
Panelist 1: Indicated that he recently priced something from Survey Sampling International (SSI). While this is not a probability web panel at all, they will draw a sample of about 3,000 people for approximately $4 per interview, or $12,000 for a 20-minute complete. CDC could tell SSI that they would like to know whether people really know how to clear their browsers with different types of instructions. Something could probably be set up to see whether people know how to do that, or follow safety instructions, et cetera. Many other little things could be done along this same line; for example, instructions about participating in a study could be varied, or different safety instructions could be given. This is one way to pilot some things.
Panelist 3: He thought that with some modest investments, some web testing could be done. Web surveys are not expensive relative to interviewer-assisted surveys. He suggested experimenting along the lines of what EIA did in terms of looking at different mode options. They looked at web-only, a sequential web followed by Choice, Choice alone at the beginning rather than the sequential design concurrently, the Choice Plus design, and various incentive options.
What other suggestions do the panelists have regarding the key questions to address during this development and piloting phase? This would likely mean a gap in data collection during this period.
Topic: Rostering and respondent selection
CDC: Asked how to ensure that the right person is the one going to the web.
Panelist 1: Replied that the biggest problem with that kind of mode is basically within-household selection. Don Dillman will admit that there are not very good ways to do this. The birthday method can be used, either last or next birthday. Or a rostering method could be used, which is more cumbersome: whoever opens the envelope is asked to list all of the people in the household, and one is picked at random to receive the money. There is a bias associated with this, and that is why it must be considered from a TSE perspective. The TSE idea is that the advantage of ABS will outweigh possible complications.
Panelist 2: Said that, in fairness, that probably also is true of rostering with telephone surveys. He was trying to conduct a sexual assault study with this, and if he can get funded, he will go ahead and do it. He had to do a lot of research on it and was told that ABS could use two different methods for determining who is interviewed. Some people just use next-birthday, while others use a two-stage process in which someone is told about the $50 and has to send the roster card back; the investigators then select randomly who will complete the survey. While that is cleaner, it is obviously more expensive. Something else that bears consideration is that once a self-administered web survey is designed, the cost of collecting the data is minuscule compared to having an in-person interviewer or someone interviewing respondents over the phone in an RDD study.
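A sketch of the two within-household selection methods discussed; the household data and contact date are hypothetical, and a real roster protocol carries many more safeguards.

import random
from datetime import date

def next_birthday_pick(members, today=date(2018, 3, 1)):
    """Next-birthday method: select the adult whose birthday falls soonest
    after the contact date."""
    def days_until(month, day):
        bday = date(today.year, month, day)
        if bday < today:
            bday = date(today.year + 1, month, day)
        return (bday - today).days
    return min(members, key=lambda m: days_until(m["bmonth"], m["bday"]))

def roster_pick(members, rng=random.Random(42)):
    """Full-roster method: equal-probability random draw from all listed
    adults (cleaner, but requires collecting the roster first)."""
    return rng.choice(members)

household = [{"name": "adult A", "bmonth": 7, "bday": 4},
             {"name": "adult B", "bmonth": 2, "bday": 14}]
print(next_birthday_pick(household)["name"])  # adult A: July 4 comes first
print(roster_pick(household)["name"])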
Does the panel have any additional recommendations for how to ensure respondent selection is appropriate and safe? Any ideas for experiments?
Topic: Panel design
Panelist 1: When dealing with a panel survey, more of a commitment is required, so incentives will have to reflect the fact that people are being asked to commit to doing it again. To trade off cross-sectional versus longitudinal estimation, consideration could be given to a split panel in which 50% of the sample would be refreshed every year; this portion would be similar to what NISVS is doing currently with a random sample. The other 50% would be longitudinal, such that the same sample members would be carried over for at least one year. That type of split-panel design is a compromise between getting good coverage for cross-sectional estimates and supporting longitudinal estimation, given that there is panel attrition.
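A toy sketch of the 50/50 split-panel refresh described above; the frame and sample sizes are invented, and a real refreshment scheme would also handle attrition and re-weighting.

import random

def refresh_panel(previous_panel, frame, size, rng=random.Random(0)):
    """Carry half of last year's panel forward (the longitudinal half) and
    draw the other half fresh from the frame (the cross-sectional half)."""
    carried = rng.sample(previous_panel, size // 2)
    fresh_pool = [u for u in frame if u not in previous_panel]
    fresh = rng.sample(fresh_pool, size - len(carried))
    return carried + fresh

frame = [f"unit-{i}" for i in range(1000)]
year1 = random.Random(1).sample(frame, 100)
year2 = refresh_panel(year1, frame, size=100)
print(len(set(year1) & set(year2)), "of 100 units carried over")  # 50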
What other ideas do the panelists have about this?
Topic: Instrument design
CDC: Emphasized that this had all been great food for thought. The discussion was making her think that apart from the mode, there would need to be some innovation in terms of the instruments themselves. There is a tendency to be very linear and sequential in the way respondents are moved through an instrument. She wondered if anyone had any experiences from their work on other types of innovative ways to capture information on victimization experiences, health impacts, et cetera in a way that does not result in the breakoffs that occur with multiple victimizations and fatigue. Going forward, consideration must be given to how to make the best of technology.
Panelist 1: Indicated that RTI has developed web surveys that are quite different from traditional paper. They take advantage of the web by embedding pictures, videos, and short snippets of videos to illustrate concepts. There can be hyperlinks so that if a respondent does not understand a term, they can click on it and go to a definition. All sorts of things are available, which is why there is such a push to the web. That is the platform that provides the most options in terms of trying to obtain better data quality.
Recommendation: If CDC moves forward with a web-based survey, it will be important to take advantage of the unique strengths of this mode and to work with a contractor to develop an alternative version of the instrument designed for the web.
Topic: Total Survey Error
Continuing with the theme of TSE, Panelist 1 suggested developing a quality profile of the current NISVS. That means reviewing the 6 to 7 error sources within the TSE paradigm – for which there are many references that he will share with CDC – to understand what is known about each of those error sources. That quality profile will provide a picture of the total error for the current strategy. The profiling should then be repeated for alternatives, to determine where there might be benefits from alternative designs. Of course, cost must be factored in. One criticism of the TSE paradigm is that although it attempts to minimize TSE for fixed costs, cost is sometimes not prominent when quality profiles are done; it is important to keep that in mind. If a switch is going to be made to a new design, it must be cost-neutral.
Panelist 2: Agreed that there was a lot of value in Panelist 1’s suggestions. What would a perfect estimate look like? The population coverage of the people to be sampled would be as large as it could be; the questions would measure the construct of interest as carefully as possible; and flexibility would be provided in data collection modes. While there was a lot of discussion about push-to-web, consideration must be given to the people who may be left out of web surveys and how to collect their data. ABS as a method of locating individuals to participate in the survey should not be confounded with the web as a response mode, because ABS is a method to locate people and give them an opportunity to participate in other types of survey modes as well.
Recommendation: Review references recommended by Panelist 1. Examine total survey error to inform design (quality profile matrix).
Topic: Alternatives to gather state data
Panelist 1: Suggested that perhaps a shorter survey model could be used for state estimates; maybe that level of detail is not needed at the state level.
Panelist 2: Added that another design to consider would be a split panel in which the panel design would provide the core, and state supplements could be done cross-sectionally. If state estimates were needed biennially, the panel would be ongoing, and every two years state supplements could be done with a cross-sectional design to try to achieve the levels needed.
Panelist 3: Observed that one trade-off that always must be considered is whether the data needed can be acquired from other studies. One advantage of NISVS, whatever changes are made to it, is that the entire survey is focused on SV, IPV, and stalking, and in-depth measurement using state-of-the-art assessment tools for these exists. This came up with the NAS, which wanted to just add 5 questions onto existing studies to get national estimates. While they would get national estimates, they would be really bad national estimates. It depends on what level of detail and precision are desired relative to what is being measured now. He would hate to see NISVS lost. There are cons to trying to graft a probably ineffective and insufficiently sensitive assessment onto some of the other national surveys.
Recommendation: Consider split panel; assess at national and state levels.
Topic: Collaboration with BJS
Panelist 1: Indicated that BJS is undergoing a major instrument redesign for the NCVS. A major component of that is moving to a self-administered mode. They are working through all of these issues currently. In terms of a multi-mode approach, they are thinking about having an in-person component at Wave 1 in which an interviewer goes to the household, makes contact, does the rostering, and provides the selected person a laptop… The culmination of this project will be a major two-wave field test to be conducted in 2018. Some testing of the instrument will have to be done prior to that, and cognitive testing is still being done on the questions. This is a pretty wholesale redesign. One component is the idea of a modular approach in which a subset of respondents would be asked particular questions. BJS is working toward that with the development of different modules and improved screening items, including new measures of risk of sexual assault based on the work that another panelist discussed earlier. This is underway, and BJS would be happy to pull CDC into those conversations as they start moving toward developing the field test, and to share the findings from that research to help inform the NISVS approach as well. BJS is happy to collaborate on that.
CDC: Expressed gratitude for the generous offer and indicated that CDC would love to take BJS up on it. Moving forward, if BJS realizes there are specific nuances they wish they had tested, that might be an opportunity for CDC to assist as it considers the types of pilot testing to conduct in the future, building on what BJS has already done and answering some of the lingering gaps. Having those discussions could be mutually beneficial.
Panelist 2: Conveyed that OMB has been encouraging federal agencies to collaborate for some time. They also are encouraging BJS and CDC to work together in the long term to make sure that both agencies’ needs are met and that creative ways are found to collect information in one way or another.