Westat recommendations 5.19.21


[NCIPC] The National Intimate Partner and Sexual Violence Survey (NISVS)

OMB: 0920-0822




Memo



Date: March 30, 2021

To: Asad Patwary

From: Eric Jodts, David Cantor, Ting Yan

Subject: Recommendations for Future NISVS



When considering a recommendation for the design of the NISVS, it is important to keep in mind the context that motivated the redesign work. NISVS response rates have been dropping, as they have for all other RDD surveys, and as response rates drop, the cost per completed survey rises. As noted above, our initial calculation is that the RDD component of the study was two to three times more expensive per completed survey than the ABS component. All signs point to RDD response rates continuing to decline, which threatens the viability of this methodology as a way to collect data. At the very least, it means this methodology will become even more expensive going forward.

Use the ABS sample frame with push-to-web methodology

With this context in mind, as well as the measures of quality summarized above, the best design for NISVS is the ABS, push-to-web design. There is very little reason to include RDD: its prevalence measures are much higher than those of equivalent surveys, and it is much more expensive to implement. With response rates continuing to decline, these problems will only become more severe.

The above analyses also suggest the ABS design may have a negative bias, perhaps from measurement rather than nonresponse. The evidence is preliminary and based primarily on comparisons with the NSFG measures. As with any such comparison, this measure is somewhat flawed because of the inherent differences between the ABS and NSFG designs. Nonetheless, the higher levels of missing data on the web suggest that some web respondents may not be fully engaged. Of course, measurement error is present in interviewer-administered surveys as well, so it should not be seen as a problem unique to the push-to-web methodology. Further research should be conducted on ways to promote respondent engagement on the web and on methods to measure it.

Use additional mailings

The design used in the feasibility study was limited by calendar time. As a result, it was not possible to make four contact attempts at each stage of the survey. Using four attempts is standard procedure for most surveys using postal mail as the contact method. The screening survey had three contacts: 1) initial recruitment letter, 2) reminder postcard, and 3) follow-up with paper screener. The follow-up for extended respondents had two contact attempts: 1) initial request and 2) offer of alternative mode.

The nonresponse follow-up did add a contact, but this is a different case because it changes the essential survey conditions by offering respondents an additional incentive. We recommend adding one contact to each phase of the survey: 1) for the screener, an additional follow-up that includes the paper screener, and 2) for the extended interview, an additional follow-up that offers the respondent a choice of mode.

Use the web/CATI option group

There are three primary advantages of the web/paper option group. The first is that it resulted in a slightly higher response rate (1 point) than the web/CATI group. Second, it brought in individuals with lower education, a group that is generally underrepresented in this type of ABS design. The small sample sizes make it difficult to assess whether the observed differences in who responded to each option group were significant, but prior research shows that different types of respondents tend to use the paper mode. Furthermore, the results suggest that this difference in representation may lead to respondents who are less likely to report a victimization: when comparing rates between the web/paper and web/CATI groups, the former was consistently lower. The third advantage is that the optional mode (paper) is self-administered, which is compatible with the web survey. This should minimize any effects that mode of administration (interviewer vs. self-administered) has on responses.

There are three advantages of the web/CATI option group. The first is that it yields a complete dataset for all respondents. The paper instrument was shortened and did not include all of the items on the full NISVS questionnaire; for example, it did not collect the same detail on each perpetrator as the web and CATI instruments. The second advantage is that the web/CATI group has more control over the privacy of the interview. Individuals can only be exposed to the web questionnaire by signing into the account using the appropriate username and password, whereas the paper instrument was included in a postal package addressed to the particular respondent. The third advantage is that there was less item-missing data for the web and CATI modes than for paper. The computerized instruments can provide more guidance to respondents when navigating skip instructions, which results in lower rates of item-missing data.

Approximately 80 percent (weighted) of the surveys were completed before the respondent was given the choice between modes, which limits the effect this feature of the design can have on the final results; this might change once the additional mailing recommended above is instituted. The two option groups yield very similar prevalence estimates. The decision between them comes down to trading off the slightly better representation of the web/paper group against the slightly more complete dataset that the web/CATI group offers; the web/CATI option also offers some advantages with respect to item-missing data and privacy. We recommend the web/CATI option. In our view, the completeness of the data outweighs the small gains in representation that the web/paper option offers. However, this is a close call, and it may be worth running a similar experiment on the national study to collect more data once all mailings are instituted.

Use the probability method of respondent selection

The differences between the two selection methods were negligible; both produced the same distributions for age and sex. The primary objective of the YMOF method was to boost participation of young people and males, two groups that are typically under-enumerated in surveys involving postal contacts. However, the YMOF and RBP methods were similar with respect to these two demographic groups, and there was no difference in the lifetime and 12-month prevalence rates.

The Rizzo-Brick-Park (RBP) probability method is recommended for the larger study. The probability method is preferred because it maintains full coverage of all individuals within the household. In a small number of households, the YMOF does not assign a non-zero probability of selection to every member. Given there is no difference in either response or victimization rates between the two methods, the RBP method is recommended so that all adults have a non-zero chance of selection.
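The property the recommendation rests on can be sketched in a few lines. This is an illustration of the equal, non-zero selection probability only, assuming a simple pre-enumerated adult roster; the actual RBP procedure achieves this coverage while minimizing the rostering burden, which is not shown here.

```python
import random

def select_adult(adult_roster, rng=None):
    """Select one adult from a household roster with equal, non-zero
    probability -- the coverage property the memo cites for the RBP
    method, not the full RBP algorithm. Returns the selected adult and
    the selection probability; its inverse is the within-household
    weighting factor.
    """
    if not adult_roster:
        raise ValueError("roster must list at least one adult")
    rng = rng or random.Random()
    prob = 1.0 / len(adult_roster)  # same non-zero probability for everyone
    return rng.choice(adult_roster), prob
```

For a three-adult household, each adult is selected with probability 1/3 and would carry a within-household weight of 3; a one-adult household is selected with certainty.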

Include items on the NISVS questionnaire that allow assessment of representation and bias

The feasibility study included a number of additional measures to compare against other surveys: the ACS (internet use, born in the United States, home ownership); the NHIS (mental/emotional problems, hospitalization, have asthma, have been depressed, any adult injured); and the NSFG (forced vaginal intercourse by a male, by a female, and oral/anal sex by a male). These items were useful for the analysis of data quality on the NISVS, and these measures, or at least a subset of them, should be included in the survey. The measures used on the NISVS should be coordinated with the most recent versions available from each respective survey.
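Once collected, these benchmark items support a straightforward bias assessment: compare the weighted NISVS estimate for each item against the published figure from the reference survey. A minimal sketch of that comparison (the figures below are hypothetical, not actual estimates from any of these surveys):

```python
def relative_bias(survey_estimate, benchmark_estimate):
    """Relative difference of a weighted survey estimate from an
    external benchmark (e.g., an ACS home-ownership rate). Positive
    values mean the survey estimate exceeds the benchmark."""
    return (survey_estimate - benchmark_estimate) / benchmark_estimate

# Hypothetical figures for illustration only: survey 60% vs. benchmark 65%.
bias = relative_bias(0.60, 0.65)
```

A table of these relative differences across the ACS, NHIS, and NSFG items gives a compact picture of representation and measurement problems in each data-collection round.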

Include more items to measure the attention of the respondent on the web survey

The feasibility instrument included two items to assess whether the respondent was carefully reading the questions, one placed at the beginning of the survey and the other at the end. More, or at least different, items should be placed on the instrument, including one in the middle of the survey. Options include an instructed-response attention check (e.g., pick "x" from the list displayed) and an open-ended item at the end asking about a general topic (e.g., health policy issues), which would be used to see whether respondents put in coherent answers or non-sequiturs.
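Scoring instructed-response checks of this kind amounts to comparing each check item against the answer an attentive respondent would give. A minimal sketch, with hypothetical item names not taken from the NISVS instrument (reviewing the open-ended item for coherence would still require human judgment and is not shown):

```python
def failed_attention_checks(responses, expected):
    """Return the instructed-response items a web respondent answered
    incorrectly. `expected` maps each check item (e.g., a 'pick "x"
    from the list displayed' item) to its required answer; item names
    here are hypothetical, for illustration only."""
    return [item for item, answer in expected.items()
            if responses.get(item) != answer]

# Respondent passes the opening and closing checks but misses the midpoint one.
failures = failed_attention_checks(
    {"check_begin": "x", "check_middle": "b", "check_end": "x"},
    {"check_begin": "x", "check_middle": "x", "check_end": "x"},
)
```

The count of failed checks per respondent can then feed a data-quality flag or a sensitivity analysis that re-estimates prevalence with inattentive respondents excluded.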



