B. Statistical Methods
1. Respondent Universe and Sampling Methods
The 2016 National Census of Victim Service Providers (NCVSP) will be a complete enumeration of the approximately 31,000 entities included on the national roster of VSPs developed for this data collection. The roster is the first list of all civilian entities nationwide that provide services to victims either as the primary function of the organization or through dedicated programs or personnel. Over the past four years, BJS and the project team of RAND, NORC at the University of Chicago, and the National Center for Victims of Crime (NCVC) have taken many steps to identify this universe of VSPs.
First, BJS and the project team consulted with experts in victim service provision, victim service funding, victimization, and special victim populations to better understand the diverse range of VSPs that exist across the nation and the potential challenges that could arise in creating a comprehensive list of providers. Based on these early conversations with the Project Input Committee (PIC), Expert Panel, and other federal stakeholders (e.g., Office for Victims of Crime, Office on Violence Against Women, National Institute of Justice), the project team broadly defined “Victim Service Provider” as “any civilian organization or entity that provides services or assistance to victims of crime or abuse.” The team identified three large categories of VSPs: (a) primary VSPs, or providers whose principal function is to provide services to crime victims; (b) secondary VSPs, or providers who assist crime victims as one of several functions but have a program, center, or specific staff dedicated to serving crime victims; and (c) incidental VSPs, or providers who might serve crime victims as part of their regular services but have no designated programs or staff for doing so. Within these categories of primary, secondary, and incidental VSPs, there is a range of different types of VSPs, including government agencies, hospitals, campus organizations, faith-based and non-profit entities, and informal groups such as survivor networks.
With this broad definition in mind, the research team began to compile an all-inclusive, national list of VSPs. The team first gathered lists of federal grantees and subgrantees who received Victims of Crime Act (VOCA), Violence Against Women Act (VAWA), or Byrne/Justice Assistance Grant (JAG) funding to provide services to crime victims, along with membership lists from national-level professional associations for victim service providers. These lists were consolidated and de-duplicated, and the NCVC membership list of self-identified victim service providers was then merged with the list of federal grantees.
Initially, a total of 43,843 records were received from NCVC and the federal grant-making agencies, including the Office for Victims of Crime (OVC), the Office on Violence Against Women (OVW), and the Bureau of Justice Assistance (BJA). Some VSPs were duplicated within records from a single source, primarily because OVC and OVW provided lists covering multiple grant years. In addition, the same VSP might appear on lists provided by two or more of the sources. De-duplication was performed by conforming VSP name and address to US Postal Service abbreviations and conventions, then performing a set of tests to identify obvious duplicates based on name, street address, and city. Street-address duplicates that did not match on name were subjected to eyes-on review. After de-duplication, the number of unique entries was reduced to 21,014, of which 11,719 are unique to the NCVC database, 4,184 are unique to OVC’s listings, and 2,090 come only from OVW’s records. The remaining 3,021 entities were found on two or more of the compiled lists, with NCVC as one of the contributors in about three-quarters of these records.
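For illustration only, the de-duplication logic described above can be sketched as follows. This is a minimal, hypothetical example, not the project's actual code: the field names, the abbreviation table, and the exact matching tests are assumptions, and the real process used a fuller set of USPS conventions and eyes-on review.

```python
import pandas as pd

# Hypothetical subset of USPS-style abbreviations used to conform addresses.
USPS_ABBREV = {"STREET": "ST", "AVENUE": "AVE", "BOULEVARD": "BLVD",
               "ROAD": "RD", "SUITE": "STE", "NORTH": "N", "SOUTH": "S"}

def conform(text: str) -> str:
    """Upper-case, strip punctuation, and apply USPS-style abbreviations."""
    tokens = "".join(c for c in str(text).upper() if c.isalnum() or c.isspace()).split()
    return " ".join(USPS_ABBREV.get(t, t) for t in tokens)

def dedupe(records: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Drop obvious duplicates (same conformed name, street, and city);
    flag address-only matches for eyes-on review."""
    df = records.copy()
    for col in ("name", "street", "city"):
        df[col + "_std"] = df[col].map(conform)
    unique = df.drop_duplicates(subset=["name_std", "street_std", "city_std"])
    # Records sharing a standardized address but differing in name go to manual review.
    review = unique[unique.duplicated(subset=["street_std", "city_std"], keep=False)]
    return unique, review

# Toy records from two source lists, for demonstration only.
frame = pd.DataFrame({
    "name":   ["Safe Harbor Center", "SAFE HARBOR CENTER", "Victim Aid Network"],
    "street": ["12 North Main Street", "12 N Main St", "12 N Main St"],
    "city":   ["Springfield", "Springfield", "Springfield"],
})
unique, review = dedupe(frame)
print(len(frame), "records ->", len(unique), "unique;", len(review), "flagged for review")
```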
Next, web-based canvassing efforts were conducted to identify VSPs at the state and local level that may not be captured in the lists received from NCVC, OVC, OVW, and JAG. RAND initially started canvassing efforts in the 11 largest states in the country based on population size – California, Florida, Georgia, Illinois, Michigan, New Jersey, New York, North Carolina, Ohio, Pennsylvania, and Texas – in order to assess the proportion of the VSP universe in these states that was captured through national lists (full report available in Attachment 2). RAND conducted an exhaustive search of state and local private organizations and public agencies that had compiled lists of VSPs within their area. These lists are often used for victim referrals or provided as part of a package of victim resources by the organizations that compile them. These lists were merged with the list of federal grantees and NCVC members, and the resulting list was de-duplicated using the same process described above. In total across the 11 states, 8,075 VSP entities were identified. After de-duplication, 35% of the resulting VSP sample frame in these states comprised only entities identified through the canvassing efforts, meaning these VSPs would have been missing from the VSP sampling frame had canvassing not been conducted. VSPs identified through NCVC, OVC, OVW, and JAG comprised 58% of this sample frame, and VSPs included in both state canvassing and federal/NCVC lists comprised 7% of the sample frame. These results corroborated the need for canvassing efforts in the remaining states. They also highlight the need to conduct a Census to validate the roster, as a large portion of VSPs that self-identified as victim service providers for NCVC membership were not identified as victim service providers by state and local entities.
In addition to the consolidation of existing VSP lists at the national, state, and local levels, BJS has ongoing efforts to validate the sampling frame by adding questions to other existing BJS surveys to identify criminal justice entities that have personnel or programs dedicated to victim assistance. BJS added questions to the Census of State and Local Law Enforcement Agencies assessing whether agencies had dedicated personnel serving victims or had participated in a victim service taskforce in the past year. All law enforcement agencies serving victims through dedicated staff (and therefore falling into the secondary VSP category) were added to the sampling frame (n ~ 2,700). BJS also plans to add questions to the Census of Prosecutors’ Offices to identify victim service activities. However, since data from that collection are not yet available, BJS contributed a list of all 2,300 prosecutors’ offices in the nation to the VSP sampling frame.
In total, about 31,000 unique VSPs were identified through these efforts. This is the first all-inclusive, comprehensive, national list of VSPs, and is the universe for this first 2016 NCVSP. The Census will identify and screen out entities that are no longer serving victims, resulting in the first national roster of active VSPs and a more precise VSP sampling frame for future routine Censuses and a more in-depth National Survey of Victim Service Providers.
Of note, the primary focus of the NCVSP is to better understand the characteristics of the universe of primary and secondary VSPs. Primary and secondary VSPs are easier to identify, quantify, and describe than incidental VSPs. Many primary and secondary VSPs are easily identifiable because they receive federal funding for victim service provision or are connected to federal agencies such as OVC and OVW in other ways (e.g., through conferences or trainings). In addition, because primary and secondary VSPs have dedicated resources for serving victims, they need to be publicly accessible to victims or connected to victims through some identifiable mechanism, such as referrals from other VSPs or law enforcement. Even VSPs that do not want to be geographically locatable (e.g., domestic violence shelters) or VSPs that only accept referred, substantiated victims tend to have a public profile (e.g., a public P.O. Box, phone number, or website) or are connected to other service providers. Thus, the project team is confident that the extensive efforts to identify primary and secondary VSPs for this census resulted in near-complete coverage of the universe of these VSPs.
It is known, however, that incidental providers that do not fit the definition or scope are included on this first national list of VSPs, because the list includes all self-identified VSP members of NCVC and all VSPs that are included on federal or state lists of VSPs. Incidental providers might be particularly likely to be publicly recognized for serving victims in geographical regions where there are few primary or secondary VSPs, leading victims to seek services from other types of entities. More information gathered from the Census about these incidental providers that are recognized as VSPs (e.g., as members of NCVC or on lists of VSPs) will allow BJS to screen them out or include them in analyses as appropriate for each research question.
2. Procedures for Collecting Data
Data collection for the 2016 NCVSP will involve a series of mailings and nonresponse follow-up activities, emphasizing questionnaire completion via a secure web-based reporting system. BJS will use a multi-mode approach in which three main modalities are available for respondents to complete the survey: a web survey, a telephone survey, and a paper-and-pencil (PAPI) hard-copy survey. Respondents will be directed to the web-based format as the primary mode of data collection, since web-based collection is the preferred means to increase response rates, expedite the data collection process, simplify data verification, and facilitate report preparation. In the field test of the NCVSP instrument, about 83% of respondents completed the survey online, another 15% completed the instrument over the phone, and 2% requested the paper version. This suggests that most VSPs are capable of and even prefer to complete the survey online, but that response rates can be increased by offering multiple modes as part of the nonresponse follow-up procedures.
The hard copy version of the survey is only provided to VSPs who request it. The option to complete the survey via phone will be offered to all VSPs with whom telephone contact is made, unless they indicate a preference for using one of the other modalities. Exhibit 1 depicts the data collection and nonresponse follow-up process (see materials in attachments 10a-10f).
Exhibit 1: Multi-mode data collection process with timeline.
All VSPs will be mailed a formal invitation letter from BJS and OVC, addressed to the director of the VSP, announcing the start of the Census data collection (see Attachment 10a). This letter will also note support from NCVC. The mailed letter (email will also be used in the relatively rare instances when an email address is available on the roster) will provide the link to the web survey, the agency’s unique login information, and a notification that the web survey is available for use. VSPs will be asked to complete the survey within one month. In order to maximize efficiency and decrease costs over the long term, the use of the web-based completion mode is emphasized by providing the web URL, without the paper instrument, in the initial contact. The initial contact will also state that a paper instrument will be provided, upon request through an 800 number or project e-mail address, to VSPs that prefer to complete the survey by mail. In order to assure a high response rate, respondents who fail to complete the web-based questionnaire without requesting a mail version will still be provided with subsequent opportunities to complete the paper version and the option to complete it over the telephone as needed.
VSPs that do not complete the survey but do not refuse to participate will receive a series of follow-up prompts increasing in intensity and cost over time (discussed in greater detail under “Methods to Maximize Response Rates and Address Nonresponse”). This approach follows Dillman’s recommendation of a tailored, hierarchical approach to data collection that begins with the least expensive contacting strategy and mode to complete the maximum number of interviews at minimal cost, and transitions to more expensive contacts and modes to improve completion rates.
As questionnaires are received they will be reviewed, and if needed, the VSP data provider will be contacted to clarify responses or provide missing information. Participants who started but did not complete the online survey will receive a series of email prompts connecting them to their unfinished survey, or be contacted by phone if email prompts do not lead to a completed survey.
The following are data quality assurance steps that the research team will employ during the data collection and processing period:
Data editing. The data collection agent will attempt to reconcile missing or erroneous data through a manual edit of each questionnaire. In collaboration with BJS, the data collection agent will develop a list of manual edits that can be completed by referring to other data provided by the respondent on the survey instrument. For example, if a screening question was left blank, but the follow-up questions were completed, a manual edit could be made to indicate the intended positive response to the screening question. Through this process, the data collection agent can quickly identify which hardcopy cases require follow up and indicate the items that need clarification or retrieval from the respondent.
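As a hypothetical illustration of this kind of rule-based edit, the sketch below back-fills a blank screener when its follow-up items were answered and logs the change. The item names and data structure are invented for the sketch and do not correspond to the actual instrument or edit specifications.

```python
# Minimal sketch of a rule-based manual edit: if the screener item is blank
# but any of its follow-up items were answered, infer a "Yes" on the screener
# and record the edit for documentation. Item names here are hypothetical.
def apply_screener_edit(case: dict, screener: str, followups: list[str], log: list) -> dict:
    edited = dict(case)
    blank = edited.get(screener) in (None, "")
    followups_answered = any(edited.get(f) not in (None, "") for f in followups)
    if blank and followups_answered:
        edited[screener] = "Yes"
        log.append((edited.get("case_id"), screener, "blank -> Yes (follow-ups answered)"))
    return edited

edit_log: list = []
case = {"case_id": "VSP-0001", "Q10_serves_children": None,
        "Q10a_count": 25, "Q10b_services": "counseling"}
case = apply_screener_edit(case, "Q10_serves_children", ["Q10a_count", "Q10b_services"], edit_log)
print(case["Q10_serves_children"], edit_log)
```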
Data retrieval. When it is determined that additional data retrieval is needed, a data collection specialist will contact the data provider for clarification. Throughout the data retrieval process, the data collection agent will document the questions needing retrieval (e.g. missing or inconsistent data elements), request clarification on the provided information, obtain values for missing data elements, and examine any other issues related to the respondent’s submission.
Data entry. Respondents completing the survey via the web instrument will enter their responses directly into the online instrument. For respondents returning the survey via hardcopy (mail or fax), data will be entered as responses are received and determined complete. Once the data have been entered into the database, they will be made available to BJS via an SFTP site. To confirm that editing rules are being followed, the data collection agent will review frequencies for the entered data after the first 10 percent of cases are keyed. The data collection agent will also review frequencies from web survey responses. Any issues will be investigated and resolved. Throughout the remainder of the data collection period, staff will conduct regular data frequency reviews to evaluate the quality and completeness of data captured in both the data entry and web modes.
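A minimal sketch of such a frequency review is shown below, assuming the keyed records can be loaded into a pandas DataFrame; the column names and data are illustrative only, not the actual instrument variables.

```python
import pandas as pd

def frequency_review(df: pd.DataFrame, first_fraction: float = 0.10) -> None:
    """Print value frequencies (including missing) for the first keyed fraction
    of cases so editing rules can be checked early in data entry."""
    head = df.head(max(1, int(len(df) * first_fraction)))
    for col in head.columns:
        print(f"--- {col} ---")
        print(head[col].value_counts(dropna=False).to_string())

# Illustrative keyed data; real column names would come from the instrument.
keyed = pd.DataFrame({"mode": ["web", "paper", "web", "phone"],
                      "A1_vsp_type": ["primary", "secondary", None, "primary"]})
frequency_review(keyed, first_fraction=1.0)
```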
3. Methods to Maximize Response Rates and Address Nonresponse
Methods to maximize response rates
Instrument development. The design of the 2016 NCVSP questionnaire and the plans for recruiting VSPs are consistent with current leading research on survey design and implementation.1 The instrument has been designed to be short (20 minutes), asks fairly broad questions about the VSP entity, and allows respondents to provide estimates if exact numbers are not available. All respondents are encouraged to complete the online survey which employs skip patterns to present only applicable questions to respondents. However, for those who complete the hard copy of the survey, three versions of the survey have been developed so that each type of VSP (primary, secondary, or incidental) will only receive the questions that are applicable to them (attachments 3a-3c). The differences between the three versions are minor and are intended to ensure that secondary and incidental VSPs respond to the questionnaire thinking only about the victim serving component of their organization. The multi-mode approach capitalizes on the strengths of individual modes while neutralizing their weaknesses with other approaches, making the reporting task easier for respondents by offering them alternatives (which has been associated with higher response rates).
The instrument includes several design elements intended to increase the ease of reading and understanding the questionnaire. First, related questions are grouped together in topical sections. In addition, the survey instrument begins with the most salient items. Questions and instructions are presented in a consistent manner on each page in order to allow respondents to comprehend question items more readily. Proper alignment and vertical spacing is also used to help respondents mentally categorize the information on the page and to aid in a neat, well-organized presentation.
Outreach and recruitment efforts. Our recruitment strategies have also been designed to maximize response rates. Because this will be a complete enumeration of all eligible entities on the roster, initial outreach efforts will include broad-scale activities to raise awareness of the survey and promote enthusiasm for its success. The project team also plans to utilize external stakeholders, including OVC and OVW, the expert panel members, PIC members, and VOCA administrators, to encourage and increase cooperation among VSPs. This is expected to be particularly helpful in legitimizing the proposed NCVSP, which is important because this is the first time the federal government is surveying all VSPs. Working through trusted intermediaries on the project expert panel and the PIC (which include a variety of national, regional, state, and local level organizations), project staff will:
Distribute short informational blurbs to be shared in organization newsletters. (One month prior to launch.)
Ask PIC members to post an NCVSP icon on their own websites with a link to the project website. (One month prior to launch.)
Ask PIC members to help advertise an informational webinar that will be held for all VSPs. (One month prior to launch.)
Ask PIC members to distribute through their networks a reminder blurb urging service providers to take part in the survey. (At launch.)
Each of these awareness materials will link to a website that provides more information about the project and the importance of participation. That website will include a list of the trusted victim advocacy leaders and other stakeholders who are involved in the project, which should give participants additional confidence in the survey project.
Nonresponse follow-up protocols. Once the data collection has launched, the approach to nonresponse follow-up is also consistent with best practices in survey research and entails the following steps (Phases 2–6 in Exhibit 1 above):
Phase 1: VSPs receive the formal invitation letter.
Phase 2: First reminder by postcard/email: The research team will send a simple friendly reminder post card about 2 weeks after the invitation to participate requesting completion of the survey for all remaining non-respondents. The post card is designed for US mail delivery but will also be sent by email when we have an email address for the VSP.
Phase 3: Second reminder by letter/email: Next, researchers will search online and verify contact information for the VSPs that do not respond within 1 week after receiving the postcard/email, and a reminder letter will be sent via US mail and email.
Phase 4: Phone prompt: One week after the reminder letter, VSPs that still have not responded will be contacted via telephone. While any VSP expressing a wish to do the survey over the phone will be able to do so, the goal is for this to be a prompt to encourage completion of a survey by self-administration (web preferred over paper). We will make multiple phone prompts to reach the VSP (up to 10). We will also check the phone/contact information online for those we are not able to reach after a few calls. Exhibit 2 outlines the possible outcomes of phone contacts and procedures for moving to phases 5 and 6.
Phase 5: A FedEx prompt will be sent to all VSPs who do not respond 2 weeks after the reminder letter. Sending a FedEx package as a prompt to the VSP is a well-regarded Dillman approach2 and communicates the importance of the survey and the urgency of our request. All remaining non-respondents will get an updated address check prior to this mailing.
Phase 6: The final phase is the ‘last chance’ opportunity for VSPs to complete the survey. This ‘last chance’ opportunity will occur about two weeks after the FedEx prompt. When we have a phone number, we will make this attempt by phone and try to complete the survey over the phone. Where we do not have a phone number, we will send a ‘last chance’ contact in the form of a letter by US mail or email. Such a final contact can often be successful in motivating participants who want to support the project but have simply been delayed in responding. We expect that this ‘last chance’ contact will lead to completes mostly by phone surveying, but some VSPs that we reach by phone may insist on completing the survey by web or by PAPI/hard copy. In addition, remaining non-responding VSPs without working phone numbers may complete the survey at the end by web or by PAPI/hard copy, or may decide to call NORC and complete it over the phone.
Exhibit 2: Outcomes of the Phase 4 phone prompt and routing of cases to Phases 5 and 6. Possible outcomes of the Phase 4 phone prompt include: a completed survey by phone; a hard refusal over the phone; a promise by the VSP to complete the survey by web, mail, or phone at a later date; a vague response as to willingness to complete the survey in any modality; reaching what appears to be the right number but being unable to get the right person on the phone; a respondent indicating this is the first time they have learned of the survey; and inability to contact the VSP due to the absence of a working number. After checking for a new number and completing Phase 5, cases not closed in Phase 5 receive the Phase 6 ‘last chance’ contact by US mail/email; cases for which other Phase 5 methods do not lead to closure return to a phone attempt in Phase 6. Outcomes of the Phase 6 last chance mail/email contact include a completed survey by web or PAPI, a hard refusal sent to NORC by email/mail, or no response; outcomes of the Phase 6 last chance phone contact include a completed survey by phone, a hard refusal over the phone, or no response.
In addition to follow-up contacts to non-respondents made by the data collection agent, the National Center for Victims of Crime may reach out to hard-to-reach respondents to encourage completion of the survey in a timely manner.
Based on the Census pilot test, which is discussed in greater detail below, we expect a response rate of at least 80% for the NCVSP. The response rate for the pilot test reached 87% overall, and the pilot test was conducted without the outreach efforts that will accompany the full Census. Once the outreach efforts are put into place, particularly with the most difficult types of VSPs from which to obtain responses, it is anticipated that response rates will be close to or higher than what they were in the pilot, despite the large number of entities being surveyed.
Addressing nonresponse
While the majority of respondents will participate after one of the aforementioned contact attempts, a small percentage of respondents will refuse to complete the survey for various reasons. There are two major types of nonresponse: “unit” nonresponse, when no data are collected for the VSP entity, and “item” nonresponse, when some questions are answered but others are left unanswered. Item nonresponse is often handled by multiple imputation – filling in the missing items based on what can be inferred about their unrecorded values from the items that were recorded, together with the stochastic structure that relates the different data elements.
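For illustration only, the sketch below shows one way multiple imputation of item nonresponse can be carried out; it is not the project's specified procedure. The data are simulated, and the use of scikit-learn's chained-equations-style imputer with posterior sampling is an assumption made for the example.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer

# Toy numeric item matrix with missing values; columns stand in for hypothetical
# survey items (e.g., staff count, victims served, budget category code).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[rng.random(X.shape) < 0.1] = np.nan  # ~10% item nonresponse

# Draw several completed data sets; analyses would be run on each and combined
# (Rubin's rules), which is what distinguishes multiple from single imputation.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    for m in range(5)
]
print(len(imputations), "completed data sets;", int(np.isnan(imputations[0]).sum()), "missing cells remain")
```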
A nonresponse bias analysis will be conducted to assess the extent to which nonresponders differ from responders on key characteristics. Unit nonresponse will then be handled by creating weights to compensate for the reduced frame size and to reduce some nonresponse biases that this type of nonresponse introduces. This approach is somewhat complicated because this is the first effort to validate the VSP sampling frame and little is known about the full universe of VSPs. Before considering how units that responded to the survey represent all the units on our national roster of VSPs, we will make extensive efforts to determine whether the entity/target respondent is not responding because the survey is not applicable to them (i.e., because they have stopped serving victims, become part of another agency in the sample, or the entity is defunct). These agencies will be removed from the national roster.
For most active VSPs, we are able to determine a few critical characteristics that can be used to create weights. We will know whether each VSP is federally funded, the state in which the VSP maintains an address, and whether the VSP is located in a rural or urban area. In addition, for a sizable portion of the agencies it is clear from their name, website, or other publicly available information whether they are a primary, secondary, or incidental VSP and what types of victims they tend to serve. Agencies for which this information is unknown will be grouped together into an “unknown” category. These variables (i.e., federal funding, state, urbanicity, type, and primary victim services) will be used to estimate nonresponse bias and weight to the full national roster. Another approach we will explore is Heckman’s (1979) two-step model,3 assessing whether it yields results similar to the weighted approach. The first step specifies a model of the attrition process through a multiple regression, which is captured in a predicted latent measure. This latent measure is then entered into any substantive outcome models as an independent variable to more fully specify the model.
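For illustration, a minimal sketch of a weighting-class nonresponse adjustment of the kind implied here is shown below. The cell definitions, variable names, and toy roster are assumptions for the sketch, not the final estimation plan; cells with no respondents would in practice be collapsed with neighboring cells.

```python
import pandas as pd

def nonresponse_weights(frame: pd.DataFrame, cells: list[str]) -> pd.DataFrame:
    """Within each adjustment cell (e.g., federal funding x urbanicity x VSP type),
    weight respondents by the inverse of the cell response rate so that they
    represent all eligible VSPs on the roster in that cell."""
    df = frame.copy()
    cell_rate = df.groupby(cells)["responded"].transform("mean")
    df["weight"] = 0.0
    df.loc[df["responded"], "weight"] = 1.0 / cell_rate[df["responded"]]
    return df

# Toy roster; 'responded' marks completed surveys among eligible (in-scope) VSPs.
roster = pd.DataFrame({
    "federal":   [1, 1, 1, 0, 0, 0],
    "urban":     [1, 1, 0, 1, 0, 0],
    "responded": [True, False, True, True, True, False],
})
print(nonresponse_weights(roster, ["federal", "urban"])[["federal", "urban", "responded", "weight"]])
```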
4. Test of Procedures or Methods
The NCVSP instrument and protocol have been cognitively tested with 15 VSPs and field tested with 725 VSPs (OMB number 1121-0339).
Cognitive testing
The cognitive testing, conducted in 2014–15, assessed the completeness of the information collected (i.e., were respondents able to furnish the requested information), the uniformity of understanding of the survey from one respondent to the next (e.g., did each respondent define a domestic violence victim in the same way), and respondents’ ability to provide case-level data in standard formats. The cognitive testing aided the team in refining the questionnaire to reduce respondent burden, improve readability, and improve understanding of terms. The information collected from the cognitive testing of the survey was incorporated into the instrument used in the pilot test.
Most participants’ cognitive interview feedback resulted in minor edits to question wording or tweaks to particular response items. However, there were six major changes that helped refine the Census instrument to be used in the field test. These changes ultimately reduced the overall completion time from 50 minutes to 20 minutes or less. The first change (1) shortened the list of response options for the question on the types of services provided, ensuring that the condensed services list still encompasses the bulk of services provided but is easier for providers to complete. The second change (2) concerned the questions regarding the types of crime experienced by victims receiving services. Respondents initially were unsure whether to report all crime types experienced by victims or only the presenting crime types for which victims initially sought services. While both pieces of information are potentially important, one of the main purposes of the full survey is to obtain a better picture of the victim services field, and since there are many variations in VSPs, a principal distinguishing feature is their victimization focus. To be able to compare answers by groups of providers with differing victim focuses, we edited the victim type question to ask about the crime types for which victims sought services.
The remaining changes focused on the organizational and administrative components of the VSPs and resulted in changes to the way the instrument asked questions on (3) staffing, (4) funding, and (5) record keeping. These were all simplified so that respondents do not need to spend time accessing their internal records to complete the survey, and response options now include a place to indicate when the information provided is an estimate. Lastly, cognitive interview participants reported some difficulty answering questions according to the reference period of the prior 12 months, as different organizations operate on different schedules. To avoid potentially high levels of missing item responses due to confusion or difficulty with the time period requirement, the last major change was (6) the addition of a question to clarify whether the organization operates/reports data on a calendar year or fiscal year, with a follow-up question asking for the start date of the fiscal year if that response is endorsed. A detailed discussion of the changes made to the survey instrument as a result of the cognitive testing is included as Attachment 5.
Field test
The revised instrument was then field tested with 725 VSPs from August 2015 through mid-January 2016. There are a number of challenges in conducting a census of about 26,000 active VSPs, and the field test was designed to investigate the key issues that might arise. First, there are many potential problems in contacting the VSP staff person who would be responsible for completing the survey. Many VSPs on the list have no contact information, out-of-date information, or only general information for the front office or a hotline. Field testing examined whether spending time and resources up front to obtain the name of the appropriate point of contact had an impact on cooperation rates or the resources required to obtain cooperation. The challenges associated with identifying suitable points of contact within victim service entities were also documented.
Another foreseeable challenge is that VSPs that receive federal funding might be more likely to respond to the survey than those who do not receive federal funding. Anecdotal evidence suggests that federally funded VSPs will be an easier group to collect data from and it is a group that can be identified readily and is more likely to have accurate contact information. In addition, they are familiar with submitting information to the government because they provide quarterly reports to maintain their funding. Field testing compared response rates and challenges in recruiting federally funded vs. non-federally funded VSPs.
Based on these two major considerations, the 725 VSPs were assigned to one of the four study conditions: (1) VSPs that are recipients of federal grants and received a pre-survey contact to identify a point of contact prior to being asked to complete the questionnaire; (2) VSPs that did not receive federal funds and received a pre-survey contact, (3) VSPs that are recipients of federal grants and did not receive a pre-survey contact, and (4) VSPs that do not receive federal funds and did not receive a pre-survey contact. Within each of the four conditions, the project team then stratified the sample based on categories of VSPs (e.g., criminal justice-based VSP), using keywords from information included in the roster to create six groups of different VSPs. The six groups created were: (1) prosecutors’ offices and other criminal justice system-based VSPs (like police, special advocates, etc.), (2) community-based shelters, (3) domestic violence or sexual assault programs, (4) mental and physical health-related programs, (5) tribal organizations or tribal-focused services, and a sixth other group including VSPs for which the type could not be identified.
Field test results. Of the 725 entities on the roster selected for the sample, 252 proved not to be VSPs (e.g., the agency stopped serving victims, became part of another agency in the sample, or the entity became defunct), leaving 473 active VSPs in the study. Screen-outs represented a little more than one-third of the original cases (252/725 = 35%). Of the 473 active VSPs, completed surveys were received from 409, for an 86.5% response rate. Of the 473 active VSPs, we also had 20 partially completed surveys and 44 VSPs that refused to do the survey (40 of the 44 were passive refusals and only 4 VSPs actively refused). Table 1 below displays how the 473 active VSPs were randomly assigned to one of the four study conditions for field testing, and the number of completed surveys obtained in each condition: (1) VSPs receiving a pre-contact prior to being asked to complete a survey that are recipients of federal grants, (2) VSPs receiving a pre-contact that do not receive federal grants, (3) VSPs not receiving a pre-contact that are recipients of federal grants, and (4) VSPs not receiving a pre-contact that do not receive federal grants.
Table 1: 473 active VSP cases randomly assigned to four conditions (409 completes obtained)
|             | Pre-contact                 | No Pre-contact              | Sub-Total                   |
| Federal     | 123 (108 completes) (87.8%) | 118 (105 completes) (89.0%) | 241 (213 completes) (88.4%) |
| Non-Federal | 111 (87 completes) (78.4%)  | 121 (109 completes) (90.1%) | 232 (196 completes) (84.5%) |
| Sub-Total   | 234 (195 completes) (83.3%) | 239 (214 completes) (89.5%) | 473 (409 completes) (86.5%) |
The experiment revealed no statistically significant difference between the pre-survey contact and non-pre-survey contact groups (X2 = 3.38, p = .066); the pre-survey contact group had an overall response rate of 83.3% and the non-pre-contact group had an overall response rate of 89.5%. Similarly, we did not observe a statistically significant difference in response rates between VSPs receiving federal funding and VSPs not receiving federal funding (X2 = 1.54, p = .22); VSPs receiving federal funding had only a slightly higher response rate (88.4%) than VSPs not receiving federal funding (84.5%).
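For illustration, the pre-contact comparison can be reproduced approximately from the Table 1 sub-totals with a standard 2×2 chi-square test. The sketch below assumes the reported value reflects Yates' continuity correction, which scipy applies by default to 2×2 tables.

```python
from scipy.stats import chi2_contingency

# Pre-contact vs. no pre-contact: completes and non-completes from Table 1 sub-totals.
table = [[195, 234 - 195],   # pre-contact: 195 completes of 234
         [214, 239 - 214]]   # no pre-contact: 214 completes of 239
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction is the default for 2x2
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # approximately 3.38 and .066, consistent with the text
```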
Initial assessments of nonresponse bias were based on the classification of VSPs by provider type. As seen in Table 2, we did see some statistically significant variation in completion rates by VSP type (X2 = 22.2, DF = 5, p = .001). Our best completion rate was for ‘Shelters, DV, or sexual assault programs’ at 96%, which exceeded our average completion rate of 87%. Our lowest rate was for tribal VSPs at 56%, but we targeted only a small number of tribal VSPs, and completing just two or three more would have brought that rate to around the 87% average. The other VSP types were closer to our average rate of 87%, with the health-based VSPs on the lower side of the average (at 78%) and prosecutors on the higher side (at 90%).
Table 2: Completion and refusal rate by VSP type
|                        | Completed |      | Not completed (partial completes and refusals) |      |
| VSP type               | N         | Rate | N                                              | Rate |
| 01: system-prosecutors | 79        | 90%  | 9                                              | 10%  |
| 02: system-other       | 46        | 85%  | 8                                              | 15%  |
| 03: shelters\dv\sa     | 105       | 96%  | 4                                              | 4%   |
| 04: health             | 14        | 78%  | 4                                              | 22%  |
| 05: tribal             | 5         | 56%  | 4                                              | 44%  |
| 06: other community    | 160       | 82%  | 35                                             | 18%  |
| Total                  | 409       | 87%  | 64                                             | 13%  |
The overall rate of completion varied significantly by VSP type in Table 2, but within VSP type categories separated by federal funding status (see Table 3) we did not see significant variation (X2 = 10.2, DF = 5, p = .069). For example, the completion rate for the ‘system-prosecutors’ category was 88.5% for VSPs without federal funding compared with 91.7% for those with federal funding. In most cases, the percentage differences between VSPs without federal funding and VSPs with federal funding are small.
Table 3: Completion rate by VSP type and federal funding status
|                        | VSPs without federal funding |       | VSPs with federal funding |       |
| VSP type               | N                            | Rate  | N                         | Rate  |
| 01: system-prosecutors | 46                           | 88.5% | 33                        | 91.7% |
| 02: system-other       | 18                           | 85.7% | 28                        | 84.8% |
| 03: shelters\dv\sa     | 43                           | 97.7% | 62                        | 95.4% |
| 04: health             | 3                            | 75.0% | 11                        | 78.6% |
| 05: tribal             | 1                            | 50.0% | 4                         | 57.1% |
| 06: other community    | 85                           | 78.0% | 75                        | 87.2% |
| Total                  | 196                          | 85.6% | 213                       | 88.4% |
For the purposes of this pilot test, we counted a survey as ‘completed’ if 100% of the items from the beginning of the survey through Section G (‘Services for Victims’) were completed; items A1 through G11 are considered crucial questions. Using that definition, we had 409 completed surveys (213 from federally funded VSPs and 196 from non-federally funded VSPs) out of 470 eligible VSP cases. Among survey responders, item nonresponse was not a problem.
While not counted in our completion rate, another 20 VSPs (11 federally funded and 9 non-federally funded) of the 473 active VSPs partially completed surveys. Of the critical items from A1 to G11 used to define a 100% completed case, the item with the highest level of missingness was a question (G11) about the types of victims (e.g., burglary or robbery victims) that sought services from the VSP. Twenty of the 429 VSP respondents that worked on the survey (4.7%) left this composite item blank. Looking at the specific sub-items G11A to G11V, the item with the greatest number of missing responses was G11L (an item on human trafficking for sex), with 14 missing cases; when the G11 question is totaled, 6 additional cases with missing sub-items bring the count to 20. For the non-critical items, H1 to K5, the item with the highest level of missingness was a question (K4) asking the respondent how concerned they are about the burden of grant reporting; 41 cases (9.6% of 429) left this item blank.
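A minimal sketch of how item-level missingness of this kind can be tabulated is shown below; the two item names are taken from the discussion above, but the data are purely illustrative.

```python
import pandas as pd

# Illustrative responses for two G11 sub-items; real data would have one row per
# responding VSP (429 in the pilot) and one column per sub-item G11A-G11V.
responses = pd.DataFrame({"G11A": [1, 0, None, 1], "G11L": [None, None, 1, 0]})
missing_counts = responses.isna().sum().sort_values(ascending=False)
missing_rates = (missing_counts / len(responses)).round(3)
print(pd.DataFrame({"missing_n": missing_counts, "missing_rate": missing_rates}))
```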
The pilot test revealed valuable information about the time and costs associated with administering the NCVSP. The pilot test was designed to provide an estimate of the maximum costs needed to obtain a high response rate: no outreach to the VSP field was made before conducting the pilot, and participants were intensively recruited over a 6-month period. Overall, the average number of contacts made before obtaining a completed survey was 15 for the 409 VSPs with completed surveys. There was no difference in the number of contacts made to VSPs with federal funding (14.8) compared with VSPs without federal funding (15.3) before obtaining a completed survey (F = 0.232, DF = 1, p = 0.63). Overall, the average number of contacts made before bringing a case to closure (i.e., obtaining a completed survey, having a participant refuse to do the survey, or determining a participant is not eligible) for the full 725 cases was 16.6. There was no difference in the number of contacts made to bring a case to closure between VSPs with federal funding (16.6 contacts) and VSPs without federal funding (16.6 contacts) (F = 0.002, DF = 1, p = 0.96). We observed statistically significant variation in the number of contacts made across the main case dispositions (F = 19.3, DF = 3, p < .001). Completed survey cases (n = 409) required 15 contacts on average, partially completed survey cases (n = 20) required 33.6 contacts on average, screen-out cases (n = 252) required 16.5 contacts on average, and refusal cases (n = 44) required 23.5 contacts on average. Overall, the data collection team made 12,035 contacts with the sample of 725 cases.
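The disposition comparison described above corresponds to a one-way ANOVA across disposition groups. The sketch below is illustrative only: the contact-count data are not reproduced here, so it simulates counts around the reported group means rather than using the field test records.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical contact counts per case by final disposition; real values would come
# from the field test call records (means reported in the text: completes ~15.0,
# partials ~33.6, screen-outs ~16.5, refusals ~23.5).
rng = np.random.default_rng(1)
completes = rng.poisson(15.0, size=409)
partials  = rng.poisson(33.6, size=20)
screenout = rng.poisson(16.5, size=252)
refusals  = rng.poisson(23.5, size=44)

f_stat, p_value = f_oneway(completes, partials, screenout, refusals)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
```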
These contacts translated into approximately 2.13 hours of data collector staff time. There was no difference in the amount of data collector staff time necessary to finalize a survey between VSPs with federal funding (1.81 hours) and VSPs without federal funding (2.19 hours) (F = 0.268, DF = 1, p = 0.61). When out-of-scope respondents are included in the estimate of burden hours required to classify a VSP as out-of-scope/no longer in existence, a responder, or a nonresponder (n = 725), this translates into 2.22 hours per federal roster name and 2.07 hours per non-federal roster name (2.14 hours overall), but this small difference was not statistically significant (F = 1.04, DF = 1, p = 0.31). Based on this intensive level of effort, the maximum cost for administering the survey for all cases (completes, refusals, and ineligibles) is estimated to be $77.40 per federal entity and $72.10 per non-federal entity.
Based on the results of the pilot, the research team has developed a number of strategies to reduce data collection costs for the NCVSP. We have streamlined the nonresponse follow-up process to reduce the number of contacts made overall. In addition, the pilot data have been analyzed to determine which groups responded at high rates and can therefore receive fewer and less expensive contacts (e.g., domestic violence shelters responded well and might be less likely to require phone prompts than more difficult-to-reach VSPs). The pilot also yielded information about how to better identify out-of-scope entities on the roster (e.g., out-of-scope respondents were more likely to come from the NCVC list than from other lists). Efforts will be made to identify out-of-scope VSPs before data collection begins, reducing costs associated with repeatedly contacting out-of-scope entities.
Led by NCVC, an outreach plan has been developed to inform the VSP field about the NCVSP, increasing awareness of and interest in the census before and during data collection. Outreach to the field will include presentations at conferences, webinars, postcards and flyers, and letters of support from NCVC and OVC. The Project Input Committee is also well-positioned and ready to share information about the NCVSP with a wide variety of VSPs. We anticipate that these outreach efforts will reduce the number of contacts needed to obtain a completed survey, because VSPs will be anticipating the survey before receiving the first letter recruiting them to participate. In addition, they might have a better understanding of why the survey is important and how it might be useful for them.
5. Individuals Consulted on Statistical Aspects and Collecting/Analyzing Data
The victimization unit of BJS takes responsibility for the overall design and management of the activities described in this submission, including data collection procedures, development of the questionnaires, and analysis of the data. BJS contacts include the following:
Lynn Langton, Ph.D.
Chief, Victimization Statistics Unit
Bureau of Justice Statistics
810 Seventh St., NW
Washington, DC 20531
(202) 353-3328
Barbara Oudekerk, Ph.D.
Statistician
Bureau of Justice Statistics
810 Seventh St., NW
Washington, DC 20531
(202) 616-3904
Michael G. Planty, Ph.D.
Deputy Director
Bureau of Justice Statistics
810 Seventh St., NW
Washington, DC 20531
(202) 514-9746
Jessica Stroop
Statistician
Bureau of Justice Statistics
810 Seventh St., NW
Washington, DC 20531
(202) 616-2605
Jeri Mulrow
Acting Director
Bureau of Justice Statistics
810 Seventh St., NW
Washington, DC 20531
(202) 514-9283
The cognitive testing, field piloting, and canvassing efforts have been completed under a cooperative agreement with RAND, NORC, and NCVC. Principal Investigators at each agency are:
Bruce Taylor, Ph.D.
NORC at the University of Chicago
4350 East West Highway, Room 733, Bethesda, MD 20814
Office: 301-634-9512
Fax: (301) 634-9301
Nelson Lim
RAND
1776 Main Street
Santa Monica, CA 90401-3208
Phone: (310) 393-0411 x7291
Susan Howley
Director of Public Policy
National Center for Victims of Crime
2000 M Street NW, Suite 480, Washington, DC 20036
Phone: 202-467-8700
1 Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method (3rd ed.). Hoboken, NJ: Wiley.
2 Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method (3rd ed.). Hoboken, NJ: Wiley.
3 Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47(1), 153–161.