2020 National Survey of Prosecutors

OMB: 1121-0149
  1. Universe and Respondent Selection



The 2020 National Survey of Prosecutors (NSP-20) will collect data from a sample of prosecutor offices that handle felony cases in courts of general jurisdiction. Prosecutors serve a key role in the criminal justice system, as they are charged with protecting the public, providing fair and impartial access to justice, ensuring due process, and administering justice for each case while ensuring efficient and effective use of limited resources. The approximately 2,400 prosecutors’ offices in the United States—district, county, prosecuting, and state’s or commonwealth’s attorneys—sit in the executive branch of state governments and handle felony cases in state courts of general jurisdiction.

Prosecutors may exercise broad discretion in deciding which cases to prosecute and how those cases are prosecuted. It is therefore important that agencies have access to updated, representative data regarding the staffing, budgets, caseloads, and other aspects of the nation’s prosecutor offices. The National Survey of Prosecutors is a unique data collection effort in that it provides a historical record of these resources. Government agencies, state prosecutor offices, and academic institutions rely on updated information to provide better recommendations for program planning and resource management.

This survey will build on previous surveys of the nation’s state prosecutor offices’ organizational characteristics. Emphasis will be placed on collecting information pertaining to staffing, salaries, budget, and caseload, as well as measures of emerging issues in prosecution and court administration and the use of diversion programs and problem-solving courts.


The most recent data available from this population is from the 2007 National Census of State Court Prosecutors. The main reference period for this project will be January 1, 2020 through December 31, 2020. The survey also includes questions about total staffing, budget, and caseload for the calendar year January 1, 2019 through December 31, 2019.



Sampling Frame

The frame contains a comprehensive list of all offices that have primary jurisdiction for prosecuting felony cases in state courts in the fifty states and the District of Columbia, resulting in 2,347 prosecutors’ offices. BJS is unsure of the availability of data in Puerto Rico and intends to assess the structure and data reporting capacity of the Puerto Rico prosecutors’ office to determine the availability and quality of its data. At this time, BJS does not intend to include Puerto Rico in full data collection and reporting for the NSP.


The frame was developed from a previous list of prosecutors’ offices and contact information that BJS had compiled to support the 2014 Census of Prosecutors. BJS then cross-matched this initial list with the member directory for the National District Attorneys Association (NDAA), which provides current contact information for over 5,500 individuals from approximately 1,500 prosecutors’ offices across the U.S. BJS identified any offices that (1) were not a match on the NDAA member list or (2) were found on the NDAA member list but not on the 2014 NSP Census frame. BJS then conducted Internet searches to confirm whether these agencies were in operation as of June 2020 and met the inclusionary criteria of having primary jurisdiction for prosecuting felony cases in state courts.


The frame was further validated by matching jurisdictions to county-level Census data using Federal Information Processing Standards (FIPS) codes and cross-matching to ensure that every county in the U.S. was covered by a prosecutors’ office on the frame. Most prosecutor districts correspond to one or more counties; however, there are exceptions. Alaska, Connecticut, Delaware, Rhode Island, and the District of Columbia each have a single office for the entire state or district. There are also a few situations where counties are split between offices or offices share coverage of a county (this occurs in Alabama, Kansas, and Texas).
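This cross-match can be expressed compactly in code. The sketch below is illustrative only, with hypothetical record structures and function names, and shows the kind of FIPS-based check used to confirm that every county is covered and to surface split or shared coverage:

```python
# A FIPS cross-match sketch; record structures and names are hypothetical.
def find_coverage_gaps(frame_offices, all_county_fips):
    """Return counties covered by no office on the frame and counties
    claimed by more than one office (split or shared coverage)."""
    covered = {}
    for office in frame_offices:
        for fips in office["county_fips"]:
            covered.setdefault(fips, []).append(office["name"])
    uncovered = set(all_county_fips) - covered.keys()
    shared = {f: names for f, names in covered.items() if len(names) > 1}
    return uncovered, shared

offices = [
    {"name": "District 1", "county_fips": {"01001", "01003"}},
    {"name": "District 2", "county_fips": {"01003", "01005"}},
]
gaps, shared = find_coverage_gaps(offices, {"01001", "01003", "01005", "01007"})
print(gaps)    # {'01007'}: a county with no office on the frame
print(shared)  # {'01003': ['District 1', 'District 2']}: shared coverage
```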


Sample Allocation and Sample Size

The sample design was developed by exploring several sample allocation options. To compare the allocations, the relative standard error (RSE) was computed for each estimate. The RSE is the ratio of the standard error to the estimate, a standardized measure of precision that is unaffected by the magnitude of the estimate. Key estimates, such as prosecutor office caseloads, staffing, budget, and a methamphetamine production measure, were used for the comparison. These measures were derived from the 2007 Census of Prosecutors and represent the wide range of metrics used to evaluate RSEs, including binary (yes/no) and numeric items as well as moderately to highly skewed variables.
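The RSE calculation itself is simple; the minimal sketch below (with illustrative values, not NSP figures) shows the computation used to compare allocations:

```python
def relative_standard_error(estimate, standard_error):
    """RSE: the standard error expressed as a fraction of the estimate."""
    if estimate == 0:
        raise ValueError("RSE is undefined for a zero-valued estimate")
    return standard_error / estimate

# Illustrative values only: an estimated mean caseload of 1,200 felony
# cases with a standard error of 90 yields an RSE of 7.5%.
print(f"RSE = {relative_standard_error(1200, 90):.1%}")
```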


The final design for the NSP-20 will be similar to the methods used in previous NSP collections, with strata based on the population of the prosecutors’ office district. The final verified list of 2,343 prosecutors’ offices will serve as the sampling frame, and the sample size for the NSP-20 will be 750 offices. Sample allocation by stratum is shown in Table 1. When sampling proportional to population size (PPS), more offices than exist were allocated to strata 1 and 2 (populations covered of 1,000,000 or more and 500,000-999,999) because these offices cover 53% of the population, so offices in these strata were sampled with certainty. Strata 3 (250,000-499,999) and 4 (100,000-249,999) retained the sample sizes obtained through the PPS allocation without adjustment. The excess sample from strata 1 and 2 was allocated to stratum 5 (population less than 100,000), which also helped reduce the RSEs in stratum 5 to an acceptable range.
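The allocation logic described above can be sketched as follows. This is a simplified illustration: strata allocated more offices than exist on the frame are taken with certainty, and the excess sample flows to stratum 5. Apart from the 1,728 offices in stratum 5, the per-stratum frame counts and aggregate populations below are assumed for illustration:

```python
def allocate_pps(frame_counts, populations, total_sample):
    """Allocate the sample proportional to stratum population. Strata
    allocated more offices than exist become certainty strata; the
    excess is folded into the last (smallest-population) stratum."""
    total_pop = sum(populations)
    alloc = [round(total_sample * p / total_pop) for p in populations]
    excess = 0
    for i, (n_alloc, n_frame) in enumerate(zip(alloc, frame_counts)):
        if n_alloc > n_frame:      # more sample than offices: take all
            excess += n_alloc - n_frame
            alloc[i] = n_frame
    alloc[-1] += excess            # reallocate excess to stratum 5
    return alloc

# Assumed frame counts and populations by stratum (largest to smallest
# population served); only the 1,728 in stratum 5 comes from the text.
frame = [45, 105, 160, 305, 1728]
pops = [90e6, 70e6, 55e6, 50e6, 60e6]
print(allocate_pps(frame, pops, total_sample=750))  # [45, 105, 127, 115, 358]
```

In a production allocation, the rounding would be controlled so that the stratum sizes always sum exactly to the target sample size.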


The sample size of the NSP-20 was chosen to generate better estimates of the projected 2,400 prosecutor offices nationwide than previous iterations. The NSP-05, the last sample survey conducted, had a sample size of 310 offices. The NSP-20 will collect data from 750 prosecutor offices because a larger sample is more reflective of the expected universe of prosecutor offices. RTI and BJS jointly developed the study’s sample design. The sample was allocated proportionally to the population in each stratum, which directs more of the sample to larger offices. Using PPS sampling, the survey was designed with five strata, of which the largest two (populations of 1,000,000 or more and 500,000-999,999) were taken as a census. The allocation across the three remaining strata (250,000-499,999; 100,000-249,999; and less than 100,000) was contingent on maintaining acceptable relative standard errors for key measures. For example, the allocation to the fifth stratum was adjusted to keep the RSE for felony jury trial verdicts under 18%.



Table 1. Sample Size, Expected Respondents, and Relative Standard Errors for the NSP-20 Sample Design


Table 2 reports the estimated RSEs for these metrics at sample sizes of 600, 750, and 900 offices. A sample of 750 offices represents a marked improvement in RSEs over a sample of 600 offices. A sample of 900 offices represents only a modest improvement over 750 across the measures, and would require more resources and a longer field period for data collection.


Table 2. Relative Standard Errors for Selected NSP-20 Measures at Sample Sizes of 600, 750, and 900 Offices



Focusing on the stratum with the most offices, the 1,728 offices serving fewer than 100,000 residents, the three comparison sample sizes further demonstrate the marked improvement in RSEs from increasing the sample from 600 to 750 offices and the much more modest improvement produced by further increasing the sample size to 900 offices (see Table 3).






Table 3. Relative Standard Errors for Offices Serving Fewer than 100,000 Residents for Selected NSP-20 Measures at Sample Sizes of 600, 750, and 900 Offices




  2. Procedures for Collection of Information



Data collection procedures. The NSP-20 is designed as a multi-modal data collection, with the online data tool serving as the primary mode and a hardcopy form or data submission in other formats (e.g., phone or email) offered as alternatives. Surveys and/or roster forms returned via hardcopy (mail) will be hand keyed by RTI project staff. Initial contact for data collection will be with the chief prosecutor of each prosecuting attorney’s office in the sample. This initial outreach will occur before data collection begins to notify prosecutors of the upcoming data collection. The national implementation of the NSP will commence in January 2021. Data collection, nonresponse follow-up, and data quality follow-up will last approximately seven months and will include a variety of mailings and telephone contacts. A brief description of each step in the data collection protocol is provided below.

  • Prenotification letter and data provider designation form. The letter (Attachment 7), on BJS letterhead, will be sent to all respondents and highlight the importance of NSP, encourage participation, and provide contact information that can be used to obtain additional information about NSP. A data provider designation form (Attachment 8) will be enclosed with the letter, which chief prosecutors can use to identify another member of their staff as NSP’s primary point of contact.


  • Study invitation. Two weeks after the prenotification letter, the invitation message will be mailed via USPS to all designated data providers. The invitation message (Attachment 9), on BJS letterhead, will highlight the importance of NSP and encourage participation. The message will also provide instructions for accessing and completing the web survey questionnaire (including the web address, username, and password), contact information for obtaining additional information about NSP, and the data collection end date. This letter will also be sent via email one week after the mailing (Attachment 10).


  • Mail and email reminders. One month after the invitation package is sent, the first reminder message will be emailed to the data providers at nonresponding offices (Attachment 13). One month later, a second reminder letter with a hard copy questionnaire enclosed will be mailed (Attachment 14). Two weeks later, a third reminder will be sent to the chief prosecutor via UPS (Attachment 17).


  • Nonresponse prompting. One month after the UPS mailing, we will initiate phone follow-up with nonrespondents (Attachment 15). Up to five call attempts will be made for each prosecutor’s office before the case receives a “maximum call attempts reached” code. An attempt is defined as a call where an interviewer talks to the data provider at the office, or leaves a message on the data provider’s answering machine or with a gatekeeper. If a contact attempt is successful, the respondent will be reminded of the purpose and importance of the survey and informed of the goal of receiving a completed survey from each prosecutor’s office. The telephone interviewer will reference the most recent communication in the introduction of the phone call and determine whether the office has received any of the communications sent. Those who did not receive any of the messages or the questionnaire packet will be assisted by the interviewer in getting the information they need to complete the survey. For those who received the communications or the questionnaire packet, the interviewer will determine why they have not yet completed the survey, offer assistance, and try to gain cooperation. Respondents who agree to complete the full survey will be asked to submit the survey online but will be sent another hard copy version of the survey if requested. Those who are hesitant will be asked to consider providing responses over the phone. The interviewers will be prepared to collect responses during the phone call or to schedule an interview at a more convenient time.


  • Critical item interviewing. One month after nonresponse prompting begins, RTI staff will begin phone outreach to retrieve critical item data from nonresponders (Attachment 16). A small list of critical items from the questionnaire will be identified by BJS before critical item interviewing begins. Up to five call attempts will be made for each prosecutor’s office before the case receives a “maximum call attempts reached” code. If a contact attempt is successful, the respondent will be reminded of the purpose and importance of the survey. Interviewers will acknowledge the nonresponder’s limited time and resources and will request their cooperation in obtaining a few key items. The interviewers will be prepared to collect responses during the phone call or to schedule an interview at a more convenient time.


  • Closeout letter. Six weeks after the start of telephone nonresponse follow-up, we will mail an end-of-study letter to nonresponding prosecutor’s offices. The letter (Attachment 18), on BJS letterhead, will notify nonrespondents that the study is coming to an end and that their response is needed within two weeks. Data collection will continue for approximately three more weeks to allow for receipt of any remaining questionnaires. This letter will again provide instructions for accessing and completing the survey and roster form (via web or mail) and contact information for obtaining additional information about NSP.


  • Respondent thank you correspondence. Within two weeks of the completion of data quality follow-up, a thank you letter (Attachment 19) will be mailed to respondents.







The following table summarizes the data collection timeline (week of the field period, stage, and the attachment containing each contact’s materials):

Week   Stage                                                               Attachment Number
1      Prenotification letter and data provider designation form (mail)    7
3      Study invitation (USPS)                                             9
4      Study invitation (email)                                            10
5      Reminder #1 (email)                                                 13
8      Reminder #2 (USPS)                                                  14
10     Reminder #3 (UPS)                                                   17
14     Nonresponse prompting (phone)                                       15
18     Critical item interviewing (phone)                                  16
20     Closeout (mail)                                                     18



Data Entry. Respondents completing the survey via the web instrument will enter their responses directly into the online instrument. For those respondents returning their survey and/or roster form via hardcopy (mail), the survey will be hand keyed by RTI project staff. RTI will perform a quality control check on randomly selected surveys to ensure all data is keyed correctly. For respondents completing the survey over the phone, project staff will enter data directly into the respondent’s instance of the web survey.


Data Editing. RTI will attempt to reconcile missing or erroneous data through automated and manual edits of each questionnaire within two weeks of completion. In collaboration with BJS, RTI will develop a set of edits that will use other data provided by the respondent on the survey instrument to confirm acceptable responses or identify possible errors due to missing or inconsistent data elements. For example, if a question on the total number of attorneys on staff was left blank, but the respondent provided a count of attorneys broken down by race in a separate question, an error would be flagged.
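A minimal sketch of one such automated edit, using hypothetical field names rather than the actual instrument’s variable names, is shown below:

```python
# Flag missing or inconsistent attorney counts; field names hypothetical.
def check_attorney_counts(record):
    flags = []
    total = record.get("total_attorneys")
    by_race = [v for k, v in record.items()
               if k.startswith("attorneys_race_") and v is not None]
    if total is None and by_race:
        flags.append("Total attorneys missing but race breakdown provided")
    elif total is not None and by_race and sum(by_race) != total:
        flags.append(f"Race breakdown sums to {sum(by_race)}, total is {total}")
    return flags

record = {"total_attorneys": None,
          "attorneys_race_white": 12, "attorneys_race_black": 5}
print(check_attorney_counts(record))
# ['Total attorneys missing but race breakdown provided']
```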


Data Retrieval. When it is determined that additional data retrieval is needed, a member of the data collection team will contact the respondent for clarification (email template, Attachment 20; phone script, Attachment 21). Throughout the data retrieval process, RTI will document the questions needing retrieval (e.g., missing or inconsistent data elements), request clarification on the provided information, obtain values for missing data elements, and examine any other issues related to the respondent’s submission.


Data Quality Review. RTI staff will conduct regular data quality reviews to evaluate the quality and completeness of data captured in both the web and paper copy modes. To confirm that editing rules are being followed, RTI will review frequencies for the entered data within one week of submission. Any issues will be investigated and resolved within two weeks.


  3. Methods to Maximize Response Rates


As described in the previous section, BJS and RTI will undertake various activities to ensure that high response rates are achieved for NSP-20.

To this end, the survey instrument was reviewed to ensure the collection of the most pertinent information, removing any unnecessary questions to reduce burden. The questionnaires were also reviewed by BJS and RTI staff for ease of use, flow, and compliance with questionnaire design best practices to ensure ease of administration. Cognitive interviews were also conducted. More details are included in B.4, Testing of Procedures.

Additionally, the web-based instrument will be supported by several online help functions to maximize response rates. The web survey interfaces are user-friendly, which encourages response and promotes more accurate answers. Because online submission is such an important response method, close attention will be paid to the formatting of the web survey instrument. The online application will be flexible so it can adapt to multiple browser types (e.g., Internet Explorer and Google Chrome) and screen sizes. Other features of the web instrument will include the following:

  • Respondents’ answers will be saved automatically, and they will have the option to leave the survey partway through and return later to finish.

  • The online instrument will be programmed with data consistency checks and automatic prompts to ensure inter-item consistency and reduce the likelihood of “don’t know” and out-of-range responses, thereby reducing the need for follow-up with the respondent after survey submission.

  • The online instrument will also have a version of the survey that respondents can print out and mail back.

  • Hard copies of the questionnaire will also be sent to nonrespondents several weeks into the survey period.

  • Data providers will also have the option to complete their survey over the phone with a telephone interviewer.


At all stages of the survey, a Help Desk will be available to provide both substantive and technical assistance. BJS will supply the Help Desk with answers to frequently asked questions and guidance on additional questions that may arise.

The multi-stage survey administration and follow-up procedures have been incorporated into BJS’s response plans to obtain higher response rates and to ensure unbiased estimates. Ensuring adequate response (not just unit/agency response rates, but also item responses) begins with introducing data providers to NSP. This will be accomplished through the NSP-20 marketing and contact strategy. Resources available to help respondents complete the survey (e.g., telephone- or e-mail-based Help Desk support) will be described in those communications.

Nonresponse Adjustments

With any survey, some selected subjects typically will not respond to the survey request (i.e., unit nonresponse) and some will not respond to particular questions (i.e., item nonresponse), despite best efforts to collect all the data. Weighting based on frame data will be used to adjust for unit nonresponse in the NSP-20. To determine which factors to use in the office nonresponse weight adjustments, a procedure available in RTI’s SUDAAN software based on the Generalized Exponential Model (GEM) will be used to model response propensity from sampling frame information (e.g., agency characteristics such as geographic region and size of population served) within sampling strata (Folsom & Singh, 2000). Ideally, only variables highly correlated with the outcomes of interest will be included in the model in order to reduce the potential for bias.
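The production adjustment will use SUDAAN’s GEM procedure; the sketch below substitutes an ordinary logistic response-propensity model fit within each sampling stratum to illustrate the underlying idea (a simplification, not GEM calibration). All variable names are hypothetical:

```python
# Propensity-based nonresponse weight adjustment (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def adjust_weights(frame_covariates, base_weights, responded, stratum):
    """Divide each respondent's base weight by its estimated response
    propensity, modeled from frame covariates within each stratum;
    nonrespondents receive a weight of zero."""
    adjusted = np.zeros_like(base_weights)
    for s in np.unique(stratum):
        idx = stratum == s
        # A stratum with no nonrespondents would need no adjustment.
        model = LogisticRegression().fit(frame_covariates[idx], responded[idx])
        propensity = model.predict_proba(frame_covariates[idx])[:, 1]
        adjusted[idx] = np.where(responded[idx],
                                 base_weights[idx] / propensity, 0.0)
    return adjusted
```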

If response rates fall below 80%, BJS will conduct a nonresponse bias analysis to ensure that the offices that do not participate in the study are not fundamentally different from those that do. This analysis will compare known characteristics of respondents and nonrespondents that are likely related to nonresponse bias. For each of these characteristics, Cohen’s effect size will be calculated. Cohen’s effect size ranges from 0 to 1 and measures the distributional differences between respondents and the total population. If any characteristic has an effect size that falls into the “medium” or “high” category, as defined by Cohen, then there is a potential for bias in the estimates. Each such characteristic will be included in a nonresponse model to adjust the weights and minimize the potential for bias in the estimates.
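One common formulation of Cohen’s effect size for comparing the distribution of a categorical characteristic between respondents and the full sample is Cohen’s w; the sketch below uses assumed, illustrative proportions and Cohen’s conventional thresholds (roughly 0.1 small, 0.3 medium, 0.5 large):

```python
import math

def cohens_w(respondent_props, population_props):
    """Cohen's w for the difference between the respondent distribution
    and the full-population distribution across categories."""
    return math.sqrt(sum((p_r - p_0) ** 2 / p_0
                         for p_r, p_0 in zip(respondent_props, population_props)))

# Assumed region distributions, for illustration only.
w = cohens_w([0.30, 0.25, 0.25, 0.20],   # respondents by region
             [0.35, 0.22, 0.23, 0.20])   # all sampled offices by region
print(f"w = {w:.2f}")  # w = 0.11: below the 'medium' threshold
```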



  4. Testing of Procedures


The development of the NSP-20 instrument was based on prior measures collected as part of the NSP and on collaboration with the field. The process included convening a technical expert panel, comprised of prosecutors from agencies across the United States, to review the domains of interest and provide feedback on the questionnaire content. Once the draft questionnaire was approved by BJS, a series of cognitive interviews was conducted with participants recruited from district attorney offices across the United States. The cognitive interviews allowed us to assess comprehension of the questions and response options and the burden of responding.


The NSP-20 project used two cognitive interviewing approaches to probe question understanding and responding: think-aloud techniques and verbal probing. The think-aloud technique asks the participant to provide a continuous verbal narrative while interpreting the question, recalling the information requested, considering how to respond, and matching his or her response to the response options provided. With verbal probing, moderators ask participants targeted questions, called probes, about their experiences completing the survey. For this study, probes were administered concurrently (i.e., immediately after the participant answered an item). Both scripted probes (developed in advance based on the findings from the expert review) and spontaneous probes (identified during the interview to further explore participants’ responses) were used to understand participants’ comprehension of the questions, answer retrieval, decision processes, and response processes.


Cognitive interviewees were recruited to represent a diverse set of agencies for instrument testing. RTI drew a convenience sample of 250 counties from the 2014 NSP frame, stratified across geographic areas (South, West, Midwest, and Northeast) and jurisdiction sizes: large (population of 810,000 or more), medium-large (250,000-809,999), medium-small (100,000-249,999), and small (under 100,000). Cognitive interviews were completed with 24 of the 25 agencies that agreed to participate, representing all regions of the U.S. and all population-served strata noted above, as well as two offices that were not members of the NDAA and three offices from states that have appointed, as opposed to elected, chief prosecuting attorneys. RTI stopped pursuing the 25th agency after several attempts to contact staff went unanswered.


RTI operated under a generic OMB approval to conduct the cognitive interviews.


Cognitive Test Methodology


The draft paper survey instrument was converted into a fillable .pdf file and emailed to participants in advance of the interview. Participants were asked to complete the .pdf version of the survey and return it to RTI before the interview. Cognitive interviewers then reviewed the completed survey and noted item nonresponse, responses that did not calculate according to the instructions, and any other issues found.


Each cognitive interview was conducted by one of three project staff experienced and trained in cognitive interviewing methods. Interviewers provided participants with a short description of the study and described the goals of cognitive testing. Interviewers then asked about the following topics:

1. Issues or problems in understanding individual questions, including sentence structure, individual word comprehension, and answer choices.

2. Instructions or transitional statements used in the questionnaire.

3. Spontaneous probes for issues that arose during the interview process.


During the cognitive interview, the interviewers reviewed these discrepancies along with the scripted probes outlined in the cognitive protocol (see Attachment 22).


Debriefing


RTI scheduled interviews with each of the cognitive participants. Upon receipt of a completed questionnaire, RTI reviewed the responses and noted inconsistencies, item nonresponse, and other problems to be discussed with the participant during the interview. This interview included general questions about respondent burden, the readability of the survey, and the availability of information. Scripted and spontaneous probes were administered by the interviewers during the cognitive interview.



To minimize participant burden, the cognitive interviews were designed to last no more than 60 minutes. Following the completion of the cognitive interviews, the cognitive testing team analyzed the notes, summarized key findings, and provided recommendations to BJS.




Cognitive Test Results


RTI received completed surveys from 24 of 25 offices. No office noted any major difficulties with the questionnaire. Several respondents provided feedback to make some of the items clearer and more concise. The average estimated time to collect the necessary information and complete the survey was 60 minutes. The actual time for each office will depend on the size of the office and the resources available to locate and provide the requested information.


Overall, the survey worked well for most participants. Some concepts were particularly hard for participants, such as determining whether they should include felony cases or misdemeanor cases when providing values. In addition, the definitions of diversion and problem-solving courts provided on the instrument were not clear to many interviewees.

Based on respondents’ feedback from the cognitive interviews, several items were modified to make them clearer and more concise. Also, for questions requesting counts, a “do not track” checkbox was added so that respondents can indicate when they cannot provide specific counts for an item.



Several participants noted that they had passed entire sections of the survey, particularly the budget-related questions, off to a delegated individual. To facilitate the delegation of questions in the main data collection, RTI will provide a downloadable .pdf version that can be used as a worksheet prior to entry into the web survey. This will allow respondents to collect information in advance of completing the survey and encourage accurate responses rather than estimates.

The .pdf version of the survey sent to cognitive interviewees auto-filled some blank numeric answers with “0,” because these fields were used to calculate totals from sub-items. In cases where the participant did not fill in any of the fields and the “0” remained, it was difficult to determine whether the “0” response was accurate or the result of item nonresponse. This will not be an issue in the Hatteras-coded final web survey or in the final paper version of the survey instrument.



  5. Contacts for Statistical Aspects and Data Collection


The Prosecution and Adjudication Statistics Unit staff at the Bureau of Justice Statistics, along with staff from RTI International, take responsibility for the overall design and management of the NSP-20 data collection, including the development of the questionnaire, the sampling plan, and the analysis and publication of the data.


BJS:


George E. Browne, Statistician

Prosecution and Adjudication Statistics Unit

U.S. Department of Justice

Bureau of Justice Statistics

810 Seventh Street, N.W.

Washington, D.C. 20531

(202) 307-0765


Kevin M. Scott, Acting Unit Chief

Prosecution and Adjudication Statistics Unit

U.S. Department of Justice

Bureau of Justice Statistics

810 Seventh Street, N.W.

Washington, D.C. 20531

(202) 307-0765


RTI International:


Duren Banks

NSP Project Director

RTI International

3040 E. Cornwallis Dr.

Research Triangle Park, NC 27709

(919) 541-6000


Persons consulted on statistical methodology:


Stephanie Zimmer

RTI International

3040 E. Cornwallis Dr.

Research Triangle Park, NC 27709

(919) 541-6000


Caroline Scruggs

RTI International

3040 E. Cornwallis Dr.

Research Triangle Park, NC 27709

(919) 541-6000



Attachments:

Attachment 1. 34 USC 10132

Attachment 2. Cognitive interview report

Attachment 3. Survey instrument

Attachment 4. 60-day notice

Attachment 5. 30-day notice

Attachment 6. Public Comments and BJS Response to NSP 60 Day FR Notice

Attachment 7. Prenotification Letter

Attachment 8. POC Designation Form

Attachment 9. Web Invitation Letter (USPS)

Attachment 10. Web Invitation Letter (Email)

Attachment 11. NSP NDAA Endorsement Letter

Attachment 12. Data Collection Overview - Flyer

Attachment 13. First Reminder Email

Attachment 14. Second Reminder Email

Attachment 15. Nonrespondent Prompting Call Script

Attachment 16. Nonrespondent Critical Item Retrieval Call Script

Attachment 17. Third Reminder UPS

Attachment 18. Closeout Letter

Attachment 19. Thank You Letter

Attachment 20. Data Quality Follow-up Email Template

Attachment 21. Data Quality Follow-up Call Script

Attachment 22. Cognitive Interview Protocol








References

Folsom, R. E., & Singh, A. C. (2000). The generalized exponential model for sampling weight calibration for extreme values, nonresponse, and poststratification. In Proceedings of the American Statistical Association, Survey Research Methods Section, 598-603.


