OMB: 1121-0367


SUPPORTING STATEMENT – Part B

Collection of Information Employing Statistical Methods


2019 Survey of Law Enforcement Personnel in Schools (SLEPS)


  1. Universe and Respondent Selection

The Survey of Law Enforcement Personnel in Schools (SLEPS) is designed to measure both the characteristics of Law Enforcement Agencies (LEAs) employing School Resource Officers (SROs) and those of the SROs themselves. The target LEA population is all local and county/regional police departments, sheriffs’ offices, and K-12 school-based agencies with one or more full-time sworn SROs. The target officer population is sworn SROs working for those agencies.


Sampling Frame


The SLEPS sample design is based on the 2018 Census of State and Local Law Enforcement Agencies (CSLLEA; OMB 1121-0346) frame, as it represents the most recent complete list of LEAs in the U.S. available to BJS. This frame was used to estimate the universe of agencies eligible for SLEPS while data collection and nonresponse follow-up for the 2018 CSLLEA were underway. The completed 2018 CSLLEA will serve as the final LEA sampling frame for SLEPS, with a sample drawn based on the design described here.


The 2018 CSLLEA frame, which covers all publicly funded LEAs with the equivalent of one or more full-time sworn officers, was subset to the agency types of interest (local police, sheriffs’ offices, and school-based agencies). BJS then attached SRO counts to those agencies using, in order of priority, the 2016 Law Enforcement Management and Administrative Statistics (LEMAS; OMB 1121-0240) survey, the 2015 SLEPS verification sample, and the 2014 and 2008 CSLLEAs. The 2015 SLEPS verification sample consisted of approximately 4,000 agencies contacted by the SLEPS project team to confirm whether the agency employed SROs and, if so, how many. These data sources provided SRO counts for the following percentages of agencies: LEMAS, 16.9%; SLEPS verification sample, 9.3%; 2014 CSLLEA, 61.2%; and 2008 CSLLEA, 8.6%. The SRO count for the remaining 4.0% of agencies was imputed using a hot-deck approach with imputation cells defined by agency type and with intra-cell sorting by the number of full-time sworn officers.
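
To illustrate the imputation step, the sketch below implements a sequential hot-deck of the kind described: records are grouped into cells by agency type, sorted within cells by the number of full-time sworn officers, and each missing SRO count borrows the value of the nearest donor in the sorted order. The field names and the nearest-donor rule are illustrative assumptions, not the production specification.

```python
from collections import defaultdict

def hot_deck_impute(agencies):
    """Sequential hot-deck imputation of missing 'sro_count' values.

    Imputation cells are defined by agency type; within each cell,
    records are sorted by full-time sworn officers so that recipients
    borrow from donors of similar agency size. (Illustrative sketch;
    field names and the nearest-donor rule are assumptions.)
    """
    cells = defaultdict(list)
    for a in agencies:
        cells[a["agency_type"]].append(a)

    for cell in cells.values():
        cell.sort(key=lambda a: a["ft_sworn_officers"])
        # Forward pass: carry the most recent donor value down the sort order.
        donor = None
        for a in cell:
            if a["sro_count"] is not None:
                donor = a["sro_count"]
            elif donor is not None:
                a["sro_count"] = donor
        # Backward pass: fill any records that precede the first donor.
        donor = None
        for a in reversed(cell):
            if a["sro_count"] is not None:
                donor = a["sro_count"]
            else:
                a["sro_count"] = donor
    return agencies
```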


SLEPS focuses on local police (municipal and county/regional police), school-based police, and sheriffs’ offices employing one or more full-time SROs. Based on the data sources noted above, BJS estimates that 5,950 LEAs meet that criterion and that, collectively, these 5,950 LEAs employ 17,715 SROs. The final LEA universe will be determined using the completed 2018 CSLLEA, including its collected full-time SRO count.


The SLEPS survey collects data at both the agency and officer levels. The 2018 CSLLEA results will provide the agency frame and the expected SRO employment at sampled LEAs. The SRO sampling frame consists of officers from responding LEAs that provide SRO rosters during the SLEPS LEA survey collection period. The officer-level portion of the overall sample design is based on the expected size and composition of that frame, given informed assumptions about LEA response and rostering rates and estimated SRO employment figures.


SLEPS Sampling Design and Response Rates


Since SLEPS will be used to make estimates of both LEA and SRO characteristics, it utilizes a two-stage LEA and SRO sample design. This design aims to equalize the precision of first- and second-stage estimates across the agency type and size strata that represent analysis domains of interest, while also balancing precision between the LEA and SRO stages.


The universe of SLEPS-eligible agencies is stratified at three levels representing groups of substantive interest for estimates. The first level separates School-Based and Non-School-Based agencies. Within the Non-School-Based stratum, agencies are substratified by agency type, with separate strata for sheriffs’ offices and local police departments (municipal and county/regional). Police and sheriffs’ office strata are further stratified by agency size as measured by the count of full-time SROs they employ. On the low end of the agency size range, agencies with only one full-time SRO are expected to be substantively distinct from larger agencies and are isolated into their own strata to ensure efficient samples of LEAs and SROs are allocated to them. On the high end, agencies with more than 24 SROs represent the largest agencies, which are expected to be substantively self-representing and which require separately controlled second-stage sampling rates to manage the design effects resulting from cluster correlation. Intermediate size strata of 2-4, 5-9, and 10-24 SROs provide substantively interesting analysis classes of small, medium, and large agencies. Table 1 provides the SLEPS LEA universe distribution among these strata. The total volume and allocation of LEA sample units to the strata presented in Table 1 was determined with the goal of providing the best balance of estimate precision across strata and survey stages.
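
As an illustration of this stratification logic, a minimal sketch using the size breaks named above (1, 2-4, 5-9, 10-24, 25+); the function and field names are hypothetical:

```python
def assign_stratum(agency_type: str, sro_count: int) -> str:
    """Assign an LEA to a SLEPS design stratum (illustrative sketch).

    School-based agencies form a single stratum; local police and
    sheriffs' offices are further stratified by full-time SRO count
    using the breaks described in the text: 1, 2-4, 5-9, 10-24, 25+.
    """
    if agency_type == "school-based":
        return "School-Based"
    if sro_count == 1:
        size = "1"
    elif sro_count <= 4:
        size = "2-4"
    elif sro_count <= 9:
        size = "5-9"
    elif sro_count <= 24:
        size = "10-24"
    else:
        size = "25+"
    return f"{agency_type}, {size} SROs"  # e.g., "sheriff, 5-9 SROs"
```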


Table 1. SLEPS LEA Universe Distribution among Design Strata




BJS and RTI conducted a pre-test with a sample of 250 LEAs and 475 SROs from November 2017 through May 2018 to evaluate the full range of the planned data collection protocol. The report on pre-test findings and recommendations for the full data collection is Attachment 9. Using response rates observed in the 2017 SLEPS pre-test, effective sample sizes and precision estimates1 were assessed at the full stratification level (i.e., agency type by agency size) and at the marginal stratification levels (i.e., agency type only or agency size only). The following assumptions were used for the LEA stage of the SLEPS sample design:


  1. The overall expected LEA response rate is 82%. This is based on the pre-test, where the response rate was 77% overall but 82% among agencies that employ SROs.

  2. The overall expected LEA eligibility rate is 99%. As the 2018 CSLLEA data will be recently collected, BJS expects low ineligibility due to frame error.

  3. The expected within-stratum unequal weighting effect (UWE) is 1.1. A 10% increase in unequal weighting effects is expected as a result of post-collection LEA unit nonresponse weighting adjustments.


Using these assumptions, a starting sample of 1,982 agencies has been allocated as shown in Table 2. This sample size provides good precision overall and balances precision across substantive domains of interest to the extent possible. The starting sample is expected to yield 1,609 completed LEA surveys.
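
The expected yield follows directly from assumptions 1 and 2; a minimal check of the arithmetic:

```python
# Minimal check of the expected LEA yield under assumptions 1 and 2.
n_sampled = 1_982
response_rate = 0.82     # assumption 1
eligibility_rate = 0.99  # assumption 2

expected_completes = n_sampled * response_rate * eligibility_rate
print(round(expected_completes))  # -> 1609
```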


Table 2. SLEPS LEA Sample Allocation, Assumptions, and Estimated Precision at Full Stratification



Table 2 shows the LEA-stage sampling rates along with the expected precision levels resulting from sampling and nonresponse. There are five certainty strata: the four Non-School-Based police and sheriffs’ office strata comprising agencies with 10 or more SROs, and the School-Based stratum. Since the sampling rates in these strata are 100%, the precision outcomes are dictated by nonresponse, eligibility, and design effect. While BJS does not expect to achieve high precision for the largest agencies due to nonresponse, it is useful to stratify them separately so that second-stage sampling rates can be more precisely controlled. Given the SLEPS substantive focus on officers in schools, school-based agencies are sampled at 100%. Single-SRO agencies are oversampled in the first stage to yield adequate officer samples in the second stage. Tables 3-6 provide LEA sampling rates and expected precision across marginal strata.


Table 3. SLEPS LEA Sample Allocation, Assumptions, and Estimated Precision for School-Based vs. Non-School-Based LEAs



Table 4. SLEPS LEA Sample Allocation, Assumptions, and Estimated Precision for Marginal Agency Type Strata



Table 5. SLEPS LEA Sample Allocation, Assumptions, and Estimated Precision for Marginal Agency Size Strata



Table 6. SLEPS LEA Sample Allocation, Assumptions, and Estimated Precision for Marginal Agency Size Strata – 10-24 and 25+ Strata Combined



Tables 3 through 6 show that precision is relatively uniform across combined strata based on agency type only or agency size only. Precision for agencies with more than 24 SROs is not high, but this is a result of expected response rates and cannot be improved through increased sampling, because those agencies are already sampled with certainty. Table 6 shows that when agencies with more than 9 SROs are combined, precision is comparable to that of other groups.


In the second stage of the SLEPS survey design, rostered SROs from responding LEAs constitute the sampling frame. This effective frame is smaller than the SRO population of interest; to ensure that the overall population is represented, adjusted LEA design-based weights accounting for nonresponse, ineligibility, and non-rostering will form the first component of the final SRO weight. Based on observations from the 2017 SLEPS pre-test, the following assumptions have been used to develop the SRO-stage sample allocation:


  1. The expected overall LEA rostering rate is 85%.

  2. The expected overall SRO response rate is 78%.

  3. The expected overall SRO eligibility rate is 98%. Some rostered SROs will be ineligible due to transfers or other changes in SRO employment status once the SRO survey is fielded, but BJS plans to implement a two-phase SRO data collection protocol (described below) that is expected to minimize this type of ineligibility.

  4. The expected within-stratum unequal weighting effect (UWE) is 1.1. A 10% increase in unequal weighting effects is expected due to unit nonresponse weight adjustments.

  5. The expected intraclass correlation (ICC) is 0.15. Intraclass correlation was measured using data from the pre-test across three measures of SRO experience (years of experience as a sworn officer, years as an SRO over the officer’s career, and years in the current SRO assignment), as these were the best available general proxies for the types of SRO characteristics measured in the SRO survey. The ICC estimates for these measures were 0.20, 0.07, and 0.09, respectively. While the average of these correlations is 0.12, the project team selected a more conservative ICC estimate of 0.15 because the variables used were proxies and the pre-test was not powered to measure ICC specifically. The way these assumptions combine into a design effect and margin of error is illustrated in the sketch following this list.
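
The sketch below shows how these assumptions combine. The clustering design effect uses the standard Kish form, the total design effect is the UWE multiplied by that factor (as noted under Table 9 below), and the margin of error is computed for a 30%/70% percentage estimate, the precision metric described in footnote 1. The average cluster size in the example is a hypothetical illustration, not a design figure.

```python
import math

def moe_30_70(n_completes: float, avg_cluster_size: float,
              icc: float = 0.15, uwe: float = 1.1) -> float:
    """Expected margin of error (+/- percentage points) for a 30%/70%
    estimate, combining the clustering and weighting design effects."""
    deff_c = 1 + (avg_cluster_size - 1) * icc   # Kish clustering design effect
    deff = uwe * deff_c                         # total DEFF = UWE x DEFFc
    n_eff = n_completes / deff                  # effective sample size
    return 1.96 * math.sqrt(0.30 * 0.70 / n_eff) * 100

# Overall SRO stage: 3,163 expected completes; the average of ~2.2
# sampled SROs per responding agency is a hypothetical figure.
print(round(moe_30_70(3163, 2.2), 2))  # about +/- 1.8 percentage points
```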


In the SRO stage, the sampling frame consists of rostered officers from responding, eligible LEAs. Unlike in the LEA stage, the size of this frame is not known in advance, so final sampling rates and stratum sample sizes will depend on the results of the LEA stage. The numbers presented here are based on the assumed rates noted above and in the tables. Table 7 shows the SRO universe distribution stratified by agency type and the number of SROs.




Table 7. SLEPS SRO Universe Distribution among Design Strata



Table 8 shows the expected SRO sampling rates and sample sizes across strata using a starting sample of 4,137 SROs. This starting sample is expected to yield 3,163 completed SRO questionnaires.


Table 8. Expected SRO Sampling Rates and Sample Sizes across Strata



This allocation of the SRO sample is expected to yield overall precision comparable to the LEA stage, while balancing precision across strata to the extent possible or practical. Tables 9 through 13 show expected SRO estimate precision and are analogous to Tables 2 through 6 presented above. SRO sampling rates shown in Table 8 are effective sampling rates relative to the second-stage sampling frame, whereas those shown in Tables 9 through 13 are effective sampling rates relative to the SRO population. Tables 9 through 13 report the effective n rather than the expected number of SRO completes, because the effective n determines the estimated margin of error.




Table 9. SLEPS SRO Sample Allocation, Assumptions, and Estimated Precision at Full Stratification



Table 9 shows expected precision at full stratification given the assumptions about the performance of the LEA and SRO samples. The total design effect (DEFF) is the UWE multiplied by the design effect from clustering (DEFFc), and because DEFFc increases with the number of SROs sampled per agency, large sampling rates in this stage would be inefficient. This particularly affects the estimated precision for the largest agencies. Precision by combined strata groupings is presented in Tables 10-13.


Table 10. SLEPS SRO Sample Allocation, Assumptions, and Estimated Precision for School-Based vs. Non-School-Based LEAs



Table 11. SLEPS SRO Sample Allocation, Assumptions, and Estimated Precision by Agency Type



Table 12. SLEPS SRO Sample Allocation, Assumptions, and Estimated Precision by Agency Size





Table 13. SLEPS SRO Sample Allocation, Assumptions, and Estimated Precision by Agency Size – 10-24 and 25+ Strata Combined



Tables 9 through 13 show that precision is relatively uniform across combined strata. While precision for agencies with more than 24 SROs is not high, increasing the sample allocation is not an efficient option given the 100% LEA-stage sampling rate and the design effects incurred through cluster correlation in the SRO stage. Table 13 shows that when agencies with more than 9 SROs are combined, precision is comparable to that of other groups.


  2. Procedures for Collecting Information


Data collection procedures. SLEPS data collection will include two phases. First, each sampled law enforcement agency (LEA) will be contacted to complete an LEA survey and a roster of all school resource officers (SROs)2 employed by the LEA at the beginning of the current school year. LEA survey materials will be addressed to the point of contact (POC) on record from the 2018 CSLLEA. The survey will include questions about the agency, the number and characteristics of the SROs employed by the agency, SRO training, and agency policies guiding the work of the SROs. The LEA survey and roster will be designed as multi-mode data collection instruments using web as the primary mode, a hard copy survey as an alternative for respondents, and a telephone nonresponse follow-up. The LEA data collection and nonresponse follow-up period will last approximately five months. LEA data collection materials will include a pre-notification letter, mail invitation package, five reminders, telephone nonresponse follow-up, and an end-of-study notification letter. A brief description of each contact method for LEAs is provided below and a timeline of LEA data collection is included in Table 14.

  • LEA pre-notification letter. The letter (Attachment 10), on BJS letterhead, will be sent to all sampled LEAs; it will highlight the importance of SLEPS and encourage participation. It will also provide contact information that can be used to obtain additional information about SLEPS.


  • LEA invitation package. Two weeks after the pre-notification letter, the invitation package will be sent to all LEA POCs and will include an invitation letter and letter of support from the Police Executive Research Forum (PERF). The invitation letter (Attachment 11), on BJS letterhead, will highlight the importance of SLEPS and encourage participation. The invitation letter will also provide instructions for accessing and completing the web survey questionnaire and roster form (including the web address, username, and password), contact information for obtaining additional information about SLEPS, and the data collection end date. The PERF letter of support (Attachment 12) will further emphasize the importance of SLEPS and provide contacts for additional information.


  • LEA mail and email reminders. Two weeks after the invitation package is sent and four weeks into the survey, the first reminder letter will be mailed to nonrespondents (Attachment 13). Three weeks later, a reminder postcard will be sent (Attachment 14). Two weeks later, a third reminder will be sent via email (Attachment 15). A fourth reminder will be mailed two weeks later and will include a reminder letter (Attachment 16), a paper copy of the LEA survey and officer roster form, and a business reply envelope. A final email reminder will be sent five weeks later (Attachment 18). Each reminder will emphasize the importance of SLEPS, provide instructions for accessing and completing the survey and roster form (via web or mail), contact information for obtaining additional information about SLEPS, and the data collection end date.


  • LEA telephone nonresponse follow-up. Three weeks after the mail reminder package and 15 weeks into the survey, we will initiate phone follow-up with nonrespondents (Attachment 17). Up to five call attempts will be made for each LEA before the case receives a “maximum call attempts reached” code. An attempt is defined as a call where an interviewer talks to the POC at the LEA or leaves a message on the POC’s answering machine. If a contact attempt is successful, the respondent will be reminded of the purpose and importance of the survey and informed of the goal of receiving a completed survey from each LEA. The telephone interviewer will reference the most recent communication in the introduction of the phone call to determine whether the POC received any of the communications sent. POCs who did not receive any of the messages or the questionnaire packet will be assisted by the interviewer in getting the information they need to complete the survey. For those who received the communications or the questionnaire packet, the interviewer will determine why they have not yet completed the survey, offer assistance, and try to gain cooperation. Respondents who agree to complete the full survey will be asked to submit the survey online but will be sent another hard copy version of the survey if requested. Those who are hesitant will be asked to consider providing responses over the phone. The interviewers will be prepared to collect responses during the phone call or to schedule an interview at a more convenient time.


  • LEA end-of-study letter. Six weeks after the start of telephone nonresponse follow-up and 21 weeks into the survey, we will mail an end-of-study letter to LEA nonrespondents. The letter (Attachment 19), on BJS letterhead, will notify nonrespondents that the study is coming to an end and that their response is needed within two weeks. Data collection will continue for approximately three more weeks to allow for receipt of any remaining questionnaires. This letter will again provide instructions for accessing and completing the survey and roster form (via web or mail) and contact information for obtaining additional information about SLEPS.


  • LEA thank you correspondence. After LEA POCs complete their survey, a thank you will be sent to the POC. POCs with an email address on file will receive a thank-you email; those without will receive a thank-you letter printed on BJS letterhead. The thank-you correspondence (Attachment 20) will thank LEA POCs for their time, notify them of the next contact concerning the SRO phase of SLEPS, and provide contact information for obtaining additional information about participation.


Table 14. SLEPS LEA Survey Contact Schedule

Week            | Stage                                   | Attachment Number
1               | LEA pre-notification letter (mail)      | 10
3               | LEA invitation package (mail)           | 11, 12
5               | LEA reminder #1 (mail)                  | 13
8               | LEA reminder #2 (mail)                  | 14
10              | LEA reminder #3 (email)                 | 15
12              | LEA reminder #4 (mail reminder package) | 16, 1, 2
15              | LEA telephone nonresponse follow-up     | 17
17              | LEA reminder #5 (email)                 | 18
21              | LEA end-of-study notification letter    | 19
Upon completion | LEA thank you letter                    | 20



In the second phase of SLEPS – the SRO survey – we will collect data from a sample of each LEA’s rostered SROs. Communications for the SRO survey will be routed through the LEA’s POC. The SRO questionnaire will ask about the training, policies, and practices related to officers’ work as SROs. The SRO survey is designed as a multi-mode data collection using web as the primary mode and a hard copy survey instrument as an alternative for respondents. The data collection and nonresponse follow-up period will last approximately six months across both modes, including an initial invitation by mail, two reminders, telephone nonresponse follow-up, and an end-of-study notification letter. SRO data collection will occur in two waves to minimize officer turnover (more information on the two-wave process is included in B.3, Methods to Maximize Response Rates). A brief description of each contact method for SRO data collection is provided below and a timeline of SRO data collection is included in Table 15.

  • SRO invitation package. The invitation packages for all selected SROs at each LEA will be sent to the LEA POC, along with a POC letter (Attachment 21) thanking the POC for their support. The LEA POC will then distribute the packages to the sampled SROs. The SRO invitation package will include an invitation letter and a letter of support from the National Association of School Resource Officers (NASRO). The invitation letter (Attachment 22), on BJS letterhead, will provide instructions for accessing and completing the web survey and highlight the importance of SLEPS. Additionally, the invitation will note that the LEA POC supports the data collection and provide the LEA POC’s contact information. The NASRO letter of support (Attachment 23) will further emphasize the importance of SLEPS and encourage SROs to participate.


  • SRO mail and email reminders. Two weeks after the invitation package is sent, a reminder email (Attachment 24) will be sent to the LEA POC. The email will provide the LEA POC with a list of officers that have not yet responded to the survey and ask the POC to remind officers to complete the survey. Two weeks later, reminder packages will be sent to the LEA POC. The reminder package will include a letter to the POC (Attachment 25), asking the POC to distribute the reminder packages to the identified officers. The reminder packages will include an SRO reminder letter (Attachment 26), an SRO survey, and a business reply envelope.


  • SRO telephone nonresponse follow-up. Two weeks after the mail reminder package, we will initiate telephone follow-up with the LEA POC for agencies with one or more nonresponding SROs (Attachment 27). Up to five call attempts will be made for each LEA POC before the case receives a “maximum call attempts reached” code. An attempt is defined as a call where an interviewer talks to the POC at the LEA or leaves a message on the POC’s answering machine. If a contact attempt is successful, the respondent will be reminded of the purpose and importance of the survey and informed of the goal of receiving a completed survey from each SRO. The telephone interviewer will reference the most recent communication in the introduction of the phone call to determine if the POC received any of the communications sent to them. Those who did not receive any of the messages or SRO packages will be assisted by the interviewer in getting the materials SROs need to complete their surveys. For those who received the communications or SRO materials, the interviewer will determine why the selected SROs have not yet completed the survey, offer assistance, and try to gain their cooperation in encouraging SROs to participate.


  • SRO end-of-study letter. Four weeks after the start of telephone nonresponse follow-up, an end-of-study letter will be sent to the LEA POC. The letter (Attachment 28), on BJS letterhead, will notify the LEA POC that the study is coming to an end and provide a list of nonresponding officers, asking the POC to encourage the officers to respond within two weeks.


  • SRO POC thank you letter. At the conclusion of the SRO data collection, a thank you letter will be sent to each LEA POC. The thank you letter (Attachment 29), on BJS letterhead, will thank the POC for coordinating the SRO data collection for their agency.


Table 15. SLEPS SRO Survey Contact Schedule

Week (Wave 1) | Week (Wave 2) | Stage                                    | Attachment Number
14            | 29            | SRO survey invitation package (mail)     | 21, 22, 23
16            | 31            | SRO reminder #1 (email)                  | 24
18            | 33            | SRO reminder #2 (mail reminder package)  | 25, 26, 3
20            | 35            | SRO telephone nonresponse follow-up      | 27
24            | 39            | SRO end-of-study notification letter     | 28
At close      | At close      | Thank you letter to POC                  | 29


Data Editing. RTI will attempt to reconcile missing or erroneous data through automated and manual edits of each questionnaire within two weeks of completion. In collaboration with BJS, RTI will develop a set of edits that use other data provided by the respondent on the survey instrument to confirm acceptable responses or to identify possible errors due to missing or inconsistent data elements. For example, if a question on the number of SROs was left blank but the SRO roster was completed, a manual edit would be made to record the intended positive response to the number-of-SROs question. BJS identified some issues of this kind during the pre-test and will incorporate reminders and checks to reduce their occurrence.
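
To make the example concrete, a minimal sketch of one such consistency edit, assuming hypothetical field names; the production edit specifications will be developed jointly by BJS and RTI:

```python
def edit_sro_count(record: dict) -> dict:
    """If the number-of-SROs item is blank but a completed roster was
    returned, fill the count from the roster length and flag the edit.
    (Illustrative rule with hypothetical field names.)"""
    roster = record.get("sro_roster") or []
    if record.get("sro_count") is None and roster:
        record["sro_count"] = len(roster)
        record["edit_flags"] = record.get("edit_flags", []) + ["sro_count_from_roster"]
    return record

# Example: blank count, three rostered officers -> count filled as 3.
print(edit_sro_count({"sro_count": None, "sro_roster": ["A", "B", "C"]}))
```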


Data Entry. Respondents completing the LEA survey, LEA roster, and SRO survey via the web instrument will enter their responses directly into the online instrument. For respondents returning their survey and/or roster form via hard copy (mail), the survey will be scanned upon receipt and the data will be extracted using TeleForm. RTI will perform a quality control check on randomly selected survey and roster forms to ensure all data are scanned correctly. For respondents completing the LEA survey over the phone, telephone interviewers will enter the LEA POC’s responses directly into the phone survey instrument. To confirm editing rules are being followed, RTI will review frequencies for the entered data, and any anomalies, inconsistencies, or unexpected values will be investigated and resolved. Throughout the remainder of the data collection period, RTI staff will conduct regular data frequency reviews to evaluate the quality and completeness of data captured in the web, hard copy, and phone modes. RTI will then deliver preliminary and final datasets to BJS at the completion of the LEA and SRO surveys.


Data Retrieval. When it is determined that additional data retrieval is needed, an Agency Liaison (AL) will contact the respondent for clarification. Throughout the data retrieval process, RTI will document the questions needing retrieval (e.g., missing or inconsistent data elements), request clarification on the provided information, obtain values for missing data elements, and examine any other issues related to the respondent’s submission.


Data Quality Review. RTI staff will conduct regular data quality reviews to evaluate the quality and completeness of data captured in both the web and paper modes. To confirm that editing rules are being followed, RTI will review frequencies for the entered data within one week of submission. Any issues will be investigated and resolved within two weeks.


  3. Methods to Maximize Response Rates


As described in the previous section, BJS and RTI will undertake various activities to ensure that high response rates are achieved for SLEPS.

To this end, the LEA and SRO survey instruments were reviewed to ensure the collection of the most pertinent information, removing any unnecessary questions to reduce burden. The questionnaires were also reviewed by BJS and RTI staff for ease of use, flow, and compliance with questionnaire design best practices to ensure ease of administration. Cognitive interviews were conducted on both the LEA and SRO surveys, along with a pre-test of data collection protocols. More details are included in B.4, Testing of Procedures.

Additionally, the web-based LEA and SRO instruments will be supported by several online help functions to maximize response rates. The web survey interfaces are designed to be user-friendly, which encourages response and promotes more accurate reporting. Because online submission is such an important response method, close attention will be paid to the formatting of the web survey instrument. The online application will be flexible so it can adapt to multiple device types (e.g., desktop computer, tablet, and phone), browser types (e.g., Internet Explorer and Google Chrome), and screen sizes. Other features of the web instrument will include the following:

  • Respondents’ answers will be saved automatically, and they will have the option to leave the survey partway through and return later to finish.

  • The online instrument will be programmed with data consistency checks and automatic prompts to ensure inter-item consistency and reduce the likelihood of “don’t know” and out-of-range responses, thereby reducing the need for follow-up with the respondent after survey submission.

  • The online instrument will also have a version of the survey that respondents can print out and mail back.

  • The LEA survey questionnaire, roster form, and SRO survey questionnaire will also have hard copies that will be sent to nonrespondents several weeks into the survey period.

  • LEA POCs will also have the option to complete their survey over the phone with a telephone interviewer.


At all stages of the survey, a Help Desk will be available to provide both substantive and technical assistance. BJS will supply the Help Desk with answers to frequently asked questions and guidance on additional questions that may arise.

The multi-stage survey administration and follow-up procedures have been incorporated into BJS’s response plans to obtain higher response rates and reduce the risk of biased estimates. Ensuring adequate response (not just unit/agency response rates, but also item responses) begins with introducing LEA POCs and SROs to SLEPS. This will be accomplished through the LEA and SRO invitation packages, postcard reminders, email reminders, reminder packages, and accompanying documents. Resources available to help LEA and SRO respondents complete the survey (e.g., telephone- or email-based Help Desk support) will be described in those communications.

BJS recognizes that LEAs may have concerns about providing identifiable information on officers when completing the officer roster form. To encourage response while addressing this concern, the roster instructions provide guidance for LEAs to anonymize the list of officers if the agency does not want to identify officers directly. The roster instructions note that the list of officers will be used only for statistical purposes and will be kept confidential. One goal of the pre-test was to evaluate the willingness of LEAs to provide this information, and the majority of LEAs provided complete roster information.

The SRO data collection will use a POC to distribute SRO survey materials to officers. The version of the LEA survey used in the pre-test gave agencies the option either to designate a POC to manage the SRO survey distribution or to allow direct contact of officers by providing an email address for each officer. In the pre-test, 70% of agencies chose to designate a POC. Furthermore, 83.4% of SROs who received the survey through a POC responded, whereas only 62.8% of SROs who were contacted directly responded. Using a designated POC streamlines the SRO data collection effort within an agency, and the POC also serves as a resource to notify BJS if a selected officer is not available (e.g., on extended leave) or no longer eligible (e.g., transferred) for the SRO survey.

The SRO data collection will be conducted across two waves in an effort to minimize turnover among the officers selected to receive the SRO survey. The first wave will start about halfway through the LEA data collection and the officer sample will be based on the rosters received up to that point. The second wave will start after the LEA data collection closes and the officer sample will be based on the rosters received during the second half of the LEA data collection. Dividing the SRO data collection into two waves reduces the time between roster submission and officer selection and mailout, reducing the chance of officers no longer being eligible for the SRO survey.

Nonresponse Adjustments

LEA unit nonresponse. The SLEPS LEA sample is designed to produce estimates of LEA characteristics, as well as SRO rosters, from a nationally representative subset of the universe of LEAs with one or more full-time SROs. Despite a purposeful design and best efforts to collect data from all LEAs, some LEAs will not complete the LEA survey, and some that do are expected not to provide an SRO roster. To ensure that all agencies in the LEA universe (as captured in the 2018 CSLLEA, i.e., the SLEPS LEA sampling frame) are represented by the set of LEA respondents, and to mitigate any potential bias introduced by differential nonresponse, weight calibration will be used.


LEA respondents’ design-based weights will be adjusted to account for unit nonrespondents using the WTADJUST procedure provided in the SUDAAN package of complex survey data analysis software. The WTADJUST procedure estimates a generalized exponential model of response propensity as a function of agency characteristics available for both respondents and nonrespondents (Folsom and Singh, 2000). LEA characteristics predictive of response propensity will be retained in final nonresponse adjustment models stratified by the agency type and size categories used in construction of LEA sampling strata. These nonresponse-adjusted weights will be used for estimation of agency characteristics measured in the LEA questionnaire.
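
WTADJUST fits a generalized exponential response-propensity model; as a rough stand-in for readers without SUDAAN, the sketch below implements the same inverse-propensity weighting-class logic with an ordinary logistic model (scikit-learn). This is an illustrative analogue, not the WTADJUST procedure itself, and the covariates and rescaling step are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nonresponse_adjust(X, responded, base_weights):
    """Propensity-based nonresponse weight adjustment (illustrative).

    X            : (n, p) array of frame covariates known for all LEAs
                   (e.g., agency type indicators, agency size)
    responded    : (n,) boolean array, True for responding LEAs
    base_weights : (n,) design-based weights

    Respondents' weights are inflated by the inverse of an estimated
    response propensity; nonrespondents' weights go to zero. A final
    rescaling preserves the total frame weight.
    """
    propensity = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
    adjusted = np.where(responded, base_weights / propensity, 0.0)
    adjusted *= base_weights.sum() / adjusted.sum()
    return adjusted
```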


A second stage of LEA weight calibration will be used to adjust the weights of LEAs that provide SRO rosters. This adjustment will be carried out as described above, using the LEA unit nonresponse-adjusted weights as input, except that rostered LEAs will comprise the set of respondents and unrostered LEA survey respondents the set of nonrespondents. These adjusted weights, which account for both LEA unit nonresponse and SRO roster nonresponse, will serve as the first component of the SRO weight.


SRO unit nonresponse. The sampling frame for SRO sample selection consists of the SRO rosters provided by LEA respondents. Each SRO’s design-based weight has two components: (1) the LEA-level nonresponse- and no-roster-adjusted weight, and (2) the SRO sampling weight. Since component 1 incorporates the LEA design-based weight as well as the LEA-level adjustments described above, the SRO design-based weight is representative of the entire SRO universe. However, as with LEAs, despite best efforts in data collection, some SROs will not respond to the SRO survey. This SRO unit nonresponse necessitates its own weight calibration step. Since within-LEA correlation on SRO survey estimates is expected to be non-ignorable, this adjustment will occur within agencies and will allocate the weight of nonresponding SROs to responding SROs in the same agency. To the extent possible – given agency size and roster quality – this adjustment will account for SRO race and gender as collected on SRO rosters.


It is expected that there will be some agencies where a within-LEA SRO nonresponse adjustment will be impossible or undesirable. This may occur, for example, when an SRO nonrespondent comes from a single-SRO LEA or when the number of nonresponding SROs within an agency outweighs the number of respondents. For these SRO nonrespondents (identified using a within-agency nonresponse cut-point of >50%), a second stage of SRO weight adjustment will be required. After SRO weights have been adjusted within agencies where possible, a second stage of weight calibration will be used to account for the remaining SROs – those from agencies with SRO nonresponse >50%. In this stage, SRO weights will be adjusted at the LEA stratum level (i.e., within agency type and size groups). After this second stage of SRO weight calibration, SRO respondents’ weights will be representative of the entire SRO universe.
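
A minimal sketch of the within-agency reallocation and the >50% fallback rule just described, assuming a simple record layout (agency ID, response indicator, current weight); the data structures and function are hypothetical:

```python
from collections import defaultdict

def adjust_sro_weights(sros):
    """Within-agency SRO nonresponse adjustment (illustrative sketch).

    Each record: {"agency_id": ..., "responded": bool, "weight": float}.
    Where an agency's nonresponse share is at most 50%, nonrespondents'
    weight is reallocated to respondents in the same agency; otherwise
    the agency is deferred to the stratum-level calibration stage.
    """
    by_agency = defaultdict(list)
    for s in sros:
        by_agency[s["agency_id"]].append(s)

    deferred = []  # agencies handled at the LEA stratum level instead
    for agency_id, officers in by_agency.items():
        respondents = [s for s in officers if s["responded"]]
        nonresponse_share = 1 - len(respondents) / len(officers)
        if not respondents or nonresponse_share > 0.5:
            deferred.append(agency_id)
            continue
        # Reallocation factor preserves the agency's total weight.
        factor = (sum(s["weight"] for s in officers) /
                  sum(s["weight"] for s in respondents))
        for s in officers:
            s["weight"] = s["weight"] * factor if s["responded"] else 0.0
    return sros, deferred
```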


Nonresponse bias analysis. In the event that the LEA- or SRO-stage response rate falls below 80%, nonresponse bias analyses will be used to assess the potential for bias resulting from differential nonresponse. For LEAs, agency characteristics available for both respondents and nonrespondents (from the agency frame, the 2018 CSLLEA) will be compared across response groups. These characteristics include:


  1. Agency type (0/1 indicators for each type),

  2. Agency size (measured by number of full-time SROs), and

  3. Geographic location (measured by Census region; 0/1 indicators for each type).


If any of these characteristics is found to differ meaningfully across response groups (as measured by Cohen’s d > 0.5), that difference will be taken as a potential indication of bias. The extent to which bias is actually present depends on how strongly each identified characteristic correlates with characteristics measured on the LEA survey: survey measurements that are strongly associated with identified frame characteristics among LEA respondents are those most at risk for nonresponse bias. This analysis will take place both at the stratum level and overall, and frame characteristics associated with both response propensity and survey measurements among respondents will be incorporated into nonresponse weight adjustment models to mitigate nonresponse bias. Additionally, since hard-to-reach agencies have lower relative response propensities, incorporating time to respond into nonresponse weighting models may help ameliorate bias. To assess whether this is necessary, survey estimates will be compared across groups of early and late responders; if meaningful differences are observed, time to respond will be included as part of nonresponse weight calibration.
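
A sketch of the screening computation described above: Cohen’s d in its pooled-standard-deviation form, with the 0.5 threshold from the text. The function names are illustrative.

```python
import numpy as np

def cohens_d(x_resp, x_nonresp):
    """Cohen's d with a pooled standard deviation (illustrative sketch)."""
    x1, x2 = np.asarray(x_resp, float), np.asarray(x_nonresp, float)
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * x1.var(ddof=1) +
                  (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    return (x1.mean() - x2.mean()) / np.sqrt(pooled_var)

def flag_potential_bias(x_resp, x_nonresp, threshold=0.5):
    """Flag a frame characteristic whose respondent/nonrespondent
    difference exceeds the d > 0.5 screen described in the text."""
    return abs(cohens_d(x_resp, x_nonresp)) > threshold
```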


To isolate the effect of SRO nonresponse and its potential for bias, an SRO-level bias analysis will be carried out by comparing weighted agency and officer characteristics across SRO response groups both overall and at the stratum level. The weight for this analysis is the LEA nonresponse- and officer roster nonresponse-adjusted weight (the first component of the SRO design-based weight). Characteristics to be compared across response groups include:


  1. Agency type (0/1 indicators for each type),

  2. Agency size (measured by reported number of full-time SROs),

  3. Geographic location (measured by Census region; 0/1 indicators for each type),

  4. SRO gender, and

  5. SRO race (0/1 indicators for each category).


Characteristics meaningfully associated with response propensity will be identified using the d statistic as was the case at the LEA level. Identified characteristics that are strongly associated with survey outcomes among SRO respondents will be incorporated into SRO nonresponse weight adjustment models as appropriate for the mitigation of bias.


  4. Testing of Procedures

The proposed questions in the 2019 SLEPS LEA and SRO instruments were developed through a multistage effort, as there were no prior iterations of SLEPS. These processes are described below.


  • Expert Panel. BJS, RTI, and PERF hosted an Expert Panel meeting in April 2015 with representatives and experts in the areas of school safety and, in particular, school resource officers. As a result of that meeting, BJS compiled the panel’s feedback and was able to:

    • Construct an operational definition of “officers working in schools”;

    • Identify key measures to collect from law enforcement agencies and from the officers working in schools;

    • Draft agency- and officer-level questionnaires;

    • Develop an overall data collection approach; and

    • Identify a frame of respondents for the data collection.


  • Cognitive Interviews. Two rounds of cognitive testing were conducted with LEAs and one round was conducted with SROs. The cognitive interviews focused on (1) the clarity of the instructions and question wording; (2) respondents’ ability and willingness to apply the study definitions when answering the questions; (3) the availability of data needed to provide accurate responses; (4) the estimated burden associated with participation; and (5) the LEA POC’s thoughts on efficient and effective data collection methodology. The first round of LEA cognitive testing included 20 respondents and led to changes that simplified the table structure of some questions and improved the flow of the instrument. The second round of cognitive testing with LEAs included 17 respondents and confirmed that the instrument changes were effective. The SRO survey was cognitively tested with 18 SROs and resulted in only minor suggestions to improve the clarity of the instrument. RTI provided a report on each cognitive testing effort describing all findings and recommendations (Attachments 30 and 31).


  • SLEPS Pre-test. A pre-test of the LEA and SRO procedures, using further refined agency-level and officer-level surveys, was conducted with 250 agencies and 475 SROs to evaluate the full range of the planned data collection protocol. The pre-test began with the agency component of the data collection in November 2017 and concluded with the close of the officer-level survey in May 2018. In July 2018, RTI provided a report on pre-test findings and recommendations for the full data collection (Attachment 9). The pre-test was critical in helping to finalize the proposed SLEPS methodology (for LEAs and SROs), questionnaires, and materials.


Prior to the national implementation of the 2019 SLEPS, BJS and RTI will (1) conduct thorough testing of the web-based survey administration system through systematic user testing, including testing skip patterns, ensuring seamless reporting of data, and running back-end data checks on entered responses, and (2) use respondent recruitment and support procedures informed by the tests described above, which in many respects are the same as those field tested and successfully employed in other BJS agency collections (e.g., LEMAS, CSLLEA). These include mailing a pre-notification letter, providing letters of support, and offering several help functions to respondents.


Additionally, RTI has developed and utilized web-based survey instruments substantially similar in format and design to the 2019 SLEPS. The web-based survey administration procedures successfully employed in similar BJS collections, such as the Law Enforcement Management and Administrative Statistics (LEMAS) survey and the Annual Surveys of Probation and Parole (ASPP; OMB 1121-0064), will be followed to ensure the successful administration of the 2019 SLEPS.


  5. Contacts for Statistical Aspects and Data Collection



  1. BJS contacts include:


Elizabeth Davis
202-305-2667
[email protected]

Shelley Hyland, Ph.D.
202-305-5552
[email protected]


Kevin Scott, Ph.D.

202-616-3615

[email protected]


  2. Persons consulted on statistical methodology:


Lance Couzens

RTI International


  3. Persons consulted on data collection and analysis:


Dustin Williams
RTI International

Duren Banks
RTI International


Chris Ellis

RTI International




1 Expected margins of error (± X%) for percentage estimates of 30/70% were measured and used to compare precision across strata and survey stages.

2 For the purpose of this collection, SROs are defined as sworn law enforcement officers who are assigned to work in any public K-12 school.
