2022 CSLLEA OMB Part B 6.1.22

Census of State and Local Law Enforcement Agencies, 2022

OMB: 1121-0346


Part B. Collection of Information Employing Statistical Methods


  1. Universe and Respondent Selection


The universe for the 2022 Census of State and Local Law Enforcement Agencies (CSLLEA) will consist of all state, county, local, and tribal law enforcement agencies in the 50 states and the District of Columbia that (1) are publicly funded and (2) employ at least the equivalent of one full-time sworn officer with general arrest powers. This will include local, county, and regional agencies; state police and highway patrol agencies; sheriff’s offices; and special-purpose agencies that are focused on tribal lands, public facilities, natural resources, transportation systems, criminal investigations, and other types of targeted enforcement.


To develop this universe, RTI first developed the Law Enforcement Agency Roster (LEAR) focusing on general-purpose agencies. In preparation for the 2018 CSLLEA, RTI conducted further analysis of existing agency lists and added special-purpose agencies to the LEAR. RTI also developed the Agency Record Management System (ARMS) as a portal to identify, validate, and upload changes to agency records in the LEAR. The ARMS also added functionality to allow BJS to continuously update contact information. The 2018 CSLLEA universe was selected from the ARMS based on the criteria identified above,1 and RTI will do the same for the 2022 CSLLEA universe. The following sections detail the steps taken in each phase of universe development.


LEAR Development. The LEAR was created between summer 2015 and winter 2016; work consisted of seven stages using information from multiple datasets:


Stage 1: The 2008 and 2014 CSLLEA universe files were linked. The resulting file served as the basis of the LEAR.


Stage 2: Information from five additional data files was appended to supplement the LEAR: (1) 2013 Law Enforcement Management and Administrative Statistics (LEMAS) survey, (2) Law Enforcement Agency Identifiers Crosswalk (LEAIC), (3) FBI’s Police Employee Data, (4) validation data from frame cleaning conducted during the Survey of Law Enforcement Personnel in Schools, and (5) agency lists generated by each state’s Peace Officer Standards and Training organization. These datasets provided both additional cases and additional variables (e.g., officer staffing counts).


Stage 3: The merged datasets were de-duplicated. Some source datasets did not contain sufficiently unique identifiers, which resulted in duplicate records. Records were reviewed and assigned duplicate identifiers where appropriate.


Stage 4: Cases were reviewed for validity. This involved a combination of online research and direct agency contact.


Stage 5: Logic checks were conducted on agency information to identify records that failed basic consistency checks (e.g., large discrepancies in sworn officer staffing size between different datasets).


Stage 6: Two additional files from the 2014 CSLLEA were verified. First, a subset of cases had been marked as out of scope; these cases were reviewed to ensure that this was an accurate disposition. Second, a set of cases with disposition changes between the intermediate 2014 CSLLEA file and the final 2014 CSLLEA file was reviewed for accuracy.


Stage 7: Cases with eligibility changes between the interim and final datasets were reviewed. Agencies marked as out of scope were reviewed to determine if this disposition was accurate and consistent with the inclusion criteria for the CSLLEA universe file.


In 2016, BJS conducted the LEMAS Body-Worn Camera Supplement and the LEMAS Core, and these data were incorporated into the LEAR. This included information on in-service status, officer staffing size, agency chief executive, and contact information (e.g., telephone number and mailing address).


ARMS Expansion of LEAR. After completion of the 2016 LEMAS, the LEAR was linked to the ARMS to allow for better tracking of changes to agency records and easier updating of contact information. Within the ARMS, further work was done to update the current list of general-purpose agencies and clean the special-purpose agencies within the LEAR that were not included in the LEMAS frames.


In preparation for the 2018 CSLLEA, RTI conducted research on state-level chiefs’ and sheriffs’ associations to identify additional in-scope agencies. Through online research and outreach to some of these associations, RTI obtained membership lists that totaled over 11,000 agencies. This combined agency list was programmatically matched with existing LEAR agencies to identify new in-scope cases. After automated matching, approximately 800 cases required manual review and classification. No law enforcement agencies were contacted by RTI directly to update their point of contact (POC) information.


RTI conducted a similar process to identify special-purpose agencies, and that work underwent manual review to determine their in-service and in-scope status. Cases went through a multi-stage review process, starting with web-based research. Most cases could be updated for in-service and in-scope status through publicly available information. Cases with insufficient information available online underwent further follow-up through email and phone contact. This work was only done to establish if an agency was operational and in scope for the 2018 CSLLEA.


The LEAR has since been updated with data from the 2018 CSLLEA, the 2020 LEMAS, and the 2021 SCLEA. These data collections provided updates on agencies’ in-service and in-scope status. The 2021 SCLEA also provided information on parent/child campus relationships that will be used to determine which campus(es) should receive the 2022 CSLLEA.


Elimination of Out-of-Scope Agencies for CSLLEA. The following actions have been taken to reduce the possibility of including out-of-scope agencies from the LEAR:


  1. Non-publicly funded law enforcement agencies (LEAs) (e.g., those serving private universities) have been vetted to reduce out-of-scope agency participation.

  2. Publicly available information was reviewed to determine if agencies do not have general sworn law enforcement authority.

  3. Agency size has been checked across a variety of sources, including past CSLLEA waves, the 2020 LEMAS core, and the 2021 SCLEA to attempt to minimize the number of agencies in the frame with less than one full-time equivalent sworn officer.

  4. Agency chief executive information has been updated based on web searches and updated POC information received from the 2018 CSLLEA and 2020 LEMAS core to reduce respondent need to manually update this information.


As the first step in frame construction for the 2022 CSLLEA, RTI extracted all cases from the current LEAR that met the previously discussed criteria. LEAs that had self-identified in a previous survey that they were ineligible will be included in the 2022 CSLLEA frame as their eligibility may have changed.


RTI compared the frequencies across agency types in the preliminary frame extracted from the LEAR with the frequencies from the 2018 CSLLEA. Any agency types with an absolute difference of greater than 5% are being investigated by RTI. RTI is determining the reason(s) behind these differences, such as the reclassification of LEAs to other agency types or ineligibility based on other recent BJS collections. Additionally, the largest agencies identified as eligible for the 2022 CSLLEA using the above process are being compared to the 2018 CSLLEA to ensure consistency and accuracy of agency-type assignments. The largest agencies were defined by the number of full-time-equivalent sworn officers from the 2018 CSLLEA.
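The 5% screen described above amounts to a simple comparison of agency-type counts between the two frames. The sketch below is illustrative only: the counts are invented, and the interpretation of "absolute difference of greater than 5%" as a relative change is an assumption.

```python
def flag_frequency_shifts(counts_2018, counts_2022, threshold=0.05):
    """Return agency types whose relative change in frame count exceeds threshold."""
    flagged = {}
    for agency_type, old in counts_2018.items():
        new = counts_2022.get(agency_type, 0)
        change = abs(new - old) / old
        if change > threshold:
            flagged[agency_type] = round(change, 3)
    return flagged

# Illustrative counts only -- not actual frame figures.
counts_2018 = {"local police": 12000, "sheriff": 3000, "special purpose": 1700}
counts_2022 = {"local police": 11900, "sheriff": 3000, "special purpose": 1450}
print(flag_frequency_shifts(counts_2018, counts_2022))  # flags "special purpose"
```

Each flagged type would then be investigated manually for reclassification or eligibility changes, as described above.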


RTI is also reviewing the addresses and phone numbers of cases in the preliminary frame to identify possible duplicates. Any matches will be reviewed to identify true duplicates from non-matches that just share characteristics. For example, small police departments and county sheriff’s offices are often located at the same address. Duplicates will then be dropped from the frame.
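A duplicate screen of this kind can be sketched as grouping frame records on normalized address and phone fields. The field names and records below are hypothetical, and a real match pass would also use agency name and type; as the text notes, shared-address matches still require manual review.

```python
from collections import defaultdict

def normalize(value):
    """Lowercase and strip punctuation/whitespace so near-identical strings match."""
    return "".join(ch for ch in value.lower() if ch.isalnum())

def possible_duplicates(records):
    """Return groups of record IDs sharing the same normalized (address, phone) key."""
    groups = defaultdict(list)
    for rec in records:
        key = (normalize(rec["address"]), normalize(rec["phone"]))
        groups[key].append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

# Hypothetical frame records.
frame = [
    {"id": "A1", "address": "100 Main St.", "phone": "555-0100"},
    {"id": "A2", "address": "100 main st", "phone": "(555) 0100"},  # likely duplicate
    {"id": "A3", "address": "200 Oak Ave.", "phone": "555-0200"},
]
print(possible_duplicates(frame))  # candidate groups for manual review
```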


To maximize efficiency and ensure frame quality, RTI is also comparing the preliminary frame for the 2022 CSLLEA to the National Directory of Law Enforcement Administrators. The directory, which is updated annually, contains nearly 37,000 listings of various types of law enforcement agencies, academies, and professional organizations. The data include contact information and are validated through direct contact year-round.2


Status issues, potential duplicates, and other eligibility issues that are not easily resolved will be investigated by reviewing information from previous BJS data collections and through online searches. Reviews will be conducted in two stages within the ARMS. During the first stage, research will be done online using publicly sourced information. Sources will include (1) records from prior surveys, (2) webpages, (3) city-, county-, or organization-level budgets, (4) official governmental reports, and (5) news reports/articles from reputable sources. All updated data will be entered in the ARMS and then tracked and approved through the standard verification process before becoming part of LEAR. During the second stage, cases that could not be resolved through these means will be initiated for direct phone or email outreach. BJS requested and received generic clearance approval (OMB 1121-0339) for 25 hours for frame development outreach. Any updated agency information will be entered in the ARMS and passed on to the LEAR as appropriate.

Although the development and review of the CSLLEA universe are not finished, an estimated 20,000 agencies will be defined as in scope for the collection.


  2. Procedures for Collecting Information



Data Collection Procedures. Data collection will begin with a survey invitation letter mailed via USPS to the POC for each LEA to inform them about the survey (Attachment 3). This letter will be signed by the Director of BJS and explain the purpose and significance of the survey. It will include the survey web address and agency-specific log-in credentials. The survey invitation letter will also provide a toll-free telephone number and project-specific email address for the survey Help Desk, should the agency have any questions. Instructions for changing the POC via the survey website, fax, or telephone will be included, in the event the LEA needs to change the POC to a more appropriate person.


Included with the survey invitation letter will be an informational flyer (Attachment 4). The flyer will describe the overall BJS law enforcement survey program and how the CSLLEA relates to the LEMAS and other BJS collections. Also included will be a letter of support signed by the major law enforcement organizations in the United States (i.e., International Association of Chiefs of Police, National Sheriffs’ Association, Major County Sheriffs of America, Major City Chiefs of Police, and Association of State Criminal Investigative Agencies) (Attachment 5). Lastly, a POC Update Form (Attachment 6) will also be included so that the recipient can use it to provide contact information for a newly designated POC. To maximize efficiency and decrease costs over the long term, the initial mailing will not include a hardcopy instrument.3

After the invitation letter mailing, agencies will receive the following additional communications during the data collection period:

  • Approximately one week after sending the survey invitation letter, RTI will send an email message that is identical to the survey invitation letter (Attachment 7) to those recipients for whom an email address is available to confirm receipt of the study materials.

  • Three weeks after sending the survey invitation letters and two weeks after the email invitation, a first reminder letter will be sent to POCs at nonresponding agencies, including those who are newly identified (Attachment 8). These letters, signed by the BJS Project Manager, will express the importance of the CSLLEA to the LEA community and encourage response via the online survey (or hardcopy, if preferred).

  • One week after the first reminder letter, a second reminder email will be sent to POCs at nonresponding agencies (Attachment 9). These emails, signed by the BJS Project Manager, will reflect similar content as the first reminder letters.

  • Two weeks after sending the second reminder, RTI will send a third reminder—a self-mailer (postcard) with the URL and login instructions (Attachment 10). Changing the appearance of the communication to a self-mailer will draw the attention of the recipients and help spur participation.

  • Two weeks after the third reminder is sent, RTI will send a fourth reminder—a letter via USPS—to POCs in agencies that have not responded (Attachment 11). This mailing will also include a hardcopy questionnaire (Attachment 1) and return envelope.

  • One week after sending the fourth reminder, RTI will send a fifth reminder via email (Attachment 12).

  • Three weeks after the fifth reminder, a letter will be sent to nonrespondents as a sixth reminder. Like the fourth reminder, this letter (Attachment 13) will include the URL and login instructions.

  • One month after the sixth reminder, another letter (Attachment 14) will be sent via UPS as a seventh reminder. Changing the mode of delivery is expected to draw the attention of the recipient and help spur participation.

  • Ten days after the seventh reminder is sent (approximately four months after sending the survey invitation letters), RTI will begin telephone follow-up with all nonresponding LEAs (Attachment 15). These contacts will include efforts to complete the survey during the call and to prompt nonrespondents to complete the survey via the web.

  • About three weeks after telephone follow-up begins, RTI will send an eighth reminder email to LEAs that have not responded (Attachment 16).

  • Four weeks after the eighth reminder email is sent, RTI will mail and email the ninth and tenth reminders, respectively (Attachments 17 and 18).

  • The final correspondence will be an end-of-study letter and email (Attachments 19 and 20) to nonrespondents to announce the forthcoming closure of the study and make a final appeal to participate. This communication will be sent approximately eight weeks before the survey is closed.

Throughout the data collection period, respondents will receive a thank-you email or letter, depending on the completion mode (Attachment 21). The text will formally acknowledge receipt of the survey and state that the agency may be contacted for clarification once their survey responses are processed.


The 2022 CSLLEA will include an experiment designed to test the effect of sending a prenotification package to agency heads (e.g., chiefs, sheriffs) announcing the planned launch of the survey. Potential benefits include (1) cost savings due to obtaining survey responses earlier in the data collection period and (2) improved data quality due to higher rates of participation. The experimental group will include 1,000 randomly selected agencies: 250 local police departments with fewer than 100 full-time sworn officers; 250 local police departments with 100 or more full-time sworn officers; 250 sheriff’s offices with fewer than 100 full-time sworn deputies; and 250 sheriff’s offices with 100 or more full-time sworn deputies.


The experimental prenotification package will be sent to the selected agencies via USPS approximately 3 weeks prior to the survey launch. The package will include a letter signed by the Director of BJS announcing the upcoming survey. The letter will include the survey web address and agency-specific log-in credentials (Attachment 22) that can be used to update contact information for the agency chief executive and to identify a POC for the survey. The survey prenotification letter will also provide a toll-free phone number and project-specific email address for the survey Help Desk, should the POC have any questions. Included with the prenotification letter will be the information flyer and the letter of support signed by the major law enforcement organizations in the United States.4


The 2022 CSLLEA will employ a multi-mode approach that relies primarily on web-based data collection; nonresponse follow-up efforts will allow for both hardcopy and phone survey response. When the 2018 CSLLEA ended data collection, there was an overall response rate of 94% (about 16,500 agencies). Of these, almost 14,500 agencies (87%) responded via the web. Due to increased web-based capabilities of LEAs and the project’s strong encouragement to respond using the web-based data collection tool, BJS expects that most of the agencies responding to the 2022 CSLLEA will use the web-based option.


There is some ambiguity in the experimental literature about whether researchers should offer web-based and paper modes concurrently or sequentially. Studies have found that offering a concurrent choice of mode does little to improve response rates. Millar and Dillman found among college students that the concurrent approach did not improve response; however, it did not significantly diminish response rates either. Yet, they found that a sequential approach of web then mail improved response.5 Medway and Fulton found in their meta-analysis, which included 19 experimental comparisons, that offering concurrent web-based and mail surveys reduced response rates.6 It should be noted that these results are from individual-level surveys, many conducted among college student samples, and not based on establishment surveys.


Upon receipt of a survey (web or hardcopy), data will be reviewed and edited, and if needed, the respondent will be contacted to clarify answers or provide missing information. The hardcopy survey will be developed using TeleForm, which will allow the surveys to be scanned in and the data read directly into the same database containing the web survey data. This will ensure that the same data quality review procedures are applied to all survey data, regardless of response mode. The following is a summary of the data quality assurance steps that RTI will observe during the data collection and processing period:

Data Editing. RTI will attempt to reconcile missing or erroneous data through automated and manual edits of each questionnaire. In collaboration with BJS, RTI will develop a list of edits that can be completed by referring to other data provided by the respondent on the survey instrument. For example, if a screening question was left blank, but the follow-up questions were completed, a manual edit could be made to indicate the intended positive response to the screening question. Through this process, RTI can quickly identify which hardcopy cases require follow-up and indicate the items that need clarification or retrieval from the respondent.

Data Retrieval. When it is determined that data retrieval is needed, an Agency Liaison will contact the respondent for clarification. Throughout the data retrieval process, RTI will document the questions needing retrieval (e.g., missing or inconsistent data elements), request clarification on the provided information, obtain values for missing data elements, and examine any other issues related to the respondent’s submission.

Data Entry. Respondents completing the survey via the web instrument will enter their responses directly into the online instrument. For those respondents returning the survey via hardcopy (mail or fax), data will be scanned once received and determined complete. Once the data have been entered into the database, they will be made available to BJS via an SFTP site. To confirm that editing rules are being followed, RTI will review frequencies for the entered data after the first 10% of cases are received. Any issues will be investigated and resolved. Throughout the remainder of the data collection period, RTI staff will conduct regular data frequency reviews to evaluate the quality and completeness of data captured in both the web and hardcopy modes.


  3. Methods to Maximize Response Rates


Minimizing Nonresponse

The CSLLEA has historically achieved high rates of survey response, with the 2008 and earlier administrations achieving a 99% response rate and the 2018 CSLLEA reaching 94%. BJS and RTI will undertake various procedures to maximize the likelihood that the 2022 CSLLEA reaches a similar level of participation. The 2014 CSLLEA achieved only an 84% response rate but employed a significantly longer questionnaire, with an estimated burden of 1 hour for 21 items. Because survey length likely affected the overall response rate, the 2018 CSLLEA was revised to better mirror the 2008 CSLLEA, which achieved an over 99% response rate. The 2008 CSLLEA had 6 items and the 2018 CSLLEA had 7 items, and both instruments had an estimated burden of 30 minutes. Similarly, the 2022 CSLLEA will have 7 items and an estimated burden of 32 minutes.

BJS will use a web-based instrument supported by several online help functions to maximize response rates. For convenience, respondents will receive the survey link in an email invitation and a mailed hardcopy invitation. A Help Desk will be available to provide both substantive and technical assistance. BJS will supply the Help Desk with answers to frequently asked questions and guidance on additional questions that may arise (Attachment 23). In addition, the web survey interface is user-friendly, which encourages response and promotes more accurate responses. Because online submission is such an important response method, close attention will be paid to the formatting of the web survey instrument. The online application will be flexible so it can adapt to meet the needs of multiple device types (e.g., desktop computer and tablet), browser types (e.g., Microsoft Edge and Google Chrome), and screen sizes. Other features in the instrument will include the following:


  1. Respondents’ answers will be saved automatically, and they will have the option to leave the survey partway through and return later to finish.

  2. The online instrument will be programmed with data consistency checks and automatic prompts to ensure inter-item consistency and reduce the likelihood of “don’t know” and out-of-range responses, thereby reducing the need for follow-up with the respondent after survey submission.

  3. The online consistency checks will be tailored based on previously reported data or on the type and size of agency, so that they are triggered only when appropriate. For example, an agency that reported 500 full-time sworn officers in the 2018 CSLLEA will not be prompted if the new response is 525, but it will be prompted if the response is 52. Checks for agencies that are new to the CSLLEA or did not respond to an item in the 2018 CSLLEA will be based on the mean reported value of the item by type of agency. Tailored checks, using a less robust approach, were used on the 2020 LEMAS and reduced the proportion of respondents requiring follow-up to 45% compared to nearly 100% on the 2016 LEMAS.

  4. LEAs may also download and print a hardcopy survey from the website or request one from the Help Desk.
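The tailored consistency check described in item 3 might be sketched as follows. The 50% tolerance, the function name, and the fallback to a type-level mean for new agencies are illustrative assumptions; the actual thresholds programmed into the instrument are not specified here.

```python
def needs_prompt(new_count, prior_count=None, type_mean=None, tolerance=0.5):
    """Return True if a reported staffing count should trigger a soft check.

    Compares the new value against the agency's prior report when available,
    otherwise against the mean for its agency type (assumed fallback).
    """
    baseline = prior_count if prior_count is not None else type_mean
    if baseline is None or baseline == 0:
        return False  # no basis for comparison
    return abs(new_count - baseline) / baseline > tolerance

print(needs_prompt(525, prior_count=500))  # modest change: no prompt
print(needs_prompt(52, prior_count=500))   # likely a dropped digit: prompt
```

This mirrors the example in the text: an agency that previously reported 500 full-time sworn officers is not prompted for 525 but is prompted for 52.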


To obtain higher response rates and to ensure unbiased estimates, multi-stage survey administration and follow-up procedures have been incorporated into BJS’s response plans. Ensuring adequate response (not just unit/agency response rates, but also item responses) begins with introducing agencies to the survey. In previous years, the CSLLEA did not send a prenotification letter introducing agency heads to the survey. For the 2022 CSLLEA, BJS will test whether a prenotification letter can improve response rates by conducting an experiment in which a prenotification letter will be sent to a random sample of 1,000 agencies. BJS will compare the response rates and time to respond of these agencies with those of agencies that did not receive a prenotification letter to determine whether the prenotification letter has a significant positive impact and should be used for all agencies in future iterations of the CSLLEA.


For the agencies that do not receive a prenotification letter, their survey introduction will be accomplished initially through the invitation letter and accompanying documents (Attachments 3-6). LEAs will receive periodic reminder letters and emails. Resources available to help the respondent complete the survey (e.g., telephone- or email-based Help Desk support) will be described in each communication. RTI will provide LEAs with online and fax methods to identify respondents and change the POC assignment if needed. The online and hardcopy versions of the instrument will capture the name of the individual who completes the survey to facilitate follow-up later.

Adjusting for Nonresponse
With any survey, it is typically the case that some of the selected units (i.e., LEAs) will not respond to the survey request (i.e., unit nonresponse) and some will not respond to particular questions (i.e., item nonresponse). Weighting will be used to adjust for unit nonresponse in the 2022 CSLLEA. To determine which factors to use in the agency nonresponse weight adjustments, a procedure available in RTI’s SUDAAN software based on the Generalized Exponential Model will be used to model the response propensity using information from the LEAR (e.g., agency characteristics such as geography, operating budget, whether officers have arrest powers) within sampling strata.7 Ideally, only variables highly correlated with the outcomes of interest will be included in the model used to reduce potential bias. Given the expected differential response rates by agency type and size, the weighting adjustment procedures will attempt to minimize the bias in the estimates within these domains.
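As a rough illustration of the principle behind the nonresponse weight adjustment, the sketch below applies a simple weighting-class adjustment: within each cell (e.g., agency type by size class), respondents’ base weights are inflated by the inverse of the cell’s weighted response rate. This is a simplified stand-in, not the actual procedure — the 2022 CSLLEA will use SUDAAN’s Generalized Exponential Model, which calibrates on multiple covariates simultaneously. Cell labels and weights here are hypothetical.

```python
from collections import defaultdict

def adjust_weights(cases):
    """Weighting-class nonresponse adjustment.

    cases: list of dicts with keys 'id', 'cell', 'base_weight', 'responded'.
    Returns adjusted weights for respondents; each cell's respondent weights
    are scaled so they sum to the cell's total base weight.
    """
    weight_sum = defaultdict(float)  # total base weight per cell
    resp_sum = defaultdict(float)    # responding base weight per cell
    for c in cases:
        weight_sum[c["cell"]] += c["base_weight"]
        if c["responded"]:
            resp_sum[c["cell"]] += c["base_weight"]
    adjusted = {}
    for c in cases:
        if c["responded"]:
            factor = weight_sum[c["cell"]] / resp_sum[c["cell"]]
            adjusted[c["id"]] = c["base_weight"] * factor
    return adjusted

# In a census, base weights start at 1: with 3 of 4 agencies responding in a
# cell, each respondent carries 4/3 of a case.
cases = [
    {"id": i, "cell": "sheriff-small", "base_weight": 1.0, "responded": r}
    for i, r in enumerate([True, True, True, False])
]
print(adjust_weights(cases))
```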

As previously stated, and based on the traditional CSLLEA response patterns, an overall response rate of approximately 95% is expected. To ensure that nonresponding agencies are not fundamentally different than those that participate, a nonresponse bias analysis will be conducted if the agency-level response rate obtained in the 2022 CSLLEA falls below 80%.


Administrative data on agency type, size, and census region or division will be used in the nonresponse bias analysis. For each agency characteristic, BJS will compare the distribution of respondents to that of nonrespondents. A Cohen’s effect size statistic will be calculated for each characteristic. If any characteristic has an effect size that falls into the “medium” or “large” category, as defined by Cohen, then there is potential for bias in the estimates.8 Any such characteristic will be included in the nonresponse model used to adjust weights to minimize the potential for bias in the estimates. In addition to estimating effect sizes, an examination of early and late responders will be conducted. If late responders (i.e., those that take more contact attempts before responding) differ significantly on the key outcomes of interest, that is also an indication of potential bias. Comparisons will be made to determine if the potential for bias varies by agency type and size.
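For a binary agency characteristic, this screen could be computed with Cohen’s h, the effect size for a difference between two proportions, using Cohen’s conventional cutoffs (0.2 small, 0.5 medium, 0.8 large). The sketch and the proportions in it are illustrative, not actual CSLLEA figures.

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: effect size for the difference between two proportions."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

def size_category(h):
    """Classify h using Cohen's conventional cutoffs."""
    if h >= 0.8:
        return "large"
    if h >= 0.5:
        return "medium"
    if h >= 0.2:
        return "small"
    return "negligible"

# E.g., 32% of respondents vs. 36% of nonrespondents share some characteristic
# (hypothetical figures):
print(size_category(cohens_h(0.32, 0.36)))
```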

  4. Final Testing of Procedures


Most of the questions included in the 2022 CSLLEA mirror those used in either the 2018 CSLLEA or the 2020 LEMAS. Reviews of the data from these surveys indicate that these questions performed well in the past and did not require revisions. Therefore, testing of the 2022 CSLLEA items was deemed unnecessary.










  5. Contacts for Statistical Aspects and Data Collection


  1. BJS contacts include:

Elizabeth Davis

Statistician

202-305-2667

[email protected]

Alexia Cooper

Law Enforcement Statistics Unit Chief

202-307-0582

[email protected]


  2. Persons consulted on statistical methodology:


Harley Rohloff

RTI International


  3. Persons consulted on data collection and analysis:


Tim Smith

RTI International


Renee Mitchell, PhD

RTI International



References

Banks, D., Hendrix, J., Hickman, M.J., & Kyckelhahn, T. (2016). National Sources of Law Enforcement Employment Data. Washington, D.C.: Bureau of Justice Statistics.


Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Routledge, 2013.

Folsom, R.E., & Singh, A.C. (2000). The generalized model for sampling weight calibration for extreme values, nonresponse, and poststratification. In Proceedings of the American Statistical Association’s Survey Research Methods Section, 598-603.

Medway, R.L., & Fulton, J. (2012). When more gets you less: a meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly, 76(4), 733-746.

Millar, M.M., & Dillman, D.A. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269.


National Research Council. (2009). Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Panel to Review the Programs of the Bureau of Justice Statistics. Robert M. Groves and Daniel L. Cork, eds. Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.


Reaves, B. (2011). Census of State and Local Law Enforcement Agencies, 2008. Washington, D.C.: Bureau of Justice Statistics.


1 The LEAR contains agencies that are in scope for the CSLLEA, as well as agencies that are not in scope.

3 A delayed hardcopy survey mailing will be done with all agencies except for the 275 tribal agencies in the CSLLEA frame. Due to the sporadic access to internet service on tribal lands, hardcopy survey submission will be the primary method for tribal agencies. Therefore, the hardcopy survey will be included in the initial mailing with a secondary option of web submission for these agencies.

4 The 1,000 agencies randomly selected for the prenotification experiment will receive the study flyer and letter of support in both their prenotification package and their survey invitation package.

5 Millar, M.M., & Dillman, D.A. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269.

6 Medway, R.L., & Fulton, J. (2012). When more gets you less: a meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly, 76(4), 733-746.

7 Folsom, R.E., & Singh, A.C. (2000). The generalized model for sampling weight calibration for extreme values, nonresponse, and poststratification. In Proceedings of the American Statistical Association’s Survey Research Methods Section, 598-603.

8 Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Routledge, 2013.


