2016 Body Worn Camera Supplement to the Law Enforcement Management and Administrative Statistics (LEMAS) Survey

OMB: 1121-0354


Part B. Collection of Information Employing Statistical Methods


  1. Universe and Respondent Selection


The target population for the 2016 Body-Worn Camera Survey Supplement (BWCSS) is all local police, county sheriff, and state police agencies with at least one full-time sworn officer. The BWCSS will use the 2014 Census of State and Local Law Enforcement Agencies (CSLLEA) to identify the universe of eligible respondents. The sample of agencies to be surveyed will be drawn from this list based upon the sampling design described below.

  2. Procedures for Collecting Information


Once the sample is complete, data collection will begin with a letter mailed via USPS to the point of contact (POC) for each law enforcement agency (LEA) to inform him or her about the 2016 BWCSS, a supplement to the 2013 Law Enforcement Management and Administrative Statistics (LEMAS) survey, and to notify him or her that the web survey invitation is forthcoming. This letter, referred to in the rest of the package as the lead letter, will also explain the purpose of the survey (see Attachment 1). Unless specified by the agency, the POC will be the sheriff or the chief of police. The lead letter will include help desk information (a toll-free telephone number and a project-specific e-mail address) should the POC have any questions or want to update POC information. Accompanying this lead letter will be the Confidentiality Assurances (see Attachment 2), which state that BJS will use the data collected under this program only for research and statistical purposes, as described in Title 42, USC §§ 3735 and 3789g.


Following the lead letter, the POC will receive an e-mail invitation with information on how to complete the web survey. The e-mail invitation will include a URL and a survey access code. As with the lead letter, help desk information will be provided should the POC have any questions or want a paper copy of the survey (see Attachment 3).


Sampling Frame

The 2014 CSLLEA is used as the universe list of general purpose law enforcement agencies; the sample of agencies to be surveyed will be drawn from this list based upon the sampling design described below. One of the primary purposes of the CSLLEA is to provide an accurate sampling frame for the LEMAS program. The CSLLEA is the most systematic and comprehensive source of national data on the number of sworn and non-sworn personnel employed by law enforcement agencies nationwide; by collecting data on law enforcement agency functions, facilities, personnel, and budget, it provides a complete accounting of policing agencies that employ the equivalent of at least one full-time sworn officer. The 2014 CSLLEA was administered in 2014 and 2015, and so represents the most current information on the universe of state and local law enforcement agencies in the United States.


BWCSS Sampling Designs and Response Rates

The BWCSS will use a sampling design based on the same protocol used to develop the sample for the LEMAS survey. Specifically, LEMAS uses a stratified simple random sample design in which LEAs are stratified by agency type and agency size. Agency type has three categories: (1) local police, (2) sheriffs' offices, and (3) state police. To obtain a representative sample across agency sizes, agency size is split into seven categories: (1) 1 officer, (2) 2-4 officers, (3) 5-9 officers, (4) 10-24 officers, (5) 25-49 officers, (6) 50-99 officers, and (7) 100 or more officers. In the LEMAS, LEAs with 100 or more officers are sampled with certainty.1 In other words, these agencies are self-representing (SR). For agencies with fewer than 100 officers, the LEMAS has employed various allocation methods depending on the analytic goals of the particular iteration.
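To make the stratification concrete, the following sketch (in Python) assigns an agency to one of the 21 type-by-size strata. It is an illustrative reconstruction based on the categories above, not code from the LEMAS program.

    import bisect

    # Illustrative LEMAS-style stratification: 3 agency types x 7 size
    # categories. Agencies with 100 or more officers fall in the
    # certainty (self-representing) stratum.
    SIZE_BOUNDS = [2, 5, 10, 25, 50, 100]   # lower edges of categories 2-7
    SIZE_LABELS = ["1", "2-4", "5-9", "10-24", "25-49", "50-99", "100+"]

    def stratum(agency_type, sworn_officers):
        size = SIZE_LABELS[bisect.bisect_right(SIZE_BOUNDS, sworn_officers)]
        return (agency_type, size)

    print(stratum("local police", 1))    # ('local police', '1')
    print(stratum("sheriff", 37))        # ('sheriff', '25-49')
    print(stratum("state police", 250))  # ('state police', '100+'), certainty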


The LEMAS has traditionally experienced a high response rate. For example, the 2013 LEMAS had an overall response rate of 86.3%. However, as seen in Table 1, the response rate varied by agency type and agency size. Other surveys on body-worn camera use have experienced lower response rates among smaller agencies. Therefore, based on the LEMAS and other BWC surveys, we will assume a response rate that differs by agency type and agency size, with lower response rates for smaller agencies. Table 2 presents the assumed response rates for the BWCSS.


Past experience on the LEMAS and data collection efforts undertaken by others on BWC-related topics suggest that response rates will be lower among smaller agencies and non-local agencies. We recognize this lower response rate as a potential limitation and will take steps to identify and ameliorate any issues caused by the differential response rates. First, we will test for potential bias in response by comparing agencies on known characteristics, including agency size, type, and geographic region. Second, we will reduce the likelihood of potential bias through nonresponse adjustments, which are discussed in a later section.



Table 1: 2013 Law Enforcement Management and Administrative Statistics Survey Response Rates by Agency Type and Size

Agency Type        FTE Sworn Officers   Sample Size   Response Rate
Local Police       100+                 632           91.1%
                   50-99                311           91.3%
                   25-49                366           89.0%
                   10-24                457           87.9%
                   5-9                  347           84.0%
                   2-4                  188           87.8%
                   1                    52            69.8%
Sheriff's Office   100+                 370           81.9%
                   50-99                109           74.8%
                   25-49                135           88.1%
                   10-24                162           77.0%
                   5-9                  109           74.5%
                   2-4                  42            85.4%
                   1                    6             83.3%
State              All                  50            92.0%

Table 2: Assumed Response Rates by Agency Type and Size

Agency Type        Agency Size              Response Rate
Local Police       Self-representing        95%
                   Non-self-representing1   55%
Sheriff's Office   Self-representing        85%
                   Non-self-representing    45%
State Police       All                      90%

1. Non-self-representing agencies comprise agencies with fewer than 100 officers. Based on past experience, we anticipate lower response rates from these smaller agencies.



Analytic Goals


The BWCSS is designed to produce precise estimates of body-worn camera prevalence. It is expected that about one-third of agencies use body-worn cameras and that this rate is similar across agency sizes.2 For the two-thirds of agencies that do not use body-worn cameras, BJS wishes to understand the reasons why. Accordingly, agencies that indicate they do not use BWCs will be asked to indicate all of the reasons why not. The prevalence of each of these reasons is unknown; however, the precision of such an estimate for various levels of prevalence will be examined.


The domains of interest are:

  • Overall – all agency types and sizes

  • Local police departments – all sizes

  • Sheriff offices – all sizes

  • State agencies – all sizes

  • Local police – non-self-representing

  • Sheriff offices – non-self-representing


Relative standard error (RSE) is used as the measure of precision in the sampling design, where the RSE is the ratio of an estimate's standard error to the estimate itself. RSE is a standardized measure of precision regardless of the value of the estimate. For each domain, the respondent size is chosen such that the RSE of body-worn camera prevalence is at most 7%. The design is chosen to achieve an RSE of 6% nationally and up to 7% in some analytic domains. This particular RSE was chosen after examining the relationship between sample size and precision. BJS examined the required sample size for other precision levels: achieving a 5% RSE for body-worn camera prevalence would require a total responding sample size of 4,649, while a 7.5% RSE would require a responding sample size of only 2,887.
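The precision targets above can be checked with the standard RSE formula for an estimated proportion. The sketch below is our reconstruction, assuming an expected prevalence of one-third (footnote 2), the 1.32 unequal-weighting design effect reported in the allocation section, and a finite population correction; it is not BJS's actual variance computation.

    import math

    # Approximate RSE of an estimated proportion under the BWCSS design.
    # Assumed inputs: prevalence p = 1/3, design effect deff = 1.32,
    # n = 3,122 expected respondents out of N = 16,594 agencies on the
    # frame (finite population correction).
    def rse_of_proportion(p, n, N, deff=1.0):
        fpc = 1.0 - n / N                     # finite population correction
        var = deff * p * (1.0 - p) / n * fpc  # approximate variance of p-hat
        return math.sqrt(var) / p             # RSE = standard error / estimate

    print(round(100 * rse_of_proportion(1/3, 3122, 16594, deff=1.32), 2))
    # -> 2.62, matching the national RSE shown in Table 4

Applying the same formula within a domain, without the design effect, approximately reproduces the other Table 4 values; for example, non-self-representing local police (1,720 expected respondents out of 12,810 agencies, per Table 3) yield about 3.17%.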


There are no absolute limits on acceptable RSE values. The desire for increased precision must be balanced against additional burden, time constraints, and costs. For example, BJS is targeting a 10% RSE for subnational estimates of victimization using NCVS data. Other reports using NCVS data have censored findings only once the RSE exceeds 50%.3 The CDC has used an RSE in excess of 30% as a cutoff for statistical reliability.4 The available literature suggests that our 7% within-domain maximum RSE is an appropriate goal.


Allocation of Agencies between Strata


The allocation and optimization of sample size is based on the expected respondent counts under the assumed response rates for each agency size and type (see Table 2). For non-self-representing (NSR) strata, the sample size is allocated proportionally. Proportional allocation is chosen because the estimate of interest is assumed to be the same across strata, and proportional allocation creates weights that are equal for all units in the NSR strata, thus minimizing the unequal weighting effect (UWE).5 Optimization proceeded in two steps: first, the number of respondents necessary was calculated; then, the sample sizes were obtained by dividing the necessary respondent counts by the expected response rates. To achieve the desired precision of 6% RSE, a total of 3,122 respondents is needed, requiring a total sample of 5,063 (see Table 3 and the sketch below). Based on this allocation, the national estimate has a design effect due to unequal weighting of 1.32. This design effect was taken into account when estimating the resulting precision for the key outcomes.
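As a rough illustration of this two-step optimization, the sketch below reconstructs the local police non-self-representing rows of Table 3 from the stratum populations, the required respondents, and the assumed 55% response rate; rounding here can differ from the published table by an agency or two per stratum.

    import math

    # Two-step allocation for the local police NSR strata: (1) allocate
    # the required respondents proportionally to stratum population,
    # (2) inflate by the expected response rate to get the sample size.
    # Populations and the 1,720 required respondents follow from Table 3;
    # the 55% response rate comes from Table 2.
    populations = {"50-99": 809, "25-49": 1659, "10-24": 2958,
                   "5-9": 2548, "2-4": 2729, "1": 2107}
    respondents_needed = 1720
    response_rate = 0.55

    total_pop = sum(populations.values())
    for stratum, pop in populations.items():
        resp = respondents_needed * pop / total_pop  # proportional allocation
        sample = math.ceil(resp / response_rate)     # inflate for nonresponse
        print(f"{stratum:>6}: respondents={resp:6.1f} sample={sample:4d}")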


Table 3: Distribution of Population, Sample Allocation, and Respondents for the 2016 BWCSS

Agency Type        FT Sworn Officers   Population   Sample Size   Respondents
Local Police       100+                681          681           647
                   50-99               809          198           109
                   25-49               1,659        404           222
                   10-24               2,958        722           397
                   5-9                 2,548        624           343
                   2-4                 2,729        665           366
                   1                   2,107        515           283
Sheriff's Office   100+                420          420           357
                   50-99               327          98            44
                   25-49               544          162           73
                   10-24               889          264           119
                   5-9                 546          162           73
                   2-4                 225          69            31
                   1                   102          29            13
State              All                 50           50            45
Total                                  16,594       5,063         3,122



Sampling Error


The sample allocation and size are specified to meet a precision of no worse than 7% RSE in all analytic domains for the prevalence of body-worn camera usage. For state agencies, a 6.73% RSE is the best achievable under the expected response rate: because all state agencies are selected with certainty, precision in that domain is limited entirely by the response rate assumption. In some domains the precision is better than 6%, particularly for the self-representing groups (see Table 4).


Table 4: Precision of Body-Worn Camera Use by Analytic Domain

Domain   National   Local Police   Sheriff   State   Local Police NSR   Sheriff NSR
RSE      2.62%      2.95%          5.75%     6.73%   3.17%              6.99%



Additionally, the precision of the estimated reasons an agency does not use body-worn cameras is of interest. It is anticipated that two-thirds of respondents do not use body-worn cameras and thus will report a reason.6 The prevalence rate for each reason is unknown; however, the precision of such an estimate is given for various levels of prevalence in Table 5. Because the RSE of a prevalence rate is the ratio of the standard error to the prevalence rate, the RSE increases as the prevalence rate decreases.


Table 5: Relative Standard Errors for Reasons an Agency Does Not Use Body-Worn Cameras, by Prevalence Rate and Analytic Domain

Prevalence Rate   National   Local Police   Sheriff   State    Local Police NSR   Sheriff NSR
5%                9.88%      11.14%         21.71%    25.55%   11.98%             26.41%
10                6.80       7.67           14.94     17.59    8.24               18.17
15                5.40       6.08           11.85     13.95    6.54               14.42
20                4.53       5.11           9.96      11.72    5.50               12.12
25                3.93       4.43           8.63      10.15    4.76               10.49
30                3.46       3.90           7.61      8.95     4.20               9.25
35                3.09       3.48           6.79      7.99     3.74               8.26
40                2.78       3.13           6.10      7.18     3.37               7.42
45                2.51       2.83           5.51      6.48     3.04               6.70
50                2.27       2.56           4.98      5.86     2.75               6.06
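The published values can be approximated by applying the RSE formula for a proportion to the roughly two-thirds of respondents expected to answer the reason items. The sketch below reconstructs the national column to within rounding, assuming the 1.32 design effect and the national finite population correction used earlier; it is illustrative, not BJS's computation.

    import math

    # Approximate reproduction of Table 5's national column. Assumptions:
    # two-thirds of the 3,122 expected respondents answer the reason items;
    # national design effect deff = 1.32; finite population correction uses
    # n = 3,122 of N = 16,594.
    n, N, deff = 3122, 16594, 1.32
    m = 2 / 3 * n        # respondents asked for a reason
    fpc = 1.0 - n / N

    for q in [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]:
        se = math.sqrt(deff * q * (1 - q) / m * fpc)
        print(f"prevalence {q:.0%}: RSE = {100 * se / q:.2f}%")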


Final Sampling Design


The final design mirrors that used in the 2013 LEMAS, with the exceptions that full-time sworn officers rather than full-time equivalent sworn officers are used to construct strata and that the optimization criterion was the precision of the body-worn camera prevalence estimate. The design utilizes the most recent CSLLEA to determine the universe of eligible agencies, as it is the most up-to-date listing of currently operational law enforcement agencies, accounting for new agencies, agencies that have ceased operations, and agencies that have been merged into other agencies. The initial sample will be 5,063 agencies. Given the anticipated response rates detailed in Table 2, we expect to generate 3,122 survey responses. The 2016 BWCSS will select with certainty all agencies that reported 100 or more sworn officers in the 2014 CSLLEA. Among the remaining non-self-representing local police and sheriff agencies, BJS will allocate agencies to 12 non-self-representing strata based on the proportions set out in Table 3.


  3. Methods to Maximize Response Rates


The previous waves of the LEMAS survey have achieved high rates of survey response, typically meeting or exceeding 90%. BJS and RTI will undertake various procedures to ensure that response rates for the BWCSS are as high as possible.


BJS will use a web-based instrument supported by various online help functions to maximize response rates. For convenience, respondents will receive the survey link in both an e-mail invitation and a mailed hard-copy invitation. A help desk will be available to provide both substantive and technical assistance. In addition, the web-survey interface is user friendly, which encourages more accurate responses. Because online submission is such an important response method, close attention will be paid to the formatting of the Internet survey instrument. The instrument is also flexible, so it can adapt to multiple device types (e.g., desktop computer and tablet), browser types (e.g., Internet Explorer and Google Chrome), and screen sizes. Other features in the instrument include the following:


  • Respondents’ answers will be saved automatically, and they will have the option to leave the survey partway through and return later to finish.

  • The online instrument will be programmed with data consistency checks and automatic prompts to ensure inter-item consistency and reduce the likelihood of “don’t know” and out-of-range responses.

  • When a respondent enters a response that appears to be out of range for that agency or question, a prompt will appear on the screen instructing the user to double-check the response (see the sketch following this list).

  • Upon submission, respondents will receive a message that confirms receipt of their survey.
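To make the consistency-check logic concrete, here is a minimal sketch of a soft range check and an inter-item check. The field names, limits, and rules are hypothetical illustrations, not the actual BWCSS instrument logic.

    # Hypothetical soft validation checks of the kind described above;
    # field names and ranges are illustrative only.
    def validate(responses):
        prompts = []
        sworn = responses.get("full_time_sworn")
        cameras = responses.get("bwc_deployed")
        # Range check: flag implausible officer counts for review.
        if sworn is not None and not (0 <= sworn <= 50000):
            prompts.append("Full-time sworn officers looks out of range; "
                           "please double-check.")
        # Inter-item consistency: deployed cameras should not exceed officers.
        if sworn is not None and cameras is not None and cameras > sworn:
            prompts.append("Deployed cameras exceed sworn officers; "
                           "please confirm both entries.")
        return prompts

    print(validate({"full_time_sworn": 120, "bwc_deployed": 150}))
    # -> one prompt asking the respondent to confirm both entries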


In order to obtain higher response rates, multi-stage survey administration and follow-up procedures have been incorporated into BJS's response plans. Ensuring adequate response (not just agency response rates, but also item response) begins with introducing respondent agencies to the survey. This will be accomplished initially through the lead letter sent to each agency. The letter will include a description of the survey, its background, and the reason the survey is being conducted (see Attachment 1). Resources available to help the respondent complete the survey (e.g., phone- or e-mail-based support) will be described in detail.

The data collection schedule is designed to include several follow-up communications to allow LEAs to complete the survey at a time most convenient for them. A month after the initial invitation, an e-mail will be sent to all LEAs. This e-mail will serve as a thank-you message to those respondents who have already submitted their agency's survey and as a reminder for those agencies that have not yet submitted their information (see Attachment 4). Following this thank-you/reminder message, the first nonresponse message will be sent via e-mail to any to-date nonrespondents, asking them once again to complete the web survey. This communication will address potential reasons for nonresponse (see Attachment 5). If no survey response is received after the first nonresponse message, a second nonresponse message will be sent via USPS to to-date nonrespondents. Since communication will have relied on e-mail up to this point (aside from the lead letter), this USPS letter will address the fact that earlier e-mails may have been blocked or unread. Like previous communications, it will provide information on how to complete the web survey, including the URL and the LEA's unique survey access code (see Attachment 6); however, this communication will also include a hard copy of the survey and a business reply envelope to facilitate completion via mail.


Following these written communications, telephone calls will begin with the to-date nonrespondents (see Attachment 7). In preparation for this outreach, agency liaisons (AL) will be trained on study protocol and procedures for contacting nonresponding agencies. Most notably, ALs will be trained on asking agencies to complete the web survey, tracking cases (including contact attempts), and administering the web survey over the phone. After nonresponse telephone calls, ALs will make targeted attempts with nonresponding agencies to capture critical items.


One week before data collection ends, a final written contact with the POC will occur via an end-of-study message. This message will go to any to-date nonrespondents to announce the forthcoming closure of the study and make a final appeal to participate (see Attachment 8).


Nonresponse Adjustments


Unit nonresponse. With any survey, it is typically the case that some of the selected subjects will not respond to the survey request (i.e., unit nonresponse) and some will not respond to particular questions (i.e., item nonresponse), despite best efforts to collect all the data. Weighting will be used to adjust for unit nonresponse in the BWCSS. The CSLLEA provides variables that may be related to both nonresponse and the use of body-worn cameras. To determine which factors to use in the agency nonresponse weight adjustments, a procedure available in RTI's SUDAAN software based on the Generalized Exponential Model (GEM) will be used to model the response propensity within sampling strata using information from the sampling frame (e.g., agency characteristics such as geography, operating budget, and whether officers arrest people).7 Ideally, only variables highly correlated with the outcomes of interest will be included in the model in order to reduce the potential for bias. As described above, given the expected differential response rates by agency type and size, the weighting adjustment procedures will attempt to minimize the bias in the estimates within these domains.
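As a rough illustration of propensity-based weight adjustment, the sketch below substitutes an ordinary logistic regression for SUDAAN's GEM procedure (which is not reproduced here); the frame variables, counts, and seed are all hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Simplified stand-in for a GEM-style adjustment: model response
    # propensity from frame variables, then divide base weights by the
    # predicted propensity for responding agencies.
    rng = np.random.default_rng(7)
    n = 500
    X = np.column_stack([
        rng.integers(1, 8, n),   # agency size category (1-7), hypothetical
        rng.integers(1, 5, n),   # census region, hypothetical
    ])
    responded = rng.random(n) < 0.62        # ~62% overall response rate
    base_weight = np.full(n, 16594 / 5063)  # frame total / sample size

    propensity = (LogisticRegression()
                  .fit(X, responded)
                  .predict_proba(X)[:, 1])
    adjusted = np.where(responded, base_weight / propensity, 0.0)

    # Respondent weights now sum approximately to the full-sample total.
    print(round(adjusted.sum()), round(base_weight.sum()))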


Nonresponse bias analysis. As previously stated, we assume a response rate of approximately 62% overall. To ensure that agencies that do not participate in the study are not fundamentally different from those that do, a nonresponse bias analysis will be conducted if the agency-level response rate obtained in the 2016 BWCSS falls below 80%. The following administrative data on agency characteristics will be used in the nonresponse bias analysis:

  • Agency type,

  • Agency size, and

  • Census region or division.


For each agency characteristic, BJS will compare the distribution of the respondents to that of the nonrespondents. A Cohen's effect size statistic will be calculated for each characteristic. If any characteristic has an effect size that falls into the "medium" or "large" category, as defined by Cohen, then there is potential for bias in the estimates. Each such characteristic will be included in the nonresponse model used to adjust weights and minimize the potential for bias in the estimates.
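One common effect-size measure for comparing the respondent and nonrespondent shares in a given category is Cohen's h; the source does not specify which variant BJS will use, so the sketch below is illustrative only, with hypothetical proportions.

    import math

    # Cohen's h for two proportions. Conventional thresholds are roughly
    # 0.2 (small), 0.5 (medium), and 0.8 (large).
    def cohens_h(p1, p2):
        return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

    respondent_share = 0.18     # hypothetical share of respondents in "100+"
    nonrespondent_share = 0.09  # hypothetical share of nonrespondents in "100+"
    print(f"Cohen's h = {cohens_h(respondent_share, nonrespondent_share):.2f}")
    # -> 0.27, a small-to-medium difference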

In addition to estimating effect sizes, an examination of early and late responders will be conducted. If late responders (i.e., those requiring more contact attempts before responding) are less likely to use BWCs, that is an indication of potential bias in the key outcome of interest; that is, nonresponders are likely even less likely to use BWCs than those that responded to the survey. BJS will conduct this comparison within each stratum to determine whether the potential for bias varies by stratum.
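One simple way to operationalize the early-versus-late comparison is a two-proportion z-test on BWC use; the test choice is ours for illustration, not specified in the source, and the counts below are hypothetical.

    import math

    # Two-proportion z-test: BWC use among early vs. late responders.
    def two_prop_ztest(x1, n1, x2, n2):
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Hypothetical counts: 400 of 1,100 early responders use BWCs vs.
    # 120 of 450 late responders.
    z = two_prop_ztest(400, 1100, 120, 450)
    print(f"z = {z:.2f}")  # -> 3.67; |z| > 1.96 suggests a real difference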


  4. Final Testing of Procedures


The 2016 BWCSS will maintain similar respondent recruitment and support procedures as previous LEMAS surveys and supplements, which have been field tested and successfully employed through the prior eight waves of the LEMAS program.


An expert panel was consulted to develop the BWCSS instrument. The experts included individuals in leadership positions at LEAs; representatives from key practitioner organizations, including the International Association of Chiefs of Police and the National Sheriffs' Association; the Community Oriented Policing Services (COPS) Office; and experts from the academic field.


Pilot testing for the BWCSS involved a thorough testing of the survey instrument, which was developed by RTI International and the Police Executive Research Forum (PERF) with support from BJS and a panel of expert consultants. The goal of the pilot testing was to ensure that the instrument's form and content were understandable and appropriate. Pilot testing was initially conducted with project team members to ensure system functionality and to measure clarity and appropriateness. Then, a total of nine respondents representing different LEAs and data consumers were selected to participate in the pilot testing: Seattle (WA) Police Department, Elk Grove (CA) Police Department, Henrico County (VA) Police Department, Fort Worth (TX) Police Department, Houston (TX) Police Department, King County (WA) Police Department, the Police Foundation, the COPS Office, and the Commission on Accreditation for Law Enforcement Agencies.


These nine pilot testers were selected to represent the range of LEAs that will be sampled for the BWCSS and other data consumers likely to use the information provided by the BWCSS for research and policymaking purposes. They vary in terms of agency size and whether the agency is known to use body-worn cameras.


The instrument was sent to respondents with instructions to complete the survey just as they would if they received the survey as part of the regular sample of agencies. Testers were asked to take note of any aspects of the instrument that were unclear, any questions or topics that were omitted, or any answer choices or response categories that were missing, and to mark these comments directly on the survey instrument. Completed surveys were ultimately received from the following entities: Seattle (WA) Police Department, Elk Grove (CA) Police Department, Henrico County (VA) Police Department, Fort Worth (TX) Police Department, Houston (TX) Police Department, the Police Foundation, the COPS Office, and the Commission on Accreditation for Law Enforcement Agencies.


As a result of the pilot testing, several items on the survey were modified. For the item asking which organizations or stakeholders were involved in various aspects of the technology's acquisition or implementation, response options for practitioner organization guidance and consultation with other LEAs were added. Because of recent public attention on the use of officer-worn cameras, the introductory language of the survey was modified to clearly state the purposes of the survey and the uses of the data collected through it. For questions asking about the implementation status of the cameras, a response option of "in pilot testing" was added. An option of "don't know or unsure" was added to all categorical response questions. Finally, wording on several items was updated slightly to improve clarity and comprehension.


Finally, BJS consulted with other organizations currently assessing the prevalence of BWC use among state and local law enforcement agencies. In particular, PERF, with support from the Arnold Foundation, recently conducted a survey of BWC use, and advised BJS on expected response rates and item non-response based on their experience. That information was incorporated into the final instrument content (see Attachment 9) and sample design.


  5. Contacts for Statistical Aspects and Data Collection


  1. BJS contacts include

  • Shelley Hyland

202-305-5552

[email protected]


  • Alexia Cooper

202-307-0582

[email protected]


  2. Persons consulted on statistical methodology:

  • Marcus Berzofsky, RTI International


  3. Persons consulted on data collection and analysis:

  • Chris Ellis, RTI International

  • Travis Taniguchi, RTI International

  • Alissa Chambers, RTI International

  • Allen Beck, BJS

Attachments:

  1. Introductory letter

  2. Confidentiality Assurances

  3. Email invitation to complete survey

  4. One-month thank you/ reminder message

  5. First non-response follow-up email

  6. Second non-response follow-up letter sent via USPS

  7. Protocol for telephone non-response follow-up

  8. End of survey message

  9. BWCSS instrument

  10. 60-day ICR notice

  11. 30-day ICR notice

1 In order to facilitate comparison with past LEMAS surveys, the BWCSS will include in the certainty stratum any LEA that has had 100 or more officers in any LEMAS since 1990.

2 This estimate is based on the 2013 Local Police Departments Equipment and Technology Survey (NCJ 248767).

3 Planty, M. (2014). Using NCVS for Subnational Estimates of Victimization: A BJS Update. Washington, DC: BJS.

4 Klein, R., Proctor, S., Boudreault, M., & Turczyn, K. (2002). Healthy People 2010 Criteria for Data Suppression. Washington, DC: CDC.

5 Valliant, R., Dever, J. A., & Kreuter, F. (2013). Practical tools for designing and weighting survey samples. New York: Springer.

6 Our prevalence estimate is based on a 2014 survey of law enforcement agencies conducted by RTI that found 33% of agencies had used BWC technology in the last two years. This publication is currently being peer-reviewed by the NIJ and will be available in early 2016.

7 Folsom, R.E., & Singh, A.C. (2000). The Generalized Model for Sampling Weight Calibration for Extreme Values, Nonresponse, and Poststratification. In Proceedings of the American Statistical Association’s Survey Research Methods Section, 598-603.


