2023 Law Enforcement Management and Administrative Statistics (LEMAS) Supplement Survey: Post-Academy Training and Officer Wellness (PATOW)

OMB: 1121-0379

Part B. Collection of Information Employing Statistical Methods


  1. Universe and Respondent Selection


The universe for the 2023 Law Enforcement Management and Administrative Statistics Post-Academy Training and Officer Wellness (LEMAS PATOW) survey will consist of all general-purpose law enforcement agencies (LEAs) within the U.S., representing the population of active, publicly funded primary state agencies; sheriffs’ offices; and municipal, county, or regional police departments. These are distinct from special-purpose agencies, sheriffs’ offices with jail and court duties only, and federal LEAs. The target population for the 2023 LEMAS PATOW survey is all general-purpose agencies that employ at least one full-time equivalent (FTE) sworn officer.1


The survey universe is based on the Law Enforcement Agency Roster (LEAR) that RTI maintains for the Bureau of Justice Statistics (BJS). The LEAR was first developed in 2016 and has since been updated with data from public membership listings and other BJS data collections, including the 2016 LEMAS core, 2016 LEMAS Body-Worn Camera Supplement, 2018 Census of State and Local Law Enforcement Agencies (CSLLEA), 2020 LEMAS Core, and 2021 Survey of Campus Law Enforcement Agencies (SCLEA). The LEAR will also incorporate updates from the 2022 CSLLEA, which is currently in the field. These data collections provided updates on agencies’ in-service and in-scope status and current contact information for a survey point of contact and agency head.


RTI also developed the Agency Record Management System (ARMS) as a portal to identify, validate, and upload changes to agency records in the LEAR. The ARMS also includes functionality that allows BJS to continuously update contact information.


Elimination of Out-of-Scope Agencies for the 2023 LEMAS PATOW Survey. A survey sample of 3,500 general-purpose agencies with at least one FTE sworn officer will be selected from the universe. The following actions have been taken to reduce the possibility of selecting out-of-scope agencies from the LEAR:


  1. Non-publicly funded LEAs (e.g., those serving private universities) were vetted to reduce out-of-scope agency participation.

  2. Publicly available information was reviewed to identify agencies that lack general sworn law enforcement authority.

  3. Agency size was checked across a variety of sources, including the 2020 LEMAS core and prior waves of the CSLLEA. Agencies will also be checked against the 2022 CSLLEA to minimize the number of agencies in the frame with less than one FTE sworn officer.


RTI will extract all cases from the current LEAR that meet the criteria described above as the first step in frame construction for the 2023 LEMAS PATOW survey. Approximately 16,000 general-purpose agencies eligible for PATOW currently exist in the LEAR. Additionally, any agencies that are out of service, contract with another agency, or do not have police operations will be defined as out of scope.


The LEAR is routinely updated to reflect the changing landscape of LEAs in operation. Data from the ongoing 2022 CSLLEA will be the primary source to identify ineligibles and will be merged into the LEAR to ensure that it contains the most accurate and timely data available before the LEMAS PATOW sample is created.


RTI will also review the addresses, phone numbers, and agency head names of cases in the preliminary frame to identify possible duplicates. Any matches will be reviewed to distinguish true duplicates from non-matches with similar characteristics. For example, small police departments and county sheriff’s offices are often located at the same address. Confirmed duplicates will then be dropped from the frame.
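
To make the matching step concrete, the sketch below (in Python, with pandas) flags review candidates that share an address, phone number, or agency head name. All identifiers and column names are hypothetical, not the actual LEAR schema.

```python
import pandas as pd

# Hypothetical frame extract; column names are illustrative, not the LEAR schema.
frame = pd.DataFrame({
    "agency_id":   [101, 102, 103, 104],
    "agency_name": ["Smithville Police Dept.", "Smithville PD",
                    "Jones County SO", "Jonesville PD"],
    "address":     ["10 Main St", "10 Main St", "22 Court Sq", "22 Court Sq"],
    "phone":       ["555-0100", "555-0100", "555-0199", "555-0142"],
    "agency_head": ["J. Doe", "J. Doe", "A. Smith", "B. Lee"],
})

# Flag records that share an address, phone number, or agency head with another
# record. Flags are review candidates only: here 103 and 104 share a building
# but are distinct agencies, while 101 and 102 are likely one agency listed twice.
for field in ["address", "phone", "agency_head"]:
    frame[f"dup_{field}"] = frame.duplicated(subset=field, keep=False)
print(frame[frame[["dup_address", "dup_phone", "dup_agency_head"]].any(axis=1)])
```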


Status issues (i.e., whether the agency is still an operational agency), potential duplicates, and other eligibility issues that are not easily resolved will be investigated by reviewing information from previous BJS data collections and through online searches. Research will be done online using publicly sourced information. Sources will include (1) records from prior surveys, (2) law enforcement and government webpages, (3) city-, county-, or organization-level budgets, (4) official governmental reports, and (5) news reports/articles from reputable sources. All updated data will be entered in the ARMS and then tracked and approved through the standard verification process before becoming part of LEAR.


The LEMAS PATOW will use the same sampling methodology as the LEMAS core: a stratified simple random sample design in which LEAs are stratified by agency type and agency size. Agency type has three categories: (1) local and county police departments, (2) sheriffs’ offices, and (3) primary state police. To obtain a representative sample of all agency sizes, agency type is further stratified by agency size, except for state police. Agency size is split into seven categories: (1) 1-1.5 FTEs, (2) 2-4.5 FTEs, (3) 5-9.5 FTEs, (4) 10-24.5 FTEs, (5) 25-49.5 FTEs, (6) 50-99.5 FTEs, and (7) 100 or more FTE sworn personnel.


All primary state police agencies (N=49)2 and agencies with 100 or more full-time equivalent sworn officers (approximately N=1,030) will be selected with certainty. All other agencies will be placed into non–self-representing (NSR) strata defined by agency type and number of officers. The sample will be selected such that the anticipated respondents are proportionally allocated across NSR strata, which will result in more cases allocated to strata with historically lower response rates.


Starting with the 2020 LEMAS core, RTI developed a new strategy to reduce burden on smaller agencies over time. NSR agencies now have a low probability of being selected in more than one of the next five waves of LEMAS core or supplement administrations. NSR LEAs were assigned a permanent random number (PRN) and sorted by PRN within strata. The PRN is a random number selected uniformly between 0 and 1. After sorting the frame by PRN, the first n_h agencies in each stratum were selected for the 2020 LEMAS, where n_h is the sample size for stratum h.
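
A minimal sketch of the PRN assignment and the 2020 selection rule, assuming a simple frame table with illustrative strata and sample sizes:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2020)

# Hypothetical NSR frame: one row per agency with its stratum assignment.
frame = pd.DataFrame({
    "agency_id": range(1, 21),
    "stratum": ["local_2-4.5"] * 10 + ["sheriff_2-4.5"] * 10,
})

# Assign each agency a permanent random number drawn uniformly on (0, 1).
frame["prn"] = rng.uniform(size=len(frame))

# 2020 selection: sort by PRN within stratum and keep the first n_h agencies.
n_h = {"local_2-4.5": 3, "sheriff_2-4.5": 2}  # illustrative stratum sample sizes
sample_2020 = (frame.sort_values(["stratum", "prn"])
                    .groupby("stratum", group_keys=False)
                    .apply(lambda g: g.head(n_h[g.name])))
```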


For the 2023 LEMAS PATOW survey, the following steps will be done to select the NSR sample:

  1. Any recently identified new agencies will be added to the frame and be assigned a PRN.

  2. Closed or otherwise ineligible agencies will be removed from the frame.

  3. If needed, strata assignments will be updated based on new size information.

  4. A new sample will be selected by beginning after the maximum PRN from the 2020 LEMAS core in each stratum and selecting the next n_h agencies in the stratum, where n_h is the sample size for the stratum (see the sketch after this list).

  5. The sampling weight is calculated as w_h = N_h / n_h, where N_h is the population size of the stratum at the time of sampling.
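
Continuing the sketch above, steps 4 and 5 might look like the following. Wrapping around past the end of the (0, 1) interval is a common PRN convention assumed here; it is not stated in the text.

```python
import pandas as pd  # continues the sketch above (frame, n_h, sample_2020)

def select_next_wave(frame, prior_max_prn, n_h):
    """Select the next n_h agencies per stratum, starting just past the largest
    PRN used in the prior wave and wrapping around to 0 if needed (assumed)."""
    picks = []
    for stratum, g in frame.groupby("stratum"):
        g = g.sort_values("prn")
        start = prior_max_prn[stratum]
        ordered = pd.concat([g[g["prn"] > start], g[g["prn"] <= start]])
        picks.append(ordered.head(n_h[stratum]))
    return pd.concat(picks)

prior_max = sample_2020.groupby("stratum")["prn"].max().to_dict()
sample_2023 = select_next_wave(frame, prior_max, n_h)

# Step 5: base weight w_h = N_h / n_h, with N_h the stratum population size
# at the time of sampling.
N_h = frame["stratum"].value_counts().to_dict()
sample_2023 = sample_2023.assign(
    weight=lambda d: d["stratum"].map(lambda s: N_h[s] / n_h[s]))
```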


The LEMAS has traditionally experienced a high response rate; the 2020 LEMAS had an overall response rate of 78.4 percent. However, as seen in Table 1, the response rate varied by agency type and agency size. As a result, we will assume response rates that differ by agency type and size. Lower response rates are assumed for sheriffs’ offices and smaller agencies; these assumed rates are presented in Table 2.


Table 1. 2020 LEMAS survey response rates, by agency type and size

Agency Type         Agency Size^a    Sample Size    Response Rate (%)
Local Police        100+             669            88.5
                    50-99.5          147            83.7
                    25-49.5          290            80.3
                    10-24.5          573            80.7
                    5-9.5            499            77.2
                    2-4.5            346            66.7
                    1-1.5            107            66.0
Sheriff’s Office    100+             369            75.5
                    50-99.5          67             60.0
                    25-49.5          111            67.0
                    10-24.5          162            76.1
                    5-9.5            80             71.0
                    2-4.5            28             58.6
                    1-1.5            3              N/A*
State               All              49             98.0

a Number of full-time equivalent sworn officers.

*Note: All three sampled agencies were found to be ineligible during 2020 LEMAS data collection.



Table 2. Assumed response rates for the 2023 LEMAS PATOW, by agency type and self-representation status

Agency Type         Self-Representation Status    Response Rate
Local Police        Self-representing             90%
                    Non-self-representing^a       80%
Sheriff’s Office    Self-representing             80%
                    Non-self-representing^a       74%
State Police        Self-representing             90%
All agencies                                      81%

a Non-self-representing agencies are those with fewer than 100 FTE sworn officers.


The sample size allocation, proportional to the number of agencies per stratum, is presented in Table 3. Based on the response rate assumptions, the design calls for a sample of 3,500 agencies, with 2,850 completed questionnaires expected.


Table 3. Sample size allocation based on the proportion to number of agencies by stratum, 2023 LEMAS PATOW

Agency Type         Agency Size^a    Sample Size    Expected Respondents
Local Police        100+             669            602
                    50-99.5          148            118
                    25-49.5          287            230
                    10-24.5          565            454
                    5-9.5            494            396
                    2-4.5            343            275
                    1-1.5            108            86
Sheriff’s Office    100+             369            295
                    50-99.5          71             53
                    25-49.5          113            85
                    10-24.5          163            122
                    5-9.5            83             62
                    2-4.5            31             23
                    1-1.5            7              5
State Police        All              49             44
Total                                3,500          2,850

a Number of full-time equivalent officers.



Sampling Error

Relative standard error (RSE) is used as the measure of precision in the sampling design. The RSE is the ratio of an estimate’s standard error to the estimate itself, making it a standardized measure of precision regardless of the magnitude of the estimate and useful for comparing different populations. We examined the RSE in all analytic domains for the prevalence of agencies having state-mandated in-service training in the 2020 LEMAS (Table 4). While there are no absolute limits on acceptable RSE values, a lower RSE is generally desirable; as Table 4 shows, all domain RSEs were below 1.2%.
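
In symbols, using the standard definition, where \hat{\theta} is the survey estimate and SE(\hat{\theta}) is its standard error:

```latex
\mathrm{RSE}(\hat{\theta}) = \frac{\mathrm{SE}(\hat{\theta})}{\hat{\theta}} \times 100\%
```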


Table 4. RSE of state-mandated in-service training, by analytic domain, 2020 LEMAS

National    Local Police SR    Sheriff SR    State*    Local Police NSR    Sheriff NSR
0.48%       0.57%              0.73%         0.00%     0.58%               1.14%

*Note: All state agencies had state-mandated in-service training hours.



  2. Procedures for Collecting Information


Data Collection Procedures

The LEMAS PATOW survey data collection effort will mirror the protocol used for the 2020 LEMAS core with some enhancements (see Table 5). Data collection will begin with a survey prenotification letter mailed via United States Postal Service (USPS) to the point of contact (POC) for each LEA to inform them about the survey (Attachment 6). This letter will be signed by the Director of BJS and will explain the purpose and significance of the survey. Instructions for changing the POC and contact information for the study Help Desk will be included. One week after the prenotification mailing, the same information will be emailed to the agency heads (Attachment 7). Also included with the prenotification will be a letter of support signed by supporting organizations and stakeholders. We will seek support from major law enforcement organizations, such as the International Association of Chiefs of Police (IACP), National Sheriffs’ Association (NSA), Small and Rural Law Enforcement Executives Association (SRLEEA), and the International Association of Directors of Law Enforcement Standards and Training (IADLEST) (Attachment 8).


One week after that email, survey invitation packets will be sent via USPS. The survey invitation letter (Attachment 9) will announce the survey launch and provide the survey web address, log-in credentials, and information about the survey topics so that the agency can identify the most appropriate respondent. The letter will also provide a toll-free telephone number and a project-specific email address for the survey Help Desk, should the agency have any questions. Instructions for changing the POC via the survey website, fax, or telephone will be included, in the event the LEA needs to assign a more appropriate person. Included with the cover letter will be an informational flyer (Attachment 10) describing the overall BJS law enforcement survey program and how the LEMAS PATOW survey relates to the LEMAS core and other BJS collections. One-third of sampled agencies will also receive a Pre-Survey Worksheet as part of an experiment (Attachment 11; see Survey Invitation Experiment below).


After the invitation letter mailing, non-responding agencies will receive several reminders from RTI on behalf of BJS (Attachments 12-18). Agencies that have not completed a survey will also receive an end-of-study notification (Attachment 19). Throughout the data collection period, respondents will receive a thank-you letter (Attachment 20) once they have completed the survey. The letter will formally acknowledge receipt of the survey and state that the agency may be contacted for clarification once its survey responses are processed.




Table 5. Outreach schedule for 2023 LEMAS PATOW

Survey Communication                              Project Week                     Mode         Attachment
Pre-notifications                                 1-2 weeks prior to invitation    Mail, Email  6, 7
Letter of support                                 1 week prior to invitation       Email        8
Invitation and Informational Flyer                1                                Mail         9, 10, 11
  (and Experimental Worksheet, when applicable)
First Reminder                                    3                                Email        12
Second Reminder                                   4                                Mail         13
Third Reminder                                    7                                Email        14
Fourth Reminder                                   8                                Mail         15
Prompting Calls                                   11-16                            Phone        16
Fifth Reminder                                    19                               Mail         17
Sixth Reminder                                    22                               Email        18
End of Study Notice                               23                               Mail         19
Thank You Notice                                  2-25                             Mail         20



The 2023 LEMAS PATOW survey will employ a multi-mode approach that relies primarily on web-based data collection, with hardcopy surveys provided in reminder outreach, in line with standard web-first protocols. When data collection for the 2020 LEMAS core ended, the overall response rate was 78.4% (about 2,700 agencies). Of these, 2,380 agencies (88%) responded via the web. Given the increased web capabilities of LEAs and the project’s strong encouragement to use the web-based data collection tool, BJS expects that most agencies responding to the 2023 LEMAS PATOW survey will use the web option.


Survey Invitation Experiment

Data collected through law enforcement surveys often exhibit quality issues such as inaccurate reporting, item nonresponse, and error in estimates of operational budgets. A critical challenge in maintaining data quality and integrity for future BJS law enforcement surveys is to better prepare and guide respondents in answering questions that require the review of administrative records.


Previous research has demonstrated that providing respondents with a brief pre-notification letter containing basic information about the survey purpose and the survey questions can improve data quality.3,4 This effect has been observed across different pre-notification modes5 and has even been associated with a decrease in item nonresponse.6 However, the literature provides limited evidence on the effectiveness of providing a data worksheet prior to survey administration, which would allow the respondent to consult with colleagues and collect administrative data in advance, with the aim of improving accuracy on specific numeric survey questions (e.g., operating budgets, number of sworn officers). Although LEMAS PATOW will provide a paper version of the survey for download prior to online data entry, this remains a passive approach, and a paper survey is not mailed to nonrespondents until a later survey reminder. It is unclear how many respondents download or use the paper survey in advance, or whether they use it to collect administrative data before ultimately submitting the survey by web. Given the demonstrated benefits of a pre-notification letter and the potential benefit of an advance data worksheet, it is warranted to investigate whether actively providing an advance worksheet would enhance data quality in surveys with burdensome administrative questions.


Currently, several data quality measures are in place on studies such as the CSLLEA and CLETA to address anticipated problem items such as the agency’s fiscal-year operational budget. For example, one data quality check incorporates a survey prompt triggered when a value departs by more than ±20% from the data the agency previously provided; this alerts the respondent to the expected range based on their prior submission. A similar data quality flag has been used during data quality follow-up calls with agencies that responded on a paper instrument. Another measure is additional instructions directing respondents to exclude major costs that could inflate the budget value, such as construction or equipment purchases. Respondents are also instructed to report data for the fiscal year that includes the reference date, since different entities use different fiscal-year calendars. Finally, respondents may provide an estimate and mark a box to indicate that the value was estimated. However, providing agencies in advance with the questions requiring numeric responses, such as the fiscal-year budget item, gives them an opportunity to consult with colleagues and review administrative records, which should lead to better data quality.
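
As an illustration of the ±20% threshold prompt, a minimal sketch follows; the wording and exact behavior of the production instrument may differ.

```python
from typing import Optional

def budget_prompt(current: float, prior: float, tol: float = 0.20) -> Optional[str]:
    """Soft-edit check: prompt the respondent when the reported budget departs
    from the agency's previous submission by more than +/-20%. Wording and
    behavior are illustrative, not the production instrument's."""
    if prior <= 0:
        return None  # no usable prior value to compare against
    change = (current - prior) / prior
    if abs(change) <= tol:
        return None
    low, high = prior * (1 - tol), prior * (1 + tol)
    return (f"The value entered differs from your agency's previous report by "
            f"{change:+.0%}. A value between {low:,.0f} and {high:,.0f} was "
            f"expected; please confirm or correct your entry.")

print(budget_prompt(1_500_000, 1_000_000))  # +50% change triggers the prompt
```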


Experimental design. BJS plans to conduct an experiment to formally test the effectiveness of an advance data worksheet in the LEMAS PATOW, with the aim of better directing and preparing respondents to answer critical data points such as the fiscal year operating and training budgets, numeric questions on training hours, and questions on staffing.


The proposed experiment divides the overall sample of 3,500 agencies approximately equally among the following three groups during the upcoming administration of the LEMAS PATOW:

  • Group 1 – Control group conducted in accordance with the standard PATOW data collection procedures

  • Group 2 – Experimental group which would be provided with the paper instrument at the time of the initial invitation

  • Group 3 – Experimental group which would be sent the advance data worksheet at the time of the initial invitation

All agencies will receive an initial invitation by mail. Group 1 is the standard approach where respondents are asked to submit data online, but they are informed that a paper version of the survey is available to download from the website or that one can be sent to them upon request. Groups 2 and 3 are asked to submit data online in the initial invitation, but a paper instrument (Group 2) or worksheet (Group 3) is included in their initial survey invitation envelope. The worksheet includes all survey items related to operating budget and other survey items requesting numeric data (e.g., number of staff, number of hours) in the LEMAS PATOW (Attachment 11).
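
The sketch below illustrates one way the three-way split in Table 6 could be produced: shuffle agencies within each agency-type-by-size cell and deal them into groups 1-3 in turn. The tie-breaking convention (earlier groups absorb remainders, matching Group 1’s slightly larger count in Table 6) is an assumption for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2023)

def assign_groups(sample: pd.DataFrame) -> pd.DataFrame:
    """Randomly split each agency-type-by-size cell into three near-equal
    groups; cells not divisible by 3 give their extra cases to the earlier
    groups (an illustrative convention, not necessarily the actual rule)."""
    def split(cell: pd.DataFrame) -> pd.DataFrame:
        shuffled = cell.sample(frac=1, random_state=rng)           # shuffle within cell
        return shuffled.assign(group=np.arange(len(shuffled)) % 3 + 1)  # deal 1, 2, 3
    return sample.groupby(["agency_type", "size_class"], group_keys=False).apply(split)

# Demo with a small hypothetical sampled frame.
demo = pd.DataFrame({"agency_id": range(100),
                     "agency_type": ["local"] * 70 + ["sheriff"] * 30,
                     "size_class": (["large"] * 35 + ["non_large"] * 35
                                    + ["large"] * 15 + ["non_large"] * 15)})
print(assign_groups(demo)["group"].value_counts())
```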


Table 6. Sample size allocation by agency type, size, and group, 2023 LEMAS PATOW

Agency Type            Size (FTE) from Frame      Total Sample Size   Group 1 (Control)   Group 2 (Paper Instrument)   Group 3 (Worksheet)
Local Police           All                        2,614               872                 871                          871
                       Large (FTE > 99.9)         669                 223                 223                          223
                       Non-Large (FTE <= 99.9)    1,945               649                 648                          648
Sheriff’s Office       All                        837                 279                 279                          279
                       Large (FTE > 99.9)         369                 123                 123                          123
                       Non-Large (FTE <= 99.9)    468                 156                 156                          156
Primary State Police   All                        49                  17                  16                           16
                       Large (FTE > 99.9)         49                  17                  16                           16
                       Non-Large (FTE <= 99.9)    0                   0                   0                            0
All Agency Types       All                        3,500               1,168               1,166                        1,166
                       Large (FTE > 99.9)         1,087               363                 362                          362
                       Non-Large (FTE <= 99.9)    2,413               805                 804                          804

The experimental analysis will compare the following measures between groups: days to submit the survey; data quality for the survey items that also appear on the worksheet (item missingness, number of edit failures on the web, and breakoff status); and time in minutes to complete the web survey. At the end of the survey, web respondents will be asked whether they reviewed or used the hardcopy instrument or worksheet and whether they found it helpful in preparing responses. While all agencies will be part of the experiment, only agencies that respond before the hardcopy survey is sent with the fourth reminder (week 8) will be included in this analysis.
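
As a sketch of one such between-group comparison, the following tests whether item missingness on the worksheet items differs across the three groups; the counts are invented for illustration.

```python
import numpy as np
from scipy import stats

# Invented counts: for each group, worksheet items missing vs. answered.
#                   Group 1  Group 2  Group 3
missing  = np.array([120,     95,      80])
answered = np.array([880,    905,     920])

# Chi-square test of independence: does missingness differ by group?
chi2, p, dof, _ = stats.chi2_contingency(np.column_stack([missing, answered]))
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```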


Data Processing

Upon receipt of a survey (web or hardcopy), data will be reviewed and edited, and, if needed, the respondent will be contacted to clarify answers or provide missing information. Respondents who submit via the web will be prompted with real-time validation checks when entering missing or inconsistent data. Any unresolved items that remain after submission will prompt RTI staff to recontact the respondent to attempt to resolve them.


The hardcopy survey will be developed and keyed using TeleForm, which will allow the surveys to be scanned and the data read directly into the same database containing the web survey data. This will ensure that the same post-collection data quality review procedures, which mirror and expand upon the web validation checks, are applied to all survey data, regardless of response mode. The following is a summary of the data quality assurance steps that RTI will take during the data collection and processing period:

Data Editing. RTI will attempt to reconcile missing or erroneous data through automated and manual edits of each questionnaire. In collaboration with BJS, RTI will develop a list of edits that can be completed by referring to other data provided by the respondent on the survey instrument. For example, if a screening question was left blank, but the follow-up questions were completed, a manual edit could be made to indicate the intended positive response to the screening question. Through this process, RTI can quickly identify which hardcopy cases require follow-up and indicate the items that need clarification or retrieval from the respondent.
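
A minimal sketch of the screener backfill edit described above; the item names and log format are hypothetical.

```python
def edit_screener(record: dict) -> dict:
    """If a yes/no screening item is blank but its follow-ups were answered,
    backfill the implied 'yes' and log the edit. Item names are hypothetical."""
    followups = ["q2a_training_hours", "q2b_training_topics"]
    if record.get("q2_screener") is None and any(
            record.get(item) is not None for item in followups):
        record["q2_screener"] = "yes"
        record.setdefault("edit_log", []).append(
            "q2_screener backfilled to 'yes' from completed follow-ups")
    return record

print(edit_screener({"q2_screener": None, "q2a_training_hours": 40,
                     "q2b_training_topics": 3}))
```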

Data Retrieval. When it is determined that data retrieval is needed, an Agency Liaison will contact the respondent for clarification. Throughout the data retrieval process, RTI will document the questions needing retrieval (e.g., missing or inconsistent data elements), request clarification on the provided information, obtain values for missing data elements, and examine any other issues related to the respondent’s submission.

Data Entry. Respondents completing the survey via the web instrument will enter their responses directly into the online instrument. For those respondents returning the survey via hardcopy (mail or fax), data will be scanned once received and determined complete. Once the data have been entered into the database, they will be made available to BJS via an SFTP site. To confirm that editing rules are being followed, RTI will review frequencies for the entered data after the first 10% of cases are received. Any issues will be investigated and resolved. Throughout the remainder of the data collection period, RTI staff will conduct regular data frequency reviews to evaluate the quality and completeness of data captured in both the web and hardcopy modes.


  3. Methods to Maximize Response Rates


Minimizing Nonresponse

The LEMAS core and its supplements have historically achieved high survey response rates, with earlier administrations achieving above an 80% response rate. However, the 2020 LEMAS core achieved a 78% response rate after 8 months in the field. In any instance where a response rate fell below the OMB standard of 80%, a nonresponse bias analysis was conducted to evaluate the level of bias. BJS and RTI will undertake various procedures to maximize the likelihood that the 2023 LEMAS PATOW survey reaches a similar level of participation. For example, RTI will work with BJS to adapt survey outreach protocols based on response patterns and to tailor messages by agency type, similar to what was done for the 2022 CSLLEA.


BJS will use a web-based instrument supported by several online help functions to maximize response rates. For convenience, respondents will receive the survey link in an email invitation and a mailed hardcopy invitation. A Help Desk will be available to provide both substantive and technical assistance. BJS will supply the Help Desk with the survey flyer, which contains answers to frequently asked questions and guidance on additional questions that may arise (Attachment 10). In addition, the web survey interface is user-friendly, which encourages response and ensures more accurate responses. Because online submission is such an important response method, close attention will be paid to the formatting of the web survey instrument. The online application will be flexible so it can adapt to meet the needs of multiple device types (e.g., desktop computer and tablet), browser types (e.g., Microsoft Edge and Google Chrome), and screen sizes.


Other features in the instrument will include the following, each intended to enhance respondent experience and reduce burden, resulting in continued respondent cooperation:


  1. Respondents’ answers will be saved automatically, and they will have the option to leave the survey partway through and return later to finish.

  2. The online instrument will be programmed with data consistency checks and automatic prompts, thereby reducing the amount of item nonresponse.

  3. The online consistency checks will be tailored based on previously reported data or on the type and size of agency, providing the respondent helpful information in resolving any validation errors and reducing the likelihood of breakoff.

  4. LEAs may also download and print a hardcopy survey from the website or request one from the Help Desk.


To obtain higher response rates and to help ensure unbiased estimates, multi-stage survey administration and follow-up procedures have been incorporated into BJS’s response plans. Ensuring adequate response (not just agency-level response rates, but also item response) begins with introducing agencies to the survey. This will be accomplished through the initial invitation letter and accompanying documents (Attachments 9 and 10). Resources available to help the respondent complete the survey (e.g., telephone- and email-based Help Desk support) will be described in detail. We will provide LEAs with online and fax methods to identify respondents and change the POC assignment if needed.


Adjusting for Nonresponse. With any survey, it is typically the case that some of the selected units, in this case LEAs, will not respond to the survey request (i.e., unit nonresponse) and some will not respond to particular questions (i.e., item nonresponse). Weighting will be used to adjust for unit nonresponse in the 2023 LEMAS PATOW survey. To determine which factors to use in the agency nonresponse weight adjustments, a procedure available in RTI’s SUDAAN software based on the Generalized Exponential Model will be used to model the response propensity using information from the LEAR (e.g., agency characteristics such as geography, operating budget, whether officers have arrest powers) within sampling strata.7 SUDAAN is a statistical software package for analyzing complex survey data. Ideally, only variables highly correlated with the outcomes of interest will be included in the model, in order to reduce potential bias. Given the expected differential response rates by agency type and size, the weighting adjustment procedures will attempt to minimize bias in the estimates within these domains.
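
Because SUDAAN’s generalized exponential model procedure is proprietary, the sketch below uses an ordinary logistic response-propensity model to convey the idea of the adjustment. The covariates and data are invented, and this is a stand-in for illustration, not the production procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
# Hypothetical respondent file: responded = 1 if the agency returned a survey.
# The frame covariates (names illustrative) stand in for LEAR characteristics.
df = pd.DataFrame({
    "responded": rng.integers(0, 2, 500),
    "log_budget": rng.normal(13, 1, 500),
    "region": rng.choice(["NE", "MW", "S", "W"], 500),
})

# Model response propensity from frame variables; the nonresponse-adjusted
# weight inflates each respondent's base weight by 1 / predicted propensity.
fit = smf.logit("responded ~ log_budget + C(region)", data=df).fit(disp=0)
df["propensity"] = fit.predict(df)
df.loc[df["responded"] == 1, "nr_weight"] = 1.0 / df["propensity"]
```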

As previously stated, an overall response rate of 81% is expected (Table 2). To ensure that nonresponding agencies are not fundamentally different from those that participate, a nonresponse bias analysis will be conducted if the agency-level response rate obtained in the 2023 LEMAS PATOW survey falls below 80%. Administrative data on agency type, size, census region or division, and population served will be used in the nonresponse bias analysis. For each agency characteristic, RTI will compare the distribution of respondents to that of nonrespondents, and a Cohen’s effect size statistic will be calculated for each characteristic. If any characteristic has an effect size in the “medium” or “high” range, as defined by Cohen, there is a potential for bias in the estimates.8 Any such characteristic will be included in a nonresponse model to adjust weights and minimize the potential for bias in the estimates. In addition to estimating effect sizes, an examination of early and late responders will be conducted; if late responders (i.e., those requiring more contact attempts before responding) differ significantly on the key outcomes of interest, that is also an indication of potential bias. Comparisons will be made to determine whether the potential for bias varies by agency type and size.
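
The document does not specify which of Cohen’s effect-size statistics will be used; the sketch below uses Cohen’s w, a common choice for comparing categorical distributions such as agency size class among respondents versus nonrespondents, as an assumption.

```python
import numpy as np

def cohens_w(p_resp, p_nonresp) -> float:
    """Cohen's w between two categorical distributions (e.g., agency size-class
    shares among respondents vs. nonrespondents).
    Rough thresholds: ~0.3 medium, ~0.5 large."""
    p_resp, p_nonresp = np.asarray(p_resp, float), np.asarray(p_nonresp, float)
    return float(np.sqrt(np.sum((p_resp - p_nonresp) ** 2 / p_nonresp)))

# Illustrative size-class shares for respondents vs. nonrespondents.
w = cohens_w([0.30, 0.45, 0.25], [0.25, 0.45, 0.30])
print(f"Cohen's w = {w:.2f}")  # flag the characteristic if w reaches ~0.3
```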

  4. Final Testing of Procedures


BJS shared a copy of the draft LEMAS PATOW survey instrument with research scholars known to have an interest in law enforcement issues and with law enforcement professionals. These expert reviewers were given an electronic draft of the survey instrument and asked to comment on question wording, response categories, and overall structure and layout. Responses were primarily received as written annotations within the document. An instrument was then drafted for use in cognitive interviewing. Twenty-four LEAs across the country, distributed evenly across small, large, state, and local agencies, were emailed a draft of the supplement instrument and interviewed about it. These LEAs also commented on question wording, response categories, and layout, and identified any issues with recall or with their ability to complete the instrument. Results from the cognitive interviewing were used to make final revisions to the instrument (Attachment 3).



  5. Contacts for Statistical Aspects and Data Collection


  1. BJS contacts include:

Elizabeth Davis

LECS Program Manager and Statistician

202-305-2667

[email protected]

Alexia Cooper

Law Enforcement Statistics Unit Chief

202-307-0582

[email protected]


Sean Goodison

LEMAS PATOW survey Program Manager

202-532-5148

[email protected]


  2. Persons consulted on statistical methodology:


Harley Rohloff

RTI International


  3. Persons consulted on data collection and analysis:


Megan Waggy

RTI International


Tim Smith

RTI International


Mark Pope

RTI International



1 For reporting purposes, two part-time officers are treated as equivalent to one full-time officer.

2 Hawaii does not have a primary state police agency.

3 Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. John Wiley & Sons.

4 Couper, M. P. (2008). Designing effective Web surveys. Cambridge University Press.

5 Bosnjak, M., Neubarth, W., Couper, M. P., Bandilla, W., & Kaczmirek, L. (2008). Prenotification in web-based access panel surveys: The influence of mobile text messaging versus e-mail on response rates and sample composition. Social Science Computer Review, 26(2), 213-223.

6 Veen, F. V., Göritz, A. S., & Sattler, S. (2016). Response effects of prenotification, prepaid cash, prepaid vouchers, and postpaid vouchers: An experimental comparison. Social Science Computer Review, 34(3), 333-346.


7 Folsom, R.E., & Singh, A.C. (2000). The generalized exponential model for sampling weight calibration for extreme values, nonresponse, and poststratification. In Proceedings of the American Statistical Association’s Survey Research Methods Section, 598-603.

8 Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Routledge.


