
Information Collection Request Supporting Statement: Section B

Survey on Driver Awareness of Motorcycles

OMB Control No. New

Abstract: The National Highway Traffic Safety Administration (NHTSA) of the U.S. Department of Transportation is seeking approval to collect information from two samples of randomly selected adults who are aged 18 years or older and have driven a motor vehicle at least once in the past three months for a one-time voluntary survey to report their knowledge, attitudes, and awareness of safe-driving behaviors towards motorcycles. One sample consists of adult drivers residing in Florida and the other sample consists of adult drivers residing in Pennsylvania. NHTSA will contact a total of 33,460 households to achieve a target of at least 2,486 complete voluntary responses, consisting of 1,243 completed instruments from the Florida sample and 1,243 completed instruments from the Pennsylvania sample. The large geographic and demographic sizes of Florida and Pennsylvania allow for complex driving environments in which motorcycles and passenger vehicles operate in a range of traffic conditions. Notably, neither State has a universal motorcycle helmet use law, but each has a sizable population of registered motorcycles and varied helmet use rates. For example, in 2019, 52 percent of motorcyclists killed in Florida and 51 percent of motorcyclists killed in Pennsylvania were not helmeted.1 The estimated burden of this collection is 3,289 hours, with 2,709 hours associated with survey invitations and reminders and 580 hours associated with survey completions. NHTSA will summarize the results of the collection using aggregate statistics in a final report to be distributed to NHTSA program and regional offices, State Highway Safety Offices, and other traffic safety and motorcycle safety stakeholders. This collection supports NHTSA's mission by obtaining information needed for the development of traffic safety countermeasures, particularly in the areas of communications and outreach, for the purpose of reducing fatalities, injuries, and crashes associated with multi-vehicle motorcycle crashes.

B. JUSTIFICATION

B.1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection. Response rate means -- of those in your respondent sample, from what percentage do you expect to get the required information (if this is not a mandatory collection). The non-respondents would include those you could not contact, as well as those you contacted but who refused to give the information.

The purpose of the survey is to obtain information from drivers about their knowledge, attitudes, and awareness of safe-driving behaviors regarding motorcycles. The goal of the survey is to assess the extent and quality of driver awareness and knowledge of motorcycle interactions and to identify areas in outreach and education that may need improvement. Driver knowledge, attitudes, and awareness of vulnerable road users, including motorcyclists, are related to safe driving behaviors. The survey will use two samples: one of adults residing in Florida and the other of adults residing in Pennsylvania. These States are large geographically and demographically, have sizable motorcycle populations, and have varied helmet use rates under their helmet laws. The proposed study will employ statistical sampling methods to collect information from the target populations and draw inferences from the samples to the target populations. The product will be a technical report to be shared with stakeholders in motorcycle safety, including State highway safety offices, local governments, motorcycle safety advocates, and those who develop traffic safety communications that aim to reduce motorcycle-related crashes.

B.1.a. Respondent Universe

The potential respondent universe includes residents of Florida and residents of Pennsylvania who are at least 18 years old and self-report having driven at least once in the past three months; these are the target populations of the survey. The design selects a probability sample of adults from sampled households, screens the selected adult for driving status, collects demographic data on non-drivers before screening them out, and collects the full survey data from drivers. The survey will be conducted with a sample of at least 2,486 complete voluntary responses, consisting of 1,243 completed instruments from the Florida sample and 1,243 completed instruments from the Pennsylvania sample. Florida and Pennsylvania were selected because each State is geographically and demographically diverse, with a sizable number of motorcycle registrations. In 2019, there were 591,267 registered motorcycles in Florida and 366,641 registered motorcycles in Pennsylvania.2 The 2015-2019 American Community Survey (ACS) estimates there are 7,736,311 residential households in Florida (including 489,240 households without cars) and 5,053,106 households in Pennsylvania (including 552,961 households without cars).3

B.1.b. Respondent Sampling

The survey will use an Address-Based Sampling (ABS) approach to sample selection. The sampling frame will be based on address data from the U.S. Postal Service (USPS) computerized Delivery Sequence File (DSF) of residential addresses. The DSF is derived from mailing addresses maintained and updated by USPS and available from commercial vendors and provides a comprehensive frame that will reach the entire population living at an address that receives mail delivery.

ABS offers advantages over telephone Random Digit Dialing (RDD), such as near-universal coverage, higher response rates, and a better ability to target specific population groups by utilizing community-level sociodemographic data from both the U.S. Census Bureau and auxiliary address-level data compiled by commercial sample providers. However, an ABS frame presents a risk in that ABS frames tend to have systematic nonresponse, with respondents being older and disproportionately Caucasian (Rapoport, Sherr & Dutwin, 2012, 2014).4,5 To mitigate this risk, the sampling procedures include the approach shown in Table 1, which oversamples residences in locations with low predicted response, as indicated by the Low Response Score in the Census Planning Database.

B.1.b.1 Sampling Frame

The sampling frame will be based on address data from the USPS computerized DSF of residential addresses. The DSF is a computerized file that contains all delivery point addresses serviced by the USPS, except for general delivery. Each delivery point is a separate record that conforms to all USPS-addressing standards. The initial studies of the DSF estimated that it provided coverage of approximately 97-98% of the household population.6,7

The DSF cannot be obtained directly from the USPS. It must be purchased through a licensing agreement with private vendors. These vendors are responsible for updating the address listing from the USPS, and augmenting the addresses with information (e.g., name, telephone number) from other data sources. Fors Marsh Group (FMG), the Contractor that will implement the Motorcycle Awareness Survey for NHTSA, will obtain the DSF augmented sample from Marketing Systems Group (MSG). By geocoding an address to a Census block, the MSG file augments the DSF by merging Census and other auxiliary information from the Census data files and other external data sources. MSG appends household, geographic, and demographic data to the frame.

MSG maintains a monthly updated, internal installation of the DSF from the Postal Service. By applying a series of enhancements to the DSF, MSG evolves this database of mail delivery into a sampling frame capable of accommodating multiple layers of stratification or clustering when selecting probability-based samples. Address enhancements provided by MSG include amelioration of some of the known coverage problems associated with the DSF, particularly in rural areas where more households rely on P.O. Boxes and inconsistent address formats.

The DSF is derived from mailing addresses maintained and updated by USPS and available from commercial vendors.8,9 The DSF will provide a comprehensive frame of households in Florida and households in Pennsylvania.

B.1.b.2 Sample Sizes

FMG will categorize the full housing unit street addresses by the Census Planning Database10 (PDB) Low Response Score, which predicts the rate at which households will fail to self-respond to Census Bureau operations such as the American Community Survey and the Decennial Census. The Low Response Score was transformed so that the overall response rate matches our projected response rate of 7.4%. For the Florida sample and the Pennsylvania sample, five categories, or strata, of approximately equal size will be generated according to their anticipated response rates. Households with lower anticipated response rates will be oversampled relative to those with higher anticipated response rates to obtain a more equal ultimate representation in the sample. The population strata sizes will be approximately 1.86 million housing units in Florida and approximately 1.14 million housing units in Pennsylvania. The standard ABS filters will be applied (i.e., excluding vacant addresses, drop point locations, and P.O. Boxes unless flagged as OWGM, "only way to get mail"). Table 1 shows the projected sample sizes and the anticipated number and rate of responses for Florida and for Pennsylvania by the PDB's Low Response Score.

Table 1. Sample Stratification and Anticipated Number of Responses in Contacted Households in Florida and Pennsylvania.


| Strata | Florida: Outgoing Sample | Florida: Anticipated Number of Responses | Florida: Response Rate | Pennsylvania: Outgoing Sample | Pennsylvania: Anticipated Number of Responses | Pennsylvania: Response Rate |
|---|---|---|---|---|---|---|
| 1 (highest response rate) | 2,388 | 249 | 10.42% | 2,589 | 248 | 9.58% |
| 2 | 3,001 | 248 | 8.26% | 2,963 | 249 | 8.40% |
| 3 | 3,330 | 248 | 7.45% | 3,209 | 249 | 7.76% |
| 4 | 3,707 | 249 | 6.72% | 3,570 | 248 | 6.95% |
| 5 (lowest response rate) | 4,304 | 249 | 5.79% | 4,399 | 249 | 5.66% |
| Total | 16,730 | 1,243 | 7.43% | 16,730 | 1,243 | 7.43% |

Source: the 2021 Census Planning Database; Abt Associates calculations (described below).

The PDB provides the data at the level of Census tracts (contiguous geographic areas with populations of about 4,000 people). The tract-level Low Response Score, which is the predicted self-completion non-response rate for Census operations such as the American Community Survey (ACS) and the Decennial Census, was used to obtain the projected number of completed surveys in the following manner. Denoting the value of the Low Response Score (LRS) variable in tract $t$ as $L_t$ (a percentage), we calculated the log odds of response, linearly transformed it, and applied the inverse log odds link to bring it back to the anticipated response rate so that the ultimate number of completed surveys matched the anticipated target:

$$r_t = \operatorname{logit}^{-1}\big(\alpha + \beta \cdot \operatorname{logit}(1 - L_t/100)\big), \qquad \operatorname{logit}(p) = \ln\frac{p}{1-p},$$

where the value of $\alpha$ is solved for numerically so that if $n_t$ survey invitations are sent to tract $t$, the total number of responses $\sum_t n_t r_t$ is approximately 1,243 per State, as specified in the power analysis of Section B.2.c. The value of $\beta$ is set to 0.6 to reduce the variability of the anticipated response rates, producing a somewhat more conservative design closer to a simple random sample and mitigating the risk that the differential response rate assumptions are inaccurate. The resulting range of group-specific response rates, between 6% and 10% from the worst to the best responding population groups, is typical for general population surveys. For instance, the 2020 Behavioral Risk Factor Surveillance System data11 provide both the design weight and the final weight, making it possible to derive raking ratio nonresponse adjustments.12 The average raking ratio adjustment for respondents with a less-than-high-school education was 16.4, while that for college graduates was 5.8, a nearly threefold difference in relative response rates. The differences in raking ratio adjustments by race were smaller (e.g., an average raking ratio of 8.0 for non-Hispanic Whites vs. 11.6 for non-Hispanic Blacks vs. 10.0 for Hispanics, indicating about a 45% difference in group-wise response rates). Within strata, households will be randomly selected with equal selection probabilities. Additional gains in achieving a more balanced representation of the population will be achieved by using systematic sampling within strata: the commercial sample providers can be instructed to sort the frame by data relevant to the project (such as ZIP code, household size, number of vehicles, and household income, where available) before the sample draw is made. The necessary sampling intervals are determined by the sample providers based on the available number of records.
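
To make the transformation concrete, the following Python sketch (not the contractor's code) solves numerically for $\alpha$ so that the expected number of completes hits the per-State target of 1,243. The LRS values are invented for illustration; the invitation counts reuse the Florida stratum mail-out sizes from Table 1 as stand-ins for tracts.

```python
# A minimal sketch, assuming invented tract-level inputs, of solving for alpha
# in r_t = logit^{-1}(alpha + beta * logit(1 - L_t/100)).
import numpy as np
from scipy.optimize import brentq

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

lrs = np.array([15.0, 20.0, 25.0, 30.0, 35.0])       # hypothetical LRS, percent
invites = np.array([2388, 3001, 3330, 3707, 4304])   # invitations per group (Table 1, FL)
beta, target = 0.6, 1243                             # damping factor; completes goal

def expected_completes(alpha):
    rates = inv_logit(alpha + beta * logit(1.0 - lrs / 100.0))
    return float(invites @ rates)

# expected_completes is monotone in alpha, so a bracketing root-finder works.
alpha = brentq(lambda a: expected_completes(a) - target, -10.0, 10.0)
rates = inv_logit(alpha + beta * logit(1.0 - lrs / 100.0))
print(f"alpha = {alpha:.3f}; rates = {np.round(rates, 4)}; "
      f"expected completes = {expected_completes(alpha):.0f}")
```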


B.1.b.3. Within-Household Selection

A number of respondent selection methods have been tested for ABS mail surveys, including for the Behavioral Risk Factor Surveillance System (BRFSS).13 Although past studies have indicated a tendency for the wrong person to complete the survey when applying birthday methods of within-household selection,14 a recent evaluation of birthday selection methods for ABS surveys found a small degree of self-selection in larger households; however, the impact on the substantive estimates was small.15 Considering the low impact on overall estimates and the simplicity of implementing the birthday methods, we will select the adult within the household who has the next birthday to complete the survey (as opposed to the last birthday or a split next/last sample), as illustrated in the sketch below. The within-household selection instructions will be included in all contacts with the household.
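
For illustration, a minimal sketch of the "next birthday" rule: among the household's eligible adults, select the one whose birthday falls soonest after the contact date. The roster and dates are hypothetical.

```python
# A minimal sketch of next-birthday within-household selection; leap-day
# birthdays (Feb. 29) would need an extra rule in production.
from datetime import date

def days_until_next_birthday(birthday: date, today: date) -> int:
    nxt = birthday.replace(year=today.year)
    if nxt < today:
        nxt = birthday.replace(year=today.year + 1)
    return (nxt - today).days

household = {"Adult A": date(1980, 3, 14), "Adult B": date(1975, 11, 2)}
contact_date = date(2022, 9, 1)
selected = min(household, key=lambda a: days_until_next_birthday(household[a], contact_date))
print(selected)   # Adult B: November 2 comes before March 14 from September 1
```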

B.1.c. Response Rate

In the preceding section we discuss how we derived the projected sample sizes and anticipated number and rate of response, by basing calculations on the PDB’s Low Response Score, as shown in Table 1. For Florida and Pennsylvania, we propose using an outgoing sample of 16,730 each and anticipate receiving 1,243 completed survey responses from each State, leading to an overall response rate of 7.43%.

B.2. Describe the procedures for the collection of information including:

Statistical methodology for stratification and sample selection.

B.2.a. Procedures for Collection of Information

The Contractor, FMG, will select a stratified random sample of households from the DSF for Florida and a stratified random sample of households from the DSF for Pennsylvania. The procedures include a set of household contacts (“waves”) to inform households of the project purpose and sponsor and provide instructions for completing the survey. Table 2 lists the contact waves. In the first wave, FMG will mail postcards that briefly introduce the survey project to the selected households and request participation. Next, in Wave 2, an invitation letter will be sent with instructions for completing the survey online. As the survey will use the next birthday method for random selection of one respondent aged 18 or over from a household, the first letter also provides instructions for respondents to follow the “next birthday” method.

Web response is NHTSA’s preferred method for the survey. Therefore, the survey will initially offer only a web response mode for the selected household member; the letter will provide a secure, encrypted web link (using https://) to access the survey. In all cases, sampled households will receive their own non-sequential alphanumeric User ID granting them controlled access to the survey. This unique User ID enables FMG to track whether someone from a household completed the survey. For those who do not respond, FMG will mail a series of additional contact waves as reminders of the web survey and to add a paper alternative. Households that provide a completed response will be removed from subsequent contacts.

Table 2. Motorcycle Awareness Survey Contact Protocol

| Wave | Mode | Contents |
|---|---|---|
| 1 | Post card to serve notice of selection | Serves notice of selection and forthcoming instructions; explains the rationale of the survey |
| 2 | Invitation letter with instructions | Cover letter with PIN, hyperlink to web survey, instructions, contact information, $1 pre-incentive |
| 3 | Post card reminder | First reminder to participate in the survey |
| 4 | Letter with paper instrument | Printed questionnaire, prepaid return envelope, PIN, and hyperlink to web survey |
| 5 | Final post card reminder | Last reminder to participate in the survey |
| 6 | Thank-you letter | Thank-you letter with $10 post-incentive; data collection closes |


In addition to preventing multiple responses from an individual, this unique User ID allows the system to prevent re-entry or editing of entered data once a survey is dispositioned as completed. Cases that are submitted and dispositioned as completed will no longer be permitted to access the survey, and all planned nonresponse follow-up steps for those cases will cease.
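
As an illustration of this kind of access code, the sketch below generates non-sequential alphanumeric IDs with a cryptographically secure generator, so one ID cannot be guessed from another. The 8-character length and the confusion-resistant alphabet are assumptions, not the contractor's specification.

```python
# A minimal sketch, assuming an 8-character code over an alphabet that omits
# easily confused characters (0/O, 1/I/L) to reduce respondent typing errors.
import secrets

ALPHABET = "23456789ABCDEFGHJKMNPQRSTUVWXYZ"

def make_user_ids(n: int, length: int = 8) -> set[str]:
    ids: set[str] = set()
    while len(ids) < n:          # the set guarantees uniqueness
        ids.add("".join(secrets.choice(ALPHABET) for _ in range(length)))
    return ids

print(sorted(make_user_ids(5)))
```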

The web survey will accommodate mobile devices and respondents will be permitted to save their responses as they progress through the survey. If the respondent is disconnected or needs to re-enter the survey, then he or she may begin the survey from the last question answered using the unique user ID provided to them. To minimize measurement error from the differences in mode, the web survey will use questionnaire wording and response options consistent with the mail survey. The web survey will have two navigational buttons: a “Back” button, which allows the respondent to change or review his or her previous responses, and a “Next” button, which saves his or her responses and progresses the survey instrument. The web survey will be accessible to people with disabilities and will be Section 508 compliant.

While the communication materials sent to the sample will encourage respondents to complete the survey online (“push to web”), some respondents may choose to complete the paper version of the questionnaire. Data from completed paper questionnaires will be entered using Scantron technology and uploaded to the respondent control system, which allows for the reporting of the status of all cases. We will conduct data-capture quality checks to ensure compatibility and readability between our scanning equipment and the survey questionnaires.

B.2.b. Precision of Sample Estimates

The objective of the sampling procedures described above is to produce a random sample of the target population. This means that with a randomly drawn sample, one can make inferences about population characteristics within specified limits of certainty and sampling variability. The study will select households using a stratified design to improve efficiency and maximize overall study precision using single housing unit addresses. Analysis of the data collected in this survey will need to be performed in statistical packages that appropriately account for complex survey design features such as weights and stratification, including the SAS SURVEY procedures, Stata's svy commands, and the R survey package. Margins of sampling error for the anticipated State-specific sample sizes are reported in Table 3 below.
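
As an illustration of design-based estimation, a minimal sketch on synthetic data computes a weighted proportion and its margin of error under a stratified, unequally weighted design, using the standard with-replacement linearization variance. In practice, the survey packages named above would be used instead.

```python
# A minimal sketch, assuming synthetic weights and a binary outcome y.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "stratum": rng.integers(1, 6, 1000),       # five design strata
    "weight": rng.uniform(500, 2000, 1000),    # final survey weights
    "y": rng.integers(0, 2, 1000),             # e.g., agrees with a safety item
})

w, y = df["weight"], df["y"]
p_hat = (w * y).sum() / w.sum()                # weighted proportion

# Linearized residuals of the ratio estimator, with an n_h/(n_h - 1)
# correction, summed within and across strata.
df["z"] = w * (y - p_hat) / w.sum()
var = 0.0
for _, g in df.groupby("stratum"):
    n_h = len(g)
    var += n_h / (n_h - 1) * ((g["z"] - g["z"].mean()) ** 2).sum()
print(f"estimate = {p_hat:.3f}, margin of error = +/-{1.96 * np.sqrt(var):.3%}")
```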

B.2.c. Power Analysis

Sample size calculations, including the effect size detectable between the two States with 80% power at p = 50%, are presented in Table 3 below.

Table 3. Sample Size Calculations.

| Quantity | Value |
|---|---|
| (1) Effective sample size per State | 1,000 |
| (2) Margin of error, p = 50% | 3.1% |
| (3) Margin of error, p = 10% | 1.9% |
| (4) Nominal sample size per State = (1) × unequal weighting design effect | 1,243 |
| (5) Surveys to mail out per State = (4) / eligibility rate / completion rate | 16,730 |
| (6) Effect size detectable between two States, 80% power, p = 50% | 6.4% |
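
The margins of error and detectable effect in Table 3 follow from standard normal-approximation formulas; the sketch below reproduces rows (2), (3), and (6). Small differences from the table (e.g., 6.3% vs. 6.4%) can arise from rounding or a slightly different power approximation.

```python
# A minimal sketch of the Table 3 quantities from textbook formulas.
import math

n_eff = 1000                       # (1) effective sample size per State
for p in (0.50, 0.10):
    moe = 1.96 * math.sqrt(p * (1 - p) / n_eff)
    print(f"Margin of error at p = {p:.0%}: +/-{moe:.1%}")   # rows (2) and (3)

z_a, z_b = 1.96, 0.8416            # two-sided alpha = 0.05; 80% power
p = 0.50
detectable = (z_a + z_b) * math.sqrt(2 * p * (1 - p) / n_eff)
print(f"Detectable difference between States: {detectable:.1%}")  # row (6)
```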


B.2.d. Response Rate

In general, response rates can be affected by many factors, such as mode, sponsor, topic, questionnaire length, use of incentives, and frequency and intensity of follow-up efforts for nonrespondents (see, for example, Groves & Couper 2012;16 Massey & Tourangeau 2013;17 Plewes & Tourangeau 201318). However, based on prior experience, we anticipate an AAPOR19 Response Rate 3 (RR3) of approximately 4% in each State.
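
For reference, AAPOR's RR3 divides complete interviews by completes plus partials, refusals, non-contacts, others, and an estimated eligible fraction of the unknown-eligibility cases. The sketch below shows the mechanics; all disposition counts and the eligibility estimate e are invented for illustration.

```python
# A minimal sketch of AAPOR Response Rate 3 (RR3) on hypothetical dispositions:
# I = completes, P = partials, R = refusals, NC = non-contacts, O = other,
# UH/UO = unknown-eligibility cases, e = estimated eligible share of unknowns.
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

I, P, R, NC, O, UH, UO = 670, 30, 900, 2400, 100, 12000, 630
known_eligible = I + P + R + NC + O           # resolved, eligible cases
known_ineligible = 500                        # resolved, ineligible cases
e = known_eligible / (known_eligible + known_ineligible)   # one common choice
print(f"RR3 = {aapor_rr3(I, P, R, NC, O, UH, UO, e):.1%}") # ~4%
```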

B.2.e. Non-response Bias Analysis

The study will follow a two-step process to assess the risk of nonresponse bias using auxiliary variables available for the entire invited sample. First, logistic regression methods will be used to estimate response propensity, using auxiliary variables in the sampling frame that are available for both respondents and nonrespondents. For the ABS frame, these will include commercially appended auxiliary variables or appended geographic data (e.g., block group characteristics from the American Community Survey (ACS) or the U.S. Census mail return rate). Then, for those demographic characteristics that are found to be significant predictors of response status, additional logistic regression analyses will be conducted to determine whether these characteristics are also significantly related to key outcome variables of interest. For characteristics related to both response propensity and survey outcomes, unadjusted estimates would be subject to bias; this will be considered when computing survey weights. It is important to note that nonresponse bias is specific to a particular statistic, so separate assessments will be needed for key estimates.
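
A minimal sketch of the first step follows. The covariates are invented stand-ins for commercially appended or block-group data, and response status is random here, so the point is the mechanics rather than the coefficients.

```python
# A minimal sketch, assuming hypothetical frame-level covariates, of a
# response-propensity logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
frame = pd.DataFrame({
    "responded": rng.integers(0, 2, n),
    "census_mail_return_rate": rng.uniform(0.4, 0.95, n),
    "median_hh_income": rng.normal(65000, 20000, n),
    "pct_owner_occupied": rng.uniform(0.2, 0.95, n),
})

propensity = smf.logit(
    "responded ~ census_mail_return_rate + median_hh_income + pct_owner_occupied",
    data=frame,
).fit(disp=False)
print(propensity.summary())
# Covariates significant here would then be tested against key survey outcomes
# among respondents; overlap signals a nonresponse bias risk.
```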

B.2.f. Sample Weighting

The study will develop full-sample weights that will reflect the study design and mitigate the risks of nonresponse bias and/or coverage bias. These weights will allow expansion of the samples to the target populations of the two states (Florida and Pennsylvania). For each state, weights will be computed as follows:

Base Weights. Base weights (i.e., sampling weights) will be computed as the inverse of each household’s probability of selection from the frame, which is the number of addresses sampled divided by the number of addresses on the frame.

Nonresponse Adjustments. Weights will then be adjusted in two steps to compensate for unit nonresponse: the first step will entail an eligibility adjustment (i.e., to account for sample members with unknown eligibility) and the second step will entail a survey completion adjustment (i.e., to account for eligible households who did not complete the survey). In each stage, the weights of usable cases will be inflated to account for cases that are unusable. The eligibility adjustment will entail inflating the weights of units with known eligibility (i.e., eligible and ineligible units) to account for the unknown eligibility group. The survey completion adjustment will entail inflating the weights of eligible respondents to account for eligible households who did not complete the survey (i.e., eligible nonrespondents).

Within-Household Adjustment. This adjustment will account for household size and adjust for the probability of a single eligible adult being selected within the household. It is based upon responses to a survey question that asks how many eligible adults live in the household. The probability of selecting a single adult is inversely proportional to the number of eligible adults in the household; therefore, this weighting step entails multiplying the previous weights by the number of eligible adults in the household. For example, if there are two eligible adults in a household, then the probability of selecting a single adult is 1/2 and the previous weights will be multiplied by 2. Note that for purposes of computing this adjustment, we expect to top-code the number of eligible adults (e.g., at 2 or 3) to mitigate weight variation and improve survey precision.

Handling of Frame Multiplicity. This refers to the situation in which two or more records exist on the sample frame and the set of records refer to the same physical housing unit in the target population. This can occur, for example, if one record on the ABS frame has a P.O. address for a housing unit and a second record has the physical address of the housing unit itself. This can also occur if two records have varying abbreviations of the same address and the multiplicity was not found at the time the frame was constructed. Since these tend to be rare events, if we encounter two or more records in the sample that point to the same physical housing unit, one record will be flagged as being eligible for the study and the other record(s) will be flagged as being ineligible. All records retain their original selection weight and omitting ineligible records from the sample is an effective way to estimate the size of the eligible target population. The size of the eligible target population is the sum of the base sampling weights associated with eligible cases.

Multiplicity Adjustment for Addresses Containing Multiple Households. Multiplicity adjustments are applied to account for the lack of one-to-one correspondence between addresses and households. This first multiplicity adjustment accounts for situations in which two or more households share the same mailing address. In this case, there would be one record on the sample frame for the address, but in fact the record corresponds to more than one household. One example of this situation is for a household in which a second family is living in the basement. Another example is when there are multiple structures on a property, but each structure belongs to a different household, as in a main house/guest house situation. To address this issue, we will assume the person opening the survey solicitation material represents a randomly selected individual from all households that receive mail at the address. One survey item asks the respondent whether additional households receive mail at that address. If more than one household receives mail at that address, then the sample weight for the respondent will be multiplied by a factor of 2. This adjustment will allow us to keep the sampling unit at the household-level, and not at the address-level. This adjustment implicitly assumes that no more than two households share an address, which mitigates weight variability.

Multiplicity Adjustment for Households with Multiple Addresses. This adjustment accounts for situations in which a household can be reached at multiple addresses. For example, a household receiving mail at two distinct addresses within the state has twice the chance of selection (assuming an equal probability design). Therefore, we ask respondents questions on mail use to ascertain whether they receive mail at multiple addresses. Respondents receiving mail at multiple addresses will have their weight multiplied by a factor of 0.5 to avoid overrepresentation in the sample. This adjustment implicitly assumes that households have no more than two addresses at which they receive mail; this assumption allows for simpler question wording and mitigates weight variability.

Calibration (Raking) Adjustments. Weights will be calibrated to external benchmarks via raking (i.e., iterative proportional fitting). This last step produces the final survey weights for use in population level estimation and inference and ensures that the weighted sample characteristics match known distributions for the target population. Calibration adjustments can also be used to correct distortions in the sample distribution of key respondent characteristics caused by nonresponse and/or coverage error. The study will calibrate to both household-level characteristics (e.g., income, rent status, number of vehicles, number of employed adults) and individual-level characteristics (e.g., age, gender, race/ethnicity, education, and/or marital status). The American Community Survey (ACS) will be used to obtain appropriate population control totals for the Florida and Pennsylvania target populations as the external benchmarks.
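
A consolidated, minimal sketch of the weighting chain above follows: base weights, the two nonresponse adjustments, the within-household multiplier (top-coded at 3), the two multiplicity factors, and a raking step. All counts, flags, and control totals are synthetic, and the raking margins are illustrative stand-ins for ACS benchmarks.

```python
# A minimal sketch, assuming invented case dispositions and control totals.
import numpy as np
import pandas as pd

# Base weights: inverse probability of selection from the frame.
N_FRAME, N_SAMPLED = 1_860_000, 16_730
df = pd.DataFrame({
    "elig_known": [True, True, True, True, False, True],
    "eligible":   [True, True, True, True, False, False],
    "completed":  [True, True, True, False, False, False],
    "n_adults":   [1, 2, 4, 0, 0, 0],     # eligible adults reported by completes
    "multi_hh":   [False, True, False, False, False, False],  # two households, one address
    "multi_addr": [False, False, True, False, False, False],  # one household, two addresses
})
df["w"] = N_FRAME / N_SAMPLED

# Nonresponse adjustment 1: known-eligibility cases absorb the unknowns.
known = df["elig_known"]
df.loc[known, "w"] *= df["w"].sum() / df.loc[known, "w"].sum()

# Nonresponse adjustment 2: completes absorb eligible nonrespondents.
comp = df["completed"]
elig = known & df["eligible"]
df.loc[comp, "w"] *= df.loc[elig, "w"].sum() / df.loc[comp, "w"].sum()

# Within-household selection: multiply by eligible adults, top-coded at 3.
df.loc[comp, "w"] *= df.loc[comp, "n_adults"].clip(upper=3)

# Multiplicity: x2 if two households share the address; x0.5 if the household
# receives mail at two addresses.
df.loc[comp & df["multi_hh"], "w"] *= 2.0
df.loc[comp & df["multi_addr"], "w"] *= 0.5

# Raking (iterative proportional fitting) to two margins, shown on a larger
# synthetic respondent file so the iterations are stable.
rng = np.random.default_rng(5)
resp = pd.DataFrame({
    "age_grp": rng.choice(["18-34", "35-54", "55+"], 200),
    "gender": rng.choice(["F", "M"], 200),
    "w": rng.uniform(800, 1600, 200),
})
controls = {"age_grp": {"18-34": 60_000, "35-54": 70_000, "55+": 65_000},
            "gender": {"F": 100_000, "M": 95_000}}
for _ in range(50):
    for var, totals in controls.items():
        cur = resp.groupby(var)["w"].sum()
        resp["w"] *= resp[var].map(totals) / resp[var].map(cur)
print(resp.groupby("age_grp")["w"].sum())   # matches age controls
print(resp.groupby("gender")["w"].sum())    # matches gender controls
```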

B.3. Describe methods to maximize response rates.

The survey will use a logical flow of household contacts to introduce the study, provide information on completion, encourage nonresponse conversion, present a paper alternative to be returned by mail, and include an incentive. Another facilitator of response will be the adaptation of the web-based questionnaires for mobile platforms (e.g., smartphones, tablets) so that prospective respondents who wish to use such devices when taking the survey are not deterred. Once a questionnaire is programmed, the platform will automatically adapt the presentation to optimize completion on a mobile device. This survey involves sending six waves of mailings to potential respondents and nonrespondents in two States. These mailing materials will include:

  1. An advance postcard, describing the purpose of the survey, the survey request, and expected burden.

  2. An invitation letter with a secure link to the web survey, instructions for within-household selection, and a prepaid $1 incentive. Individuals who submit a completed questionnaire will receive a $10 incentive.

  3. A postcard reminder to complete the survey that repeats the language of the advance letter.

  4. A second mailing, which will aim to obtain responses from those who are unable or do not wish to complete the survey online. This mailing will contain a cover letter, paper survey instrument, and a postage paid business reply envelope.

  5. A final reminder postcard to all nonrespondents.

  6. A thank you note and $10 post-paid incentive.

During the survey administration, FMG will maintain support for the respondents via an e-mail help desk and a toll-free phone number. Clear instructions for accessing this support will be provided on paper materials and the web survey.

B.4. Describe any tests of procedures or methods to be undertaken.

B.4.1. Cognitive Testing

As part of the study design, the contractor conducted nine cognitive interviews to test and refine questions for the survey. All interview participants were drawn from the study’s target population.

The entire survey instrument was tested. The sessions were conducted either in person at FMG’s facilities or via video conferencing that allowed the participant and interviewer to see and hear one another. At the start of the interview, the interviewer described the purpose of the interview and provided detailed instructions to the participant. Although most respondents will complete a web-based survey, cognitive testing was conducted using a digital version of the paper survey displayed on a screen shared by the interviewer and participant. Notes were taken in real time for analysis and each session was recorded.

The pre-determined and ad-hoc probes were generally designed to:

  1. Ensure that participants understood the survey items as intended.

  2. Assess the language and clarifying definitions included.

  3. Assess the appropriateness of the response options/anchors used for the survey items.

  4. Assess the ordering of the survey items.

The notes taken during each cognitive interview were combined into a single Excel file that was organized by survey question and standard probes (i.e., those probes administered to all participants). Analysis involved identifying for each question any pain points (e.g., what worked, did not work, and why) and the number of participants that shared similar thoughts or experiences. Audio recordings were consulted as needed to confirm or clarify certain findings.

Overall, the survey was received positively and was understood by participants, who had a range of educational backgrounds and a variety of driving experience. The findings produced eight recommendations, including: adding reminders for survey takers to answer from the perspective of an operator of a passenger vehicle (i.e., as a driver of a car, van, SUV, or truck) rather than from the perspective of a motorcyclist (in cases where a respondent also operates a motorcycle); specifying that survey takers should respond based on their current driving behavior; revising double-barreled questions; and clarifying that, for frequency questions, the response option "neither agree nor disagree" should be used to indicate that the participant engages in a driving behavior with equal frequency.

B.4.2. Planned data analysis

In the final report, we will present descriptive statistics of the sample in a series of frequency and cross-tabulation tables and will use inferential statistical tests (e.g., correlations, t-tests, ANOVAs, and ANCOVAs) to identify statistically significant relationships among variables. The descriptive statistics will summarize the sample in terms of demographic variables, including respondent age, gender, State (Florida, Pennsylvania), survey mode, and years of driving experience, as well as the mean and range of responses to survey items on attitudes, beliefs, and self-reported behaviors. The inferential tests will include chi-square tests for cross-tabulated variables and logistic regression for variables measuring driver attitudes, beliefs, knowledge, and self-reported behaviors. For example, we would test the relationship between the number of motorcyclist friends and family members of respondents and their agreement with the statement that "Drivers should take extra care to look out for motorcyclists." Another example is a multiple regression to measure the degree to which exposure to motorcycling and motorcyclists is statistically related to self-reported driving behaviors towards motorcycles (such as maintaining "safe" distances from motorcycles).
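
A minimal sketch of these analyses on synthetic data follows: a chi-square test on a cross-tabulation and a regression of a self-reported behavior on exposure. Variable names such as agrees_extra_care and exposure_score are hypothetical stand-ins for survey items, and unweighted models stand in for the design-weighted analyses of the final report.

```python
# A minimal sketch, assuming invented survey variables.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 800
d = pd.DataFrame({
    "state": rng.choice(["FL", "PA"], n),
    "knows_motorcyclist": rng.choice(["yes", "no"], n),
    "agrees_extra_care": rng.choice(["agree", "disagree"], n),
    "exposure_score": rng.uniform(0, 10, n),
})
d["safe_distance_score"] = 3 + 0.2 * d["exposure_score"] + rng.normal(0, 1, n)

# Chi-square test on a cross-tabulation.
tab = pd.crosstab(d["knows_motorcyclist"], d["agrees_extra_care"])
chi2, pval, dof, _ = chi2_contingency(tab)
print(f"chi-square = {chi2:.2f}, p = {pval:.3f}")

# Regression of a self-reported behavior on exposure, controlling for State.
fit = smf.ols("safe_distance_score ~ exposure_score + state", data=d).fit()
print(fit.params)
```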

B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.

The following individuals have reviewed technical and statistical aspects of procedures that will be used to conduct this survey:


Rob Calderón, PhD

Division Director, Organizational Research and Consulting

Fors Marsh Group

1010 N. Glebe Rd., Suite 510

Arlington, VA 22201

(703) 801-9013


Martha McRoy

Senior Associate

Data Science, Surveys, and Enabling Technologies Division

Abt Associates

6130 Executive Blvd.

Rockville, MD 20852

(301) 347-5362


Kathryn Wochinger, PhD

Research Psychologist

Office of Behavioral Safety Research

National Highway Traffic Safety Administration

1200 New Jersey Ave SE

Washington, DC 20590

(202) 366-4300

1 National Center for Statistics and Analysis. (2021, April). Motorcycles: 2019 data (Traffic Safety Facts. Report No. DOT HS 813 112). National Highway Traffic Safety Administration.

2 U.S. Department of Transportation, Federal Highway Administration, Office of Highway Policy Information. (November 2020). Highway Statistics Series, State Motor-Vehicle Registrations – 2019. Retrieved from:

https://www.fhwa.dot.gov/policyinformation/statistics/2019/mv1.cfm

3 U.S. Census Bureau. (December 10, 2020). 2019: ACS 5-year estimates detailed tables. Retrieved from:

https://data.census.gov/cedsci/table?g=0400000US12,42&tid=ACSDT5Y2019.B25044&hidePreview=true



4 Rapoport, R., Sherr, S., & Dutwin, D. (2012). Does Ethnically Stratified Address-based Sample Result in Both Ethnic and Class Diversity; Case Studies in Oregon and Houston. Presented at the annual conference of the American Association of Public Opinion Research in Orlando, FL; May 2012.

5 Rapoport, R., Sherr, S., & Dutwin, D. (2014). Address Based Samples: Key Factors in Refining this Research Methodology. SSRS Whitepaper Archive.

6 Iannacchione, V. G., Staab, J. M., & Redden, D. T. (2003). Evaluating the use of residential mailing addresses in a metropolitan household survey. Public Opinion Quarterly, 67(2), 202-210.

7 Link, M. W., Battaglia, M. P., Frankel, M. R., Osborn, L., & Mokdad, A. H. (2008). A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72, 6-27.

8 Link, M. W., Battaglia, M. P., Frankel, M. R., Osborn, L., & Mokdad, A. H. (2008). A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72, 6-27.

9 Iannacchione, V. G. (2011). The changing role of address-based sampling in survey research. Public Opinion Quarterly, 75(3), 556-575.

10 U.S. Census Bureau. (2021). Census Planning Database. Retrieved from: https://www.census.gov/topics/research/guidance/planning-databases.html

11 Centers for Disease Control and Prevention. (August 2021). 2020: Behavioral Risk Factor Surveillance System Data. Retrieved from: https://www.cdc.gov/brfss/annual_data/annual_2020.html

12 https://www.cdc.gov/brfss/annual_data/annual_2020.html

13 Battaglia, M. P., Link, M. W., Frankel, M. R., Osborn, L., & Mokdad, A. H. (2008). An evaluation of respondent selection methods for household mail survey. Public Opinion Quarterly, 72(3), 459–469.

14 Olson, K., Stange, M., & Smyth, J. (2014). Assessing within-household selection methods in household mail surveys. Public Opinion Quarterly, 78(3), 656–678.

15 Boyle, J., Tortora, R., Higgins, B., & Freedner-Maguire, N. (2017). Mode effects within the same individual between web and mail administration. Paper presented at the AAPOR 72nd Annual Conference, May 18-21, 2017.

16 Groves, R. M., & Couper, M. P. (2012). Nonresponse in household interview surveys. John Wiley & Sons.

17 Massey, D. S., & Tourangeau, R. (2013). Where do we go from here? Nonresponse and social measurement. The Annals of the American Academy of Political and Social Science, 645(1), 222-236.

18 Plewes, T. J., & Tourangeau, R. (Eds.). (2013). Nonresponse in social science surveys: A research agenda. Washington, DC: The National Academies Press.

19 The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. AAPOR.


