
Information Collection Request Supporting Statements: Part B

National Survey of Pedestrian and Bicyclist Attitudes, Knowledge, and Behaviors

OMB Control No. 2127-0684


Abstract [1]: The National Highway Traffic Safety Administration (NHTSA) of the U.S. Department of Transportation is seeking approval to reinstate with modification a previously approved information collection (OMB Control No. 2127-0684) to conduct the National Survey of Bicyclist and Pedestrian Attitudes and Behaviors (NSBPAB) by contacting an estimated 22,943 households by mail for participation. The push-to-web survey with a mail supplement will be completed by a national probability sample of at least 7,500 U.S. adults (aged 18 and older). Participation by respondents would be voluntary. This collection only asks respondents to report their answers; there are no record-keeping costs to the respondents. The survey was reviewed by an IRB and determined to be exempt. NHTSA will use the information to produce a technical report that presents the results of the survey. The technical report will provide aggregate (summary) statistics and tables as well as the results of statistical analysis of the information, but it will not include any personally identifiable information. The purpose of the survey is to obtain up-to-date information about bicyclist and pedestrian attitudes and behaviors, biking and walking frequency, use of e-bikes and e-scooters, and perceptions of community investments in bicycle and pedestrian infrastructure.


The technical report will be shared with State highway safety offices, local governments, transportation planners, engineers, policymakers, researchers, educators, advocates, and others who use the data from this survey to support their work. The total estimated burden to complete the study is 4,182 hours: 1,469 hours for contacting 15,443 potential participants who do not respond, 32 hours for contacting 309 potential pilot participants who do not respond, 2,626 hours for contacting and recruiting the 7,500 participants, and 55 hours for the 150 pilot participants. All estimates were rounded up to the nearest whole hour.


When NHTSA last received approval of this information collection, the estimated burden was 3,005 hours. The increase in burden of 1,177 hours is a result of using a larger sample and of including burden not just for the estimated number of completed surveys but also for the estimated number of contacts of potential respondents. NHTSA has conducted the NSBPAB on two previous occasions, first in 2002 [2] and again in 2012 [3]. (The final reports for the 2012 administration of the survey are included as Supplemental Documents.) NHTSA is seeking approval for reinstatement of the information collection because up-to-date information is needed to identify trends across time as well as to understand emerging trends such as the rapid deployment of e-bikes and e-scooters throughout American communities and increasing levels of distraction or inattention associated with smartphone use among all travelers. Study results should produce useful information for bicycle and pedestrian safety stakeholders. The legacy study is being redesigned to sample respondents using address data from the most recent U.S. Postal Service (USPS) computerized Delivery Sequence File (DSF) of residential addresses and to administer the survey via web and mail (replacing the former random-digit-dial computer-assisted telephone interview design).


B.1. Describe the potential respondent universe and any sampling or other respondent selection method to be used.


The National Survey of Bicyclist and Pedestrian Attitudes and Behaviors (NSBPAB) collects critical population-level data that will help NHTSA understand the habits and behaviors of bicyclists and pedestrians, quantify the magnitude of bicycle and pedestrian activity across the country, and gauge the safety needs of this population. The study will collect information from 7,500 adults (18 years old and older) in the United States. The proposed study will employ statistical sampling methods to collect information from the target population and draw inferences from the sample to the target population. The technical report will be shared with State highway safety offices, local governments, transportation planners, engineers, policymakers, researchers, educators, advocates, and others who use the data from this survey to support their work.


B.1.a. Respondent Universe

The 2022 NSBPAB will be conducted with a national sample of 7,500 adults ages 18 years and older, residing in the 50 States and the District of Columbia.

We will conduct a multi-mode web and mail survey with households randomly selected from an address-based sampling (ABS) frame. We will stratify the ABS frame by the 10 NHTSA regions and proportionally allocate the total sample of addresses to the total number of residential addresses in each region. Two oversamples will be incorporated into the national sample: one to increase the number of Hispanic households and one to increase the number of young adults aged 18-34. Data will be collected in English and Spanish; those who speak neither language will be excluded.

B.1.b. Respondent Sampling

The survey will use an ABS approach to sample selection. The sampling frame will be based on address data from the U.S. Postal Service (USPS) computerized Delivery Sequence File (DSF) of residential addresses. The DSF is derived from mailing addresses maintained and updated by USPS and available from commercial vendors. [4,5] With 147 million residential addresses nationally, the DSF provides a comprehensive frame that will reach the entire population living at an address that receives mail delivery.

B.1.b.1 Sampling Frame

The DSF is a computerized file that contains all delivery point addresses serviced by the USPS, with the exception of general delivery. Each delivery point is a separate record that conforms to all USPS addressing standards. The initial studies of the DSF estimated that it provided coverage of approximately 97-98% of the household population. [6,7] DSF coverage in rural areas tends to be lower than in urban areas [8] but is increasing as more rural areas are converted to city-style addresses for 911 services. [9] The DSF address frame thus provides a near-complete sampling frame for household population surveys in the United States.

The DSF cannot be obtained directly from the USPS; it must be purchased through a licensing agreement with private vendors. These vendors are responsible for updating the address listing from the USPS and for augmenting the addresses with information (e.g., name, telephone number) from other data sources. ICF, the Contractor that will implement the NSBPAB for NHTSA, will obtain the augmented DSF sample from Marketing Systems Group (MSG). By geocoding each address to a Census block, MSG augments the DSF with auxiliary information from Census data files and other external data sources, appending household, geographic, and demographic data to the frame.

MSG maintains a monthly updated, internal installation of the DSF from the Postal Service. By applying a series of enhancements, MSG turns this database of mail delivery points into a sampling frame capable of accommodating multiple layers of stratification or clustering when selecting probability-based samples. The enhancements ameliorate some of the known coverage problems associated with the DSF, particularly in rural areas where more households rely on P.O. Boxes and inconsistent address formats.

There were approximately 147 million residential addresses in the DSF as of December 2020. This count excludes business addresses. It also excludes addresses labeled as “No Stat,” which are generally addresses where there is no mail delivery, such as buildings for which building permits have been obtained but mail delivery has not commenced.

The sampling frame for the NSBPAB will include all residential addresses: city-style addresses (89.3%), P.O. Boxes (10.7%), rural routes (<0.1%), and highway contracts (<0.1%). The frame will exclude P.O. Boxes where the household also receives home delivery. The DSF classifies a P.O. Box as either the Only Way to Get Mail (OWGM; 1.4 million) or a traditional Post Office Box where the household also receives delivery at a street address (14.3 million). The NSBPAB will include only the OWGM P.O. Boxes, since households with a traditional P.O. Box are already represented in the sampling frame by their home delivery address. In total, the frame for the NSBPAB includes 133 million mailable addresses.

The DSF includes flags identifying the address as seasonal (<0.1%) or vacant (7.0%). To maximize coverage of the population, the NSBPAB frame will include these addresses.

Drop points are building addresses with multiple deliveries and no separate addresses within the building (i.e., no apartment numbers). Drop units, the individual delivery units within drop points, represent less than 1% of all residential addresses. In actual mail delivery, the drop units have names attached so that mail can be appropriately routed within the building by tenant or landlord; however, the commercial DSF file provides only the number of drop units within a drop point address. The most common approaches to handling drop points in address-based samples are either to exclude the drop points (or those with more than a few drop units) or to include all drop units for any selected drop point, since there is no basis for selection within the drop unit. The NSBPAB will include drop points in the sampling frame. The drop points will be expanded based on the number of units at each location, and if a drop point is selected, research will be conducted to determine the unit identifiers for the building.

Some addresses are classified as educational (<0.1%), which represents student housing. These are effectively a special type of drop point, since there are no individual unit addresses within the buildings. They will be included in the sample, like drop points, particularly given the importance of the young adult sample to this survey and its under-representation in most population surveys.
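To make the frame-construction rules above concrete, the following is a minimal Python sketch that applies them to a toy address file. The column names and flag values are illustrative only and do not reflect the vendor's actual schema.

```python
import pandas as pd

# Hypothetical DSF extract; columns are illustrative, not MSG's real schema.
frame = pd.DataFrame({
    "address_id": [1, 2, 3, 4, 5, 6],
    "type":       ["city", "pobox", "pobox", "rural_route", "city", "drop_point"],
    "owgm":       [False, True, False, False, False, False],  # Only Way to Get Mail
    "vacant":     [False, False, False, False, True, False],
    "drop_units": [1, 1, 1, 1, 1, 4],  # delivery units at a drop-point address
})

# Keep all residential addresses except traditional P.O. Boxes, which duplicate
# a street-delivery address; OWGM boxes stay in. Vacant/seasonal flags stay in
# to maximize population coverage.
nsbpab_frame = frame[(frame["type"] != "pobox") | frame["owgm"]].copy()

# Expand drop points into one record per delivery unit, since the commercial
# DSF carries only a unit count for these buildings.
nsbpab_frame = nsbpab_frame.loc[
    nsbpab_frame.index.repeat(nsbpab_frame["drop_units"])
].reset_index(drop=True)
```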

B.1.b.2 Sample Sizes

The NSBPAB frame will be stratified into the 10 NHTSA regions, and the overall sample of 7,500 will be allocated based on the total number of residential addresses in each region, with a minimum of 500 completes per region. Assuming a 1.75 design effect due to weighting, we expect national estimates to have a margin of error of +/-1.5 percentage points at the 95% confidence level and error margins for regional estimates ranging from +/-4.1% to +/-5.8%. [10] Table 1 contains the total number of occupied housing units, total addresses on the DSF, and the target sample size by region.
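One standard way to implement a proportional allocation with a per-stratum floor is sketched below in Python, using the Table 1 address counts; the variable names are ours. Regions whose proportional share falls below 500 completes are fixed at the floor and the remainder is re-allocated proportionally. The published Expected Completes in Table 1 are rounder than this output, so the actual allocation was evidently smoothed further.

```python
# Allocate 7,500 completes to 10 NHTSA regions proportionally to address
# counts (from Table 1), with a 500-complete floor per region.
addresses = {
    1: 4_697_848, 2: 18_080_973, 3: 13_870_631, 4: 21_092_577, 5: 22_422_898,
    6: 16_977_435, 7: 7_299_000, 8: 5_478_304, 9: 17_138_384, 10: 6_031_975,
}
total_n, floor = 7_500, 500
alloc, pool, n_left = {}, dict(addresses), total_n
while True:
    tot = sum(pool.values())
    prop = {r: n_left * a / tot for r, a in pool.items()}
    short = [r for r, n in prop.items() if n < floor]
    if not short:
        alloc.update({r: round(n) for r, n in prop.items()})
        break
    for r in short:          # fix floor regions, re-allocate the remainder
        alloc[r] = floor
        n_left -= floor
        pool.pop(r)
print(alloc)  # regions 1, 7, 8, 10 hit the 500 floor, as in Table 1
```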


Table 1. Regional Frame Counts and Sample Size

| NHTSA Region/States | 2019 ACS Occupied Housing Units | Addresses (Dec 2020) | Selected Addresses | Expected Completes | +/-95% CI |
|---|---|---|---|---|---|
| United States - Total | 120,756,048 | 133,090,025 | 25,000 | 7,500 | 1.5% |
| 1: Maine, Massachusetts, New Hampshire, Rhode Island, Vermont | 4,379,973 | 4,697,848 | 1,667 | 500 | 5.8% |
| 2: Connecticut, New Jersey, New York, Pennsylvania | 16,998,960 | 18,080,973 | 3,000 | 900 | 4.3% |
| 3: Delaware, District of Columbia, Kentucky, Maryland, North Carolina, Virginia, West Virginia | 12,436,642 | 13,870,631 | 2,667 | 800 | 4.6% |
| 4: Alabama, Florida, Georgia, South Carolina, Tennessee | 17,882,156 | 21,092,577 | 3,333 | 1,000 | 4.1% |
| 5: Illinois, Indiana, Michigan, Minnesota, Ohio, Wisconsin | 20,571,711 | 22,422,898 | 3,333 | 1,000 | 4.1% |
| 6: Louisiana, New Mexico, Mississippi, Oklahoma, Texas | 14,795,848 | 16,977,435 | 3,000 | 900 | 4.3% |
| 7: Arkansas, Iowa, Kansas, Missouri, Nebraska | 6,726,468 | 7,299,000 | 1,667 | 500 | 5.8% |
| 8: Colorado, Nevada, North Dakota, South Dakota, Utah, Wyoming | 5,117,729 | 5,478,304 | 1,667 | 500 | 5.8% |
| 9: Arizona, California, Hawaii | 16,074,958 | 17,138,384 | 3,000 | 900 | 4.3% |
| 10: Alaska, Idaho, Montana, Oregon, Washington | 5,771,603 | 6,031,975 | 1,667 | 500 | 5.8% |


Addresses are drawn from the DSF using a 1-in-k systematic sample so that each address has an equal probability of selection within each stratum. The ABS database is sorted by ZIP+4 within state to ensure a geographically proportional allocation.
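A minimal sketch of equal-probability systematic selection from a sorted frame follows; the function and variable names are ours.

```python
import random

def systematic_sample(addresses, n):
    """Equal-probability 1-in-k systematic sample from a sorted frame.

    `addresses` should already be sorted by ZIP+4 within state so that the
    fixed skip interval spreads the sample geographically.
    """
    k = len(addresses) / n           # sampling interval
    start = random.uniform(0, k)     # random start in [0, k)
    return [addresses[int(start + i * k)] for i in range(n)]

# e.g., 1,667 addresses from a hypothetical Region 1 stratum list:
# sample = systematic_sample(region1_frame, 1667)
```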

To oversample Hispanic households, we will geographically stratify block groups with high percentages of Hispanic households as identified in the 2015–2019 American Community Survey (ACS). [11]

To oversample young adults, we will use a two-phase sample, called double sampling, for stratification. We will first select a national sample of addresses from the DSF as described above. We will then append a model-based age indicator (provided by MSG) identifying addresses where the head of household is likely to be aged between 18 and 34 years. We will stratify the sample based on the 18-34 indicator and select the second-phase sample by oversampling addresses in the 18-34 stratum relative to the non-18-34 stratum. While the rates at which such appended data agree with actual household characteristics tend to be low for telephone numbers [12], they tend to be higher for addresses [13]. Thus, we expect that stratifying by the “likely 18-34” indicator will increase the number of young adult respondents in the sample. We will correct for the oversampling of addresses in this stratum in the weighting.
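The second-phase selection can be sketched as follows; this is an illustration under assumed inputs (the flag name `likely_18_34` and the `young_rate` parameter are ours, not MSG's), and it records the phase-2 selection probability needed for the weighting correction.

```python
import random

def second_phase_sample(phase1, n2, young_rate):
    """Two-phase (double) sampling for stratification: oversample addresses
    flagged as likely headed by an 18-34-year-old. `young_rate` is the share
    of the n2 second-phase selections taken from the flagged stratum.
    """
    young = [a for a in phase1 if a["likely_18_34"]]
    other = [a for a in phase1 if not a["likely_18_34"]]
    n_young = min(len(young), round(n2 * young_rate))
    sample = random.sample(young, n_young) + random.sample(other, n2 - n_young)
    # Record each address's phase-2 selection probability so the oversampling
    # can be undone later when sampling weights are computed.
    for a in sample:
        if a["likely_18_34"]:
            a["p_phase2"] = n_young / len(young)
        else:
            a["p_phase2"] = (n2 - n_young) / len(other)
    return sample
```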

B.1.b.3 Within-Household Selection

A number of respondent selection methods have been tested for ABS mail surveys, including for the Behavioral Risk Factor Surveillance System (BRFSS). [14] Although past studies have indicated a tendency for the wrong person to complete the survey when applying birthday methods of within-household selection [15], a recent evaluation of birthday selection methods for ABS surveys found a small degree of self-selection in larger households; however, the impact on the substantive estimates was small. [16] Considering the low impact on the overall estimates and the simplicity of implementing the birthday methods, we will select the adult in the household who has the next birthday to complete the survey (as opposed to the last birthday or a split next/last sample). The within-household selection instructions will be included in all contacts with the household.


B.1.c. Response Rate

Table 2 details our assumptions for sample size and response rate by data collection wave. These assumptions are based on similar contact waves and response rates achieved for the 2016 Motor Vehicle Occupant Safety Survey (MVOSS, Version A), a national survey about driving behaviors and attitudes similar in length to the NSBPAB. We expect to draw an initial sample of 25,000 addresses and assume that 2,057 (8%) of these records will prove non-deliverable, as determined from returned mail after the first two mailings. We expect that, following the five-contact protocol, the 22,943 remaining valid records will yield an estimated 7,523 returned surveys, for a response rate of 30% based on the American Association for Public Opinion Research (AAPOR) response rate formula #1 (RR1). (Previous NSBPAB surveys were administered using random-digit-dial telephone sampling instead of the current mode of mail with push-to-web. As a comparison to the current study, the 2002 survey achieved a 27% response rate, while the 2012 survey had a 25.32% response rate for the landline cross-section, 13.81% for the cell phone sample, and 22.99% for the landline oversample.)


Table 2. Expected Data Collection Quantities and Response Rates

| Contact Wave | Number of Records | Expected Response Rate | Expected Completes |
|---|---|---|---|
| Web letter invitation | 25,000 | 10% | 2,500 |
| Reminder postcard #1 | 25,000 | 6% | 1,500 |
| Non-deliverable | (-2,057) | | |
| First survey mailing | 18,943 | 10% | 1,894 |
| Reminder postcard #2 | 18,943 | 5% | 947 |
| Second survey mailing | 17,049 | 4% | 682 |
| TOTAL RESPONDENTS | | 30.0% | 7,523 |
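As a cross-check of the arithmetic in Table 2, the expected completes and the overall rate can be reproduced in a few lines of Python; the figures below are taken directly from the table.

```python
# Reproduce the Table 2 waterfall: expected completes per contact wave and
# the overall rate (returns divided by the full 25,000 drawn sample, which is
# how the 30% figure is obtained here).
waves = [
    ("Web letter invitation", 25_000, 0.10),
    ("Reminder postcard #1",  25_000, 0.06),
    ("First survey mailing",  18_943, 0.10),  # after 2,057 non-deliverables
    ("Reminder postcard #2",  18_943, 0.05),
    ("Second survey mailing", 17_049, 0.04),
]
completes = {name: round(records * rate) for name, records, rate in waves}
total = sum(completes.values())
print(completes)              # ..., 'Second survey mailing': 682
print(total, total / 25_000)  # 7523  0.30092
```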



B.2. Describe the procedures for the collection of information.

B.2.a Data Collection Protocol

The Contractor, ICF, will select a national, stratified random sample of households from the DSF, as described in the previous section. Each household will be mailed an initial letter requesting participation in the survey. The survey will employ the next birthday method for random selection of one respondent aged 18 or over from the household.

Web response is NHTSA’s preferred method for the survey. Therefore, the survey will initially offer only a web response mode: the letter will ask the selected household member to go to a designated website to take the survey. Each letter/address will contain a unique Master ID that will be used to access the website and will help track whether someone from a household completed the survey. For those who do not respond, a series of additional contact waves will add alternative modes of responding. The contact waves are presented in Table 3. Households that respond to or refuse the survey will be removed from subsequent contacts.


Table 3. NSBPAB Contact Protocol

| Wave | Step and Mode | Contents | Schedule |
|---|---|---|---|
| 1 | Invitation letter offering web response | Cover letter with PIN, hyperlink to web survey, instructions, $1 pre-incentive | Day 1 |
| 2 | Reminder postcard #1 | Postcard, black and white | Day 7 |
| 3 | Mailed package offering mail response | Cover letter, 16-page printed questionnaire, prepaid return envelope | Day 28 |
| 4 | Reminder postcard #2 | Postcard, black and white | Day 35 |
| 5 | Mailed replacement package offering mail response | Cover letter, 16-page printed questionnaire, prepaid return envelope | Day 49 |
| | Close data collection | | Day 91 |
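The schedule and stop rules in Table 3 amount to a simple eligibility filter. The Python sketch below (field names and dates ours) illustrates how pending cases advance through the waves while completes and refusals are dropped from later mailings.

```python
from datetime import date, timedelta

# Offsets correspond to the Table 3 schedule, with Day 1 as the field start.
SCHEDULE = [
    ("web invitation letter",      0),   # Day 1
    ("reminder postcard #1",       6),   # Day 7
    ("mail survey package",       27),   # Day 28
    ("reminder postcard #2",      34),   # Day 35
    ("replacement mail package",  48),   # Day 49
]

def mailings_due(cases, field_start, today):
    """Return (master_id, step) pairs to mail today; `cases` maps Master ID
    to 'pending', 'complete', or 'refused'. Only pending cases are mailed."""
    due_steps = [s for s, off in SCHEDULE
                 if field_start + timedelta(days=off) == today]
    return [(mid, s) for s in due_steps
            for mid, status in cases.items() if status == "pending"]

cases = {"A1001": "pending", "A1002": "complete", "A1003": "refused"}
print(mailings_due(cases, date(2022, 9, 1), date(2022, 9, 7)))
# [('A1001', 'reminder postcard #1')] -- completes and refusals are skipped
```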




B.2.b Spanish-Language Data Collection

We will send materials in both Spanish and English to households highly likely to speak Spanish. Bilingual materials will be sent to households in block groups where limited English-speaking households [17] make up at least 15% of all households in the block group. We estimate that these block groups contain over 50% of the linguistically isolated Spanish-speaking population. Areas outside these block groups will receive an English-language letter that contains information at the bottom, in Spanish, on how to access and complete the survey in Spanish. This approach is based on evidence from the Health Information National Trends Survey (HINTS) indicating that sending English and Spanish materials to everyone depresses response rates compared with sending bilingual materials only to households most likely to speak Spanish (Westat, 2014). The web survey will offer the option to complete the survey in English or Spanish.

B.2.c Precision of Sample Estimates

The objective of the sampling procedures described above is to produce a random sample of the target population. This means that with a randomly drawn sample, one can make inferences about population characteristics within certain specified limits of certainty and sampling variability.


The margin of error, d, of the sample estimate of a population proportion, P, equals:

$d = t_{\alpha} \cdot se(P)$

where $t_{\alpha}$ equals 1.96 for $1 - \alpha = 0.95$, and the standard error of P equals:

$se(P) = \sqrt{\dfrac{\mathrm{deff} \cdot P(1 - P)}{n}}$

Where:

deff = the design effect arising from the combined impact of the random selection of one eligible individual from a sample household and unequal weights from other aspects of the sample design and weighting methodology, and

n = the size of the sample (i.e., number of interviews)


Using these formulas, the margin of error for a sample size of 7,500 interviews is d = 0.015, using an average deff of 1.75 and setting P equal to 0.50. We expect the design effect for the NSBPAB to be similar to the design effect for the MVOSS, which was 1.73 and 1.76 for the two survey versions.
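For a quick numeric check, the calculation can be scripted; this minimal Python sketch (function name ours) reproduces the margins quoted above, in Table 1, and for the 10% subgroup discussed below.

```python
from math import sqrt

def margin_of_error(n, deff=1.75, p=0.5, t=1.96):
    """d = t * sqrt(deff * p * (1 - p) / n), the formula above."""
    return t * sqrt(deff * p * (1 - p) / n)

print(round(margin_of_error(7_500), 3))  # 0.015 -> +/-1.5 points nationally
print(round(margin_of_error(500), 3))    # 0.058 -> smallest regional stratum
print(round(margin_of_error(750), 3))    # 0.047 -> a 10% subgroup
```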


The total sample size for the survey is also large enough to permit estimates for subgroup analysis including age, gender, race/ethnicity, bicyclist and pedestrian status, and other demographics. A subgroup that represents at least 10% of the total sample will have a 95% confidence interval of +/-4.7%. Table 4 includes expected 95% error margins for demographic groups assuming the sample size is proportional to the population.


Table 4. Expected 95% Error Margins for Subgroups

| Demographic | Population Percentage | +/-95% Confidence Interval |
|---|---|---|
| Gender | | |
| Male | 49% | 2.1% |
| Female | 51% | 2.1% |
| Age Group | | |
| 18-34 | 30% | 2.7% |
| 35-54 | 33% | 2.6% |
| 55+ | 37% | 2.5% |
| Race/Ethnicity | | |
| Hispanic | 16% | 3.7% |
| Non-Hispanic White | 64% | 1.9% |
| Non-Hispanic Black | 12% | 4.3% |
| Non-Hispanic Other | 9% | 5.0% |
| Biker/Walker Status† | | |
| Walked and biked in past year | 35% | 2.5% |
| Walked in past year, biked over a year ago | 45% | 2.2% |
| Walked in past year, never ridden a bike | 10% | 4.7% |
| Never walked, never biked (disabled) | 10% | 4.7% |

† Bicyclist/Pedestrian status distribution based on a number of sources. [18]


B.2.d Sample Weighting

The NSBPAB will be weighted to reduce any potential bias related to differential selection probabilities and non-response. The weighting process will compute:

  • Sampling weights that incorporate the probability of selection for households and the probability of selection of a respondent within a sample household;

  • Weight adjustments for non-response; and

  • Population calibration.

Sampling weights are the products of the reciprocals of the probabilities of selection associated with two sampling stages: (1) the selection of households from the ABS frame and (2) the selection of respondents within a household. The first-stage probabilities will be unequal to the extent that the sample allocation oversamples some of the regional strata based on age and Hispanic household likelihood. The address probability of selection is multiplied by the within-household probability of selection, which is based on the number of adults in the household as reported in the survey.

We will apply weighting class adjustments designed to minimize the potential for non-response bias. These adjustments will be informed by the non-response analysis described in Section B.3.2. Specifically, we will define the weight adjustment classes (or cells) using the predictors that are most significant in the propensity models from that analysis. In general, adjustment classes will be homogeneous in terms of response behavior.

The weights will be calibrated based on known population totals for key demographics such as gender, age categories, education, marital status and race/ethnicity. The calibration will be based on raking, an iterative ratio adjustment of the sample to the population based on multiple dimensions. Because this step will align the survey respondents with the population, we will include non-drivers and drivers in the adjustment. Therefore, it is critical to collect basic demographic information from the non-drivers selected in the sample.
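A minimal Python sketch of this weighting sequence follows: base weights are formed as inverse selection probabilities times the number of adults in the household, then raked to demographic margins. The selection probabilities and population totals here are illustrative placeholders; the real calibration would use official control totals.

```python
import numpy as np

def rake(w, sample_cats, pop_margins, iters=50):
    """Iterative ratio adjustment (raking): scale weights so each dimension's
    weighted distribution matches its population margin, cycling until stable.
    `sample_cats`: dimension name -> array of category codes per respondent.
    `pop_margins`: dimension name -> {category: population total}.
    """
    w = w.astype(float).copy()
    for _ in range(iters):
        for dim, cats in sample_cats.items():
            for cat, target in pop_margins[dim].items():
                mask = cats == cat
                current = w[mask].sum()
                if current > 0:
                    w[mask] *= target / current
    return w

# Base weight: inverse address selection probability times number of adults
# (one adult sampled per household via the next-birthday method).
p_address = np.array([1/5000, 1/5000, 1/2000, 1/2000])  # illustrative
n_adults  = np.array([2, 1, 3, 2])
base_w = n_adults / p_address

gender = np.array(["M", "F", "F", "M"])
age    = np.array(["18-34", "55+", "35-54", "18-34"])
final_w = rake(base_w,
               {"gender": gender, "age": age},
               {"gender": {"M": 49_000, "F": 51_000},   # illustrative totals
                "age": {"18-34": 30_000, "35-54": 33_000, "55+": 37_000}})
print(final_w.round(1))  # weights now reproduce both sets of margins
```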


B.3. Describe methods to maximize response rates.

B.3.1 Maximizing Response

NHTSA is taking a number of steps to boost the NSBPAB response rate. Foremost will be NHTSA’s use of the multi-mode approach, where different options for responding are sequentially presented to prospective respondents (web and mail). This offers greater opportunity for people to use a response mode that they prefer and with which they are comfortable, which should enhance participation.

The protocol includes up to five mailings to non-responding households. An incentive experiment conducted for the MVOSS found that a $1 pre-incentive and a $5 promised incentive provided the most cost-efficient way to increase response when compared with a $2 pre-incentive and a $10 post-incentive. [19] Thus, the first contact will include a $1 pre-incentive and a $5 incentive promised upon completion. Because the MVOSS incentive results are now dated, a $10 post-incentive will also be tested during the Mini-Pilot. The results of the Mini-Pilot will determine whether a $1 pre-incentive followed by a $10 post-incentive is more effective at increasing response than a $1 pre-incentive followed by a $5 post-incentive.

In contacting respondents, ICF will use NHTSA logos and branding on all outgoing mail materials, including letters, postcards, and envelopes. People will often open government envelopes out of curiosity as to why they are being contacted by the government. As stated in the previous section (B.2.b), the invitation to participate in the survey will include wording in Spanish for those who are entirely or predominantly Spanish speaking so that they are not excluded from the survey.

In adapting the questionnaires to multi-mode administration, the project team will lay out the questions in accordance with the heuristics people follow when interpreting visual cues.

Another facilitator of response will be adaptation of the web-based questionnaires for mobile platforms (e.g., smartphones, tablets) so that prospective respondents who wish to use such devices when taking the survey are not deterred. Once a questionnaire is programmed, the platform will automatically adapt the presentation to optimize completion on a mobile device.

The survey will include features to assist respondents so that they do not become frustrated and terminate their participation before submitting a completed questionnaire. For the web response mode, these will include easy navigation from page to page and the capability for respondents to pause, leave the system, and later re-enter at the departure point without losing any previously entered information.

During administration, ICF will maintain support for the respondents via an e-mail help desk and a toll-free phone number. Clear instructions for accessing this support will be provided on paper materials and the web survey.

B.3.2 Non-response Analysis

Non-response bias will occur if there are differential response rates for certain subgroups of the sample and these subgroups differ with respect to the substantive survey data.  Differential response occurs when one subgroup responds to the survey at a higher rate than another subgroup (e.g., males vs. females). Therefore, the non-response analysis will focus on the distribution of respondents as compared to the expected distribution based on the population.

The analysis of non-response bias for the NSBPAB will follow three tracks.

  1. Bivariate analyses. First, the analysis will compare the distribution of survey respondents with known population distributions. This comparison will focus on key demographic variables such as race/ethnicity, gender, age groups, and education. Because many of these same factors will be used during post-stratification in the survey weighting process, the analysis will consider un-weighted data and data that are weighted prior to the post-stratification step, as well as using the final adjusted weights. Note that these analyses will capitalize on the augmented frame data (e.g., age flags used in the oversampling of young adults) as well as on Census data.

  2. Multivariate analyses. The demographic variables found to be significant in these bivariate (or subgroup) analyses will then be included in multivariate logistic models. In these logistic models, usually called propensity models, the dependent variable is a dichotomous (0-1) indicator for response, so the logistic model may be expressed in terms of the probability of response. The variables that prove significant in these propensity models will be considered for weight adjustments for non-response (i.e., will be candidates for defining weight adjustment classes; see the sketch at the end of this section). This approach will ensure that weight adjustments minimize the potential for non-response bias.

  3. Comparisons across waves of respondents. The third set of analyses will compare responses obtained using different levels of effort. This approach typically compares early respondents to the initial survey waves (Waves 1 to 3) with respondents to the later waves (Waves 4 and 5). The idea is that the late respondents, a group of reluctant or perhaps recalcitrant respondents, statistically resemble non-respondents.

The non-response analysis will inform the weighting adjustments that correct for a sample whose composition differs from that of the population. These weighting adjustments will mitigate the risk of non-response bias to the extent that the substantive survey data are correlated with the observed differences between respondents and non-respondents.
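The propensity-model track lends itself to a short sketch. The following Python example uses simulated data and illustrative covariates (they stand in for the frame and Census variables named above); it fits a logistic response-propensity model and cuts the estimated propensities into quintile classes of the kind used for weight adjustment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: regress a 0/1 response indicator on frame/Census covariates;
# significant predictors become candidates for weighting classes.
rng = np.random.default_rng(0)
n = 22_943
X = np.column_stack([
    rng.integers(0, 2, n),       # likely-18-34 frame flag (illustrative)
    rng.integers(0, 2, n),       # high-% Hispanic block group (illustrative)
    rng.normal(0, 1, n),         # e.g., standardized block-group income
])
responded = rng.binomial(1, 0.30, n)  # placeholder 30% response outcome

model = LogisticRegression().fit(X, responded)
propensity = model.predict_proba(X)[:, 1]

# Quintiles of estimated response propensity are one common way to form
# homogeneous weight-adjustment classes.
classes = np.digitize(propensity, np.quantile(propensity, [0.2, 0.4, 0.6, 0.8]))
```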


B.4. Describe any tests of procedures or methods to be undertaken.

The questions on the NSBPAB have been cognitively tested and the web and paper survey instruments will be subjected to usability testing with 18 participants.

Additionally, a methodological experiment is planned for the Mini-Pilot Study (see Section B.4.4).

B.4.1 Cognitive Testing of the Draft Survey Instrument

Cognitive testing uses in-depth interviewing to understand the mental processes respondents use to answer survey questions, to evaluate questions against measurement objectives, and to measure the accuracy of the resulting response data. In March 2021, ICF cognitively tested the latest version of the NSBPAB questionnaire, recruiting adult participants through free and/or paid platforms to ensure nine completed interviews. A total of five cyclists, three non-cyclists, and one person with a physical disability were invited to participate in a cognitive interview.

ICF conducted the interviews remotely using screensharing and audioconference software. Each interview lasted between 60 and 90 minutes, and participants were provided with a $75 gift code as a thank you for their participation. Cognitive interview results were compiled in a report with recommended changes to the questionnaire such as refining images and making wording changes. The questionnaire was revised based on the recommended changes.


B.4.2 Web-based Questionnaire Usability Testing

During an in-person session scheduled for early 2022, up to nine mode-specific participants will be asked to follow the instructions in the invitation letter as if they were at home, starting with going to the website and accessing the survey. Each participant will then be asked to complete specific portions of the survey while thinking aloud. The facilitator will note errors and watch for hesitation, confusion, or frustration. Web-based questionnaire testing will include both desktop and mobile devices. Tests will be recorded to identify:

  • Problems with following invitation letter instructions and/or accessing the survey;

  • Problems with navigating screens, sections, and questions;

  • Confusion about where and when responses are saved and returning to the survey later; and

  • Interface elements (e.g., icons, menus, buttons, forms, messages, warnings, alerts) that participants did not notice or understand or that do not behave as participants expect.

Adjustments will be made to the web instrument based on the findings of this usability testing to correct for the above issues.

B.4.3 Paper Questionnaire Usability Testing

During these in-person sessions, up to nine participants will be given a copy of the appropriate invitation/reminder letter and the paper survey packet and asked to complete survey items while thinking aloud. Tests will be recorded to identify:

  • Not marking answers in the correct location or answers not fitting in the space provided;

  • Missing or misunderstanding instructions (e.g., choosing multiple responses in a case where only one response is allowed); and

  • Difficulty following skip patterns or answering questions as “not applicable.”

Adjustments will be made to the paper instrument based on the findings of this usability testing to correct for the above issues.

B.4.4 Pilot Testing

The pilot test will be used to exercise the entire survey administration system prior to launching the full study. We propose conducting a Mini-Pilot with 150 respondents, in both English and Spanish, using the approved web and paper survey prototypes and the same data collection protocols as the full study. While all records will receive a $1 pre-incentive, we will include two experimental conditions varying the post-incentive: one group will receive $5 and the other $10. We will split the sample equally across the two conditions to evaluate which approach is more effective.
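Once the pilot data are in, the two incentive arms can be compared with a standard two-proportion test. The Python sketch below uses placeholder counts, not pilot results; the per-arm mailing figure is hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Compare response rates between the $5 and $10 post-incentive arms with a
# two-proportion z-test. Counts are illustrative placeholders only.
completes = [22, 30]    # returned surveys in the $5 and $10 arms
mailed = [260, 260]     # addresses fielded per arm (hypothetical)
stat, pvalue = proportions_ztest(completes, mailed)
print(round(stat, 2), round(pvalue, 3))  # small pilots have limited power
```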


B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


The following individuals have reviewed technical and statistical aspects of procedures that will be used to conduct the 2022 NSBPAB:


Kristie Johnson

NHTSA Project Manager / COR(TO)

Research Psychologist

1200 New Jersey Avenue, SE

Washington, DC 20590

202-366-2755

E-mail: [email protected]


Olivia Saucier

ICF, Survey Manager (Contractor)

126 College St

Burlington, VT 05401

703-934-3004

[email protected]


Kisha Bailly

ICF, Quality Assurance Reviewer (Contractor)

530 Gaither Road, Suite 500

Rockville, MD 20850

612-455-7471

[email protected]


Randy ZuWallack

ICF, Senior Statistician (Contractor)

126 College St

Burlington, VT 05401

802-264-3724

[email protected]






[1] The Abstract must include the following information: (1) whether responding to the collection is mandatory, voluntary, or required to obtain or retain a benefit; (2) a description of the entities who must respond; (3) whether the collection is reporting (indicate if a survey), recordkeeping, and/or disclosure; (4) the frequency of the collection (e.g., bi-annual, annual, monthly, weekly, as needed); (5) a description of the information that would be reported, maintained in records, or disclosed; (6) a description of who would receive the information; (7) if the information collection involves approval by an institutional review board, include a statement to that effect; (8) the purpose of the collection; and (9) if a revision, a description of the revision and the change in burden.

[2] Royal, D., & Miller-Steiger, D. (2008, August). National survey of bicyclist and pedestrian attitudes and behavior, volume I: Summary report (Report No. DOT HS 810 971). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1844

Royal, D., & Miller-Steiger, D. (2008, August). National survey of bicyclist and pedestrian attitudes and behavior, volume II: Findings report (Report No. DOT HS 810 972). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1845

Royal, D., & Miller-Steiger, D. (2008, August). National survey of bicyclist and pedestrian attitudes and behavior, volume III: Methods report (Report No. DOT HS 810 972). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1845

[3] Schroeder, P., & Wilbur, M. (2013, October). 2012 National survey of bicyclist and pedestrian attitudes and behavior, volume 1: Summary report (Report No. DOT HS 811 841 A). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1956

Schroeder, P., & Wilbur, M. (2013, October). 2012 National survey of bicyclist and pedestrian attitudes and behavior, volume 2: Findings report (Report No. DOT HS 811 841 B). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1957

Schroeder, P., & Wilbur, M. (2013, October). 2012 National survey of bicyclist and pedestrian attitudes and behavior, volume 3: Methodology report (Report No. DOT HS 811 841 C). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1958

[4] Link, M. W., Battaglia, M. P., Frankel, M. R., Osborn, L., & Mokdad, A. H. (2008). A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72, 6-27.

[5] Iannacchione, V. G. (2011). The changing role of address-based sampling in survey research. Public Opinion Quarterly, 75(3), 556-575.

[6] Iannacchione, V. G., Staab, J. M., & Redden, D. T. (2003). Evaluating the use of residential mailing addresses in a metropolitan household survey. Public Opinion Quarterly, 67(2), 202-210.

[7] Link, M. W., Battaglia, M. P., Frankel, M. R., Osborn, L., & Mokdad, A. H. (2008). A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72, 6-27.

[8] Iannacchione, V. G. (2011). The changing role of address-based sampling in survey research. Public Opinion Quarterly, 75(3), 556-575.

[9] American Association for Public Opinion Research (AAPOR), Task Force on Address-based Sampling. (2016). Address-based sampling. Retrieved from www.aapor.org/Education-Resources/Reports/Address-based-Sampling.aspx

[10] The calculation for the maximum possible error, achieved for an estimated percentage of 50%, is also premised on a design effect of 1.75 due to weighting.

[11] United States Census Bureau. (2021). American Community Survey data. www.census.gov/programs-surveys/acs

[12] Boyle, J., Weiss, A., Schroeder, P., Meyers, M., & Johnson, K. (2013). Reliability of auxiliary data in RDD surveys: NHTSA 2012 Distracted Driving Survey. Presented at the 68th annual conference of AAPOR, Boston, MA.

[13] DiSogra, C., Dennis, J. M., & Fahimi, M. (2010). On the quality of ancillary data available for address-based sampling. In Proceedings of the Survey Research Methods Section of the Joint Statistical Meetings, 417483.

[14] Battaglia, M. P., Link, M. W., Frankel, M. R., Osborn, L., & Mokdad, A. H. (2008). An evaluation of respondent selection methods for household mail surveys. Public Opinion Quarterly, 72(3), 459-469.

[15] Olson, K., Stange, M., & Smyth, J. (2014). Assessing within-household selection methods in household mail surveys. Public Opinion Quarterly, 78(3), 656-678.

[16] Boyle, J., Tortora, R., Higgins, B., & Freedner-Maguire, N. (2017). Mode effects within the same individual between web and mail administration. AAPOR 72nd annual conference, May 18-21, 2017.

[17] The U.S. Census Bureau considers this to be households where no residents aged 14 years or older speak English well.

[18] Schroeder, P., & Wilbur, M. (2013, October). 2012 National survey of bicyclist and pedestrian attitudes and behavior, volume 2: Findings report (Report No. DOT HS 811 841 B). National Highway Traffic Safety Administration. https://rosap.ntl.bts.gov/view/dot/1957

Bialik, K. (2017, July 27). 7 facts about Americans with disabilities. Pew Research Center. www.pewresearch.org/fact-tank/2017/07/27/7-facts-about-americans-with-disabilities

Chalabi, M. (2015, April 16). How many Americans don’t know how to ride a bike? FiveThirtyEight. https://fivethirtyeight.com/features/how-many-americans-dont-know-how-to-ride-a-bike/

Lange, D. (2021, March 4). Cycling - Statistics & facts. Statista. www.statista.com/topics/1686/cycling/#dossierSummary

Schmitt, A. (2015, March 4). Survey: 100 million Americans bike each year, but few make it a habit. StreetsBlogUSA. https://usa.streetsblog.org/2015/03/04/survey-100-million-americans-bike-each-year-but-few-make-it-a-habit/

Burrows, M. (2019, May 14). Younger workers in cities more likely to bike to work. U.S. Census Bureau. www.census.gov/library/stories/2019/05/younger-workers-in-cities-more-likely-to-bike-to-work.html

Wilson, K. (2020, November 25). Study: The biggest COVID-19 bike booms weren’t where you think. StreetsBlogUSA. https://usa.streetsblog.org/2020/11/25/study-the-biggest-covid-19-bike-booms-werent-where-you-think/

[19] Bailly, K., Higgins, B., Freedner-Maguire, N., & Boyle, J. (2017). Impact of pre- and post-incentives on response rates to a web and mail survey using an address-based sample frame. AAPOR 72nd annual conference, May 18-21, 2017.
