Motor Vehicle Occupant Safety Survey

OMB: 2127-0645

TABLE OF CONTENTS


SUPPORTING STATEMENT


B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe the potential respondent universe and any sampling or other respondent selection method to be used


2. Describe the procedures for the collection of information


3. Describe methods to maximize response rates and to deal with issues of non-response


4. Describe any tests of procedures or methods to be undertaken


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design


SUPPORTING STATEMENT


B. Collections of Information Employing Statistical Methods


The proposed Motor Vehicle Occupant Safety Survey (MVOSS) will employ statistical methods to analyze the information collected from respondents. The following sections describe the procedures for respondent sampling and data tabulation. These procedures are a major departure from the methodology employed in previous administrations of the MVOSS, all of which were conducted as random-digit-dial (RDD) telephone surveys in which respondents were interviewed on landline telephones; the sampling frame was active residential telephone exchanges within the designated geographic region. The newly proposed MVOSS will instead be administered using an Address-Based Sample (ABS) and will offer multiple modes of responding: Web as the primary response mode, with paper questionnaire and telephone as alternatives. The change from RDD sampling with telephone interviewing was necessitated by declining response rates and under-representation of key groups in recent telephone surveys.


The survey will include usability testing and a pilot test prior to the full administration of the survey. Details regarding those tests are provided in Section B.4.


B.1. Describe the potential respondent universe and any sampling or other respondent selection method to be used.


a. Respondent Universe


The respondent universe is the population 16 and older residing in residential households within the United States (all 50 States and the District of Columbia). This is the age group that is age-eligible to drive motor vehicles. While occupant protection includes both drivers and non-drivers, there is evidence that drivers set the tone within the vehicle. It is therefore important that the sampling frame encompass the driving population; this emphasis has also led to a driver orientation in a number of the survey items.


The MVOSS is composed of two questionnaires, each administered to an independently drawn sample. The samples will be stratified by the ten NHTSA Regions (NHTSA segments the country into ten Regions for programmatic outreach). The population size for the 16 and older age range in each NHTSA Region is provided in Table 1. Sample allocation will be proportional to the population distribution across NHTSA Regions.


Table 1 - Estimates of the Resident Population 16 and Older by NHTSA Region: July 1, 2011

Region    | States                     | Population 16 and Older | Proportion | Sample
Region 1  | CT, MA, ME, NH, RI, VT     | 11,776,425              | 4.8%       | 288
Region 2  | NJ, NY, PA                 | 33,047,698              | 13.4%      | 804
Region 3  | DE, DC, KY, MD, NC, VA, WV | 24,949,337              | 10.1%      | 606
Region 4  | AL, FL, GA, SC, TN         | 35,756,033              | 14.5%      | 870
Region 5  | IL, IN, MI, MN, OH, WI     | 41,031,408              | 16.7%      | 1002
Region 6  | LA, MS, NM, OK, TX         | 29,941,685              | 12.2%      | 732
Region 7  | AR, IA, KS, MO, NE         | 13,147,496              | 5.3%       | 318
Region 8  | CO, NV, ND, SD, UT, WY     | 9,818,060               | 4.0%       | 240
Region 9  | AZ, CA, HI                 | 35,670,263              | 14.5%      | 870
Region 10 | AK, ID, MT, OR, WA         | 11,095,570              | 4.5%       | 270
Total     |                            | 246,233,975             | 100%       | 6000

Taken from Table 1. Estimates of the Resident Population by Selected Age Groups for the United States, States, and Puerto Rico: July 1, 2011. Accessed March 10, 2014 at: http://www.census.gov/popest/data/state/asrh/2011/index.html



b. Respondent Sampling


This survey will produce national estimates that will be calculated from information provided by two independently drawn probability-based samples, one for each questionnaire to be administered. The survey will use an Address-Based Sampling (ABS) approach to sample selection. ABS as a means for sampling households has developed as a result of the commercial availability of the Computerized Delivery Sequence File (CDSF) used by the U.S. Postal Service (USPS). The Delivery Sequence File (DSF) is a computerized file that contains all delivery point addresses serviced by the USPS with the exception of general delivery. Each delivery point is a separate record that conforms to all USPS addressing standards. The initial studies of the DSF estimated that it provided coverage of approximately 97% of the household population (e.g., Iannacchione, Staab, & Redden, 2003¹). However, the DSF coverage in rural areas tends to be lower than in urban areas (Link et al., 2006²). Nonetheless, the DSF address frame provides a near complete sampling frame for household population surveys in the United States.


The DSF cannot be obtained directly from the USPS. It must be purchased through a licensing agreement with private vendors. These vendors are responsible for updating the address listing from the USPS, and augmenting the addresses with information (e.g., name, telephone number) from other data sources. The Contractor that will implement the MVOSS, ICF International, will obtain the DSF augmented sample from Marketing Systems Group (MSG). By geocoding an address to a Census block, the MSG file augments the DSF by merging Census and other auxiliary information from the Census data files and other external data sources. MSG appends household, geographic, and demographic data to the frame.


MSG maintains a monthly updated, internal installation of the DSF from the Postal Service. By applying a series of enhancements to the DSF, MSG evolves this database of mail delivery into a sampling frame capable of accommodating multiple layers of stratification or clustering when selecting probability-based samples. In particular, address enhancements provided by MSG include amelioration of some of the known coverage problems associated with the DSF, particularly in rural areas where more households rely on P.O. Boxes and inconsistent address formats. In addition, MSG will provide telephone look-up for sample addresses, using multiple commercial databases to secure the highest possible match rates.


There were approximately 139 million residential addresses in the DSF as of February 2014. This excludes business addresses. It also excludes addresses labelled as “No Stat” which are generally buildings for which building permits have been obtained but mail delivery has not commenced. The remaining residential addresses are classified according to type of delivery point. These delivery point classifications include:

  • City style/rural route address

  • P.O. Box

  • Seasonal

  • Throwback

  • Vacant

  • Drop points

  • Educational

The vast majority of delivery points are city style or rural route addresses (117.5 million). These addresses would normally be included in any ABS sample, and will be included here. However, there are a number of issues to be considered in the decision to include or exclude other types of delivery points in an ABS sample.


The second most common type of delivery point is the Post Office Box. However, households may have both street/rural route mail delivery and a post office box. Fortunately, the Delivery Sequence File now classifies PO Boxes as Only Way to Get Mail (OWGM) or traditional Post Office Boxes (which also have delivery at a street address). The MVOSS will limit the sampling frame to OWGM Post Office Boxes (1.4 million) and exclude traditional Post Office Boxes (14.1 million), since people holding the latter are already in the sampling frame through their city style, rural route, or throwback addresses. Leaving the duplicate delivery channels in the sampling frame and weighting the completed interviews by the number of available channels would also make calculation of the household response rate more problematic, because some PO Boxes are not in use or are used by businesses/organizations.


Some addresses are classified as seasonal (approximately 850,000). This classification indicates a second home. Using the same rationale as post office boxes, the MVOSS will exclude seasonal addresses from the sampling frame as the household with a seasonal home also has another address in the frame for their primary residence. If the respondent is currently living in the seasonal home, it is likely that any mail to the primary residence will be forwarded to the seasonal home.


There are around 260,000 throwback addresses in the DSF. These are city style addresses that opt to have their mail forwarded to a PO Box. These throwback addresses will be included in the sampling frame since this is the only address through which they can be included in the survey.


Some addresses are classified as vacant (3.6 million). However, in their study of address-based samples in four states, Battaglia and his colleagues found that 8.5% of housing units classified as vacant produced a completed interview. MVOSS will include vacant addresses in the sampling frame in order to improve coverage.


Drop points (approximately 725,000) are building addresses with multiple deliveries and no separate addresses within the building (i.e., no apartment numbers). Drop units (the individual delivery units within drop points) represent about two percent of all residential addresses. In actual mail delivery, the drop units have names attached so that mail can be appropriately routed within the building by tenant or landlord. However, the commercial DSF file provides only the number of drop units within a drop point address. The most common approaches to handling drop points in address-based samples are either to exclude the drop points (or those with more than a few drop units) or to include all drop units for any selected drop point, since there is no basis for selection within the drop point. MVOSS will include drop points in the sampling frame. However, rather than including in the sample all drop units within selected drop points, the survey will explore the feasibility of randomly selecting a single drop unit within selected drop points, as described in the methods for the pilot test in Section B.4.


Some addresses are classified as educational (approximately 95,000). These addresses are typically multi-unit housing structures, such as fraternities and off-campus student housing units. They are effectively a special type of drop point since there are not individual unit addresses within the buildings. They will be included in the sample like drop points, particularly given the importance of the young adult sample to this survey and its under-representation in most population surveys.
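To make the combined inclusion rules concrete, the following is a minimal Python sketch of how a frame-construction step might apply them to a DSF extract. The record layout and field names (e.g., delivery_type, owgm_flag) are hypothetical illustrations, not the vendor's actual coding.

```python
# Minimal sketch (not the Contractor's actual code) of the frame-inclusion
# rules described above, applied to a toy DSF extract. Field names such as
# "delivery_type" and "owgm_flag" are illustrative assumptions.

INCLUDED_TYPES = {"city_style", "rural_route", "throwback", "vacant",
                  "drop_point", "educational"}

def in_sampling_frame(address: dict) -> bool:
    """Apply the stated MVOSS inclusion rules to one address record."""
    dtype = address["delivery_type"]
    if dtype == "po_box":
        # Keep only Only-Way-to-Get-Mail boxes; traditional PO Box holders
        # are already covered by a street, rural route, or throwback address.
        return address.get("owgm_flag", False)
    if dtype == "seasonal":
        # Excluded: the household's primary residence remains on the frame.
        return False
    return dtype in INCLUDED_TYPES

dsf_extract = [
    {"delivery_type": "city_style"},                  # kept
    {"delivery_type": "po_box", "owgm_flag": True},   # OWGM: kept
    {"delivery_type": "po_box", "owgm_flag": False},  # traditional: dropped
    {"delivery_type": "seasonal"},                    # dropped
    {"delivery_type": "drop_point"},                  # kept
]
frame = [a for a in dsf_extract if in_sampling_frame(a)]  # 3 of 5 kept
```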


In drawing the sample, the ABS database is sorted by ZIP+4 within State; the sample vendor then divides this ordered universe into evenly sized intervals and selects one address at random within each interval. This selection procedure ensures a self-weighting sample of DSF addresses.
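A minimal sketch of that interval-selection procedure, assuming the frame is already sorted by ZIP+4 within State; this illustrates the described approach, not the vendor's actual code:

```python
import random

def interval_sample(frame, n):
    """Divide the sorted frame into n evenly sized intervals and draw one
    address at random within each; equal selection probabilities make the
    sample self-weighting."""
    step = len(frame) / n
    return [frame[random.randrange(int(i * step), int((i + 1) * step))]
            for i in range(n)]

# Toy usage: 10 draws from a 1,000-record frame (one per 100-record interval).
sample = interval_sample(list(range(1000)), 10)
```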


c. Response Rate


The most recent (2007) MVOSS achieved a response rate of 48% for both versions of the questionnaire for the general cross-sectional sample, with the younger-person over-sample being a few points lower. This was attained using RDD methods with samples drawn from a landline telephone sampling frame. A few studies have demonstrated that ABS self-administered surveys can achieve response rates comparable to RDD-based surveys. However, response rates for RDD surveys have declined since the MVOSS was last conducted, so the expected response rate for the next MVOSS is below the 2007 figure. NHTSA believes the multi-mode approach (Web; mail; telephone) will generate a vigorous response, and therefore considers 40% a reasonable expected response rate.


B.2. Describe the procedures for the collection of information.


a. Procedures for Collection of Information


The Contractor, ICF International, will select two independent, national, stratified random samples of households from the DSF, as described in the previous section. Each household will be mailed an initial letter requesting participation in the survey. As with previous MVOSSs, the survey will employ methods for random selection of one respondent from the household in order to produce population estimates, as opposed to household estimates. As described in Section B.4, the pilot test will compare two different approaches to within-household selection from among multiple eligible people: a single-stage last-birthday method versus a two-stage enumeration. A final decision as to which approach to employ will be determined from the results of the pilot test.


Web response is NHTSA’s preferred method for the survey. Therefore, the survey will initially offer only a Web response mode, where the letter requests the selected household member to go to a designated Website to take the survey. For those that do not respond, there will be a series of additional contact waves that will add alternative modes of responding. The contact waves will be as follows:


  • Wave 1 – A letter mailed to the household offering response by Web only.

  • Wave 2 – A package mailed to the household that continues to offer the Web mode but also includes a paper questionnaire that can be completed and mailed back instead.

  • Wave 3 – A postcard reminder.

  • Wave 4 – A package mailed to the household that offers the Web and mail response.

  • Wave 5 – Telephone contact of those for whom telephone numbers can be identified, with the interviews conducted by telephone. Households whose phone numbers cannot be identified will be sent a final package offering Web and mail response.


There is a question as to whether starting with a Web-only invitation is the optimal contact protocol, even when Web is the preferred response mode of the survey sponsor. Therefore, besides the method of within-household selection, the Pilot Test will also test two alternative methods of Wave 1 contact: the Web-only invitation versus the Web + Mail invitation. The test is described in Section B.4. The results may lead to modification of the above contact protocol.


The letters and postcards sent to the households across contact waves will be two-sided, with English on the front and Spanish on the back. Respondents will be instructed in Spanish that they can request a mail survey in Spanish and that the Web survey can be completed in Spanish. During telephone follow-ups, if a Spanish-speaking household is reached, the record will be flagged and a bilingual interviewer will make all subsequent attempts.


Parental consent will be obtained for respondents younger than 18. For the Web survey, respondents will be asked their age at the beginning of the survey. If under 18, parental consent will be required on the Web form before the respondent can continue. For the mail survey, assuming the one-stage sampling approach is used, a consent form will be included as part of the package. If a two-stage sampling process is used for in-house respondent selection, the ages of all household members will be obtained at the first stage, and a parental consent form will be provided in the second stage mailing to all households for which the selected respondent is under 18. During the telephone follow-ups, rostering will be performed by the interviewer. If the selected respondent is under 18, the interviewer will require verbal parental consent before conducting the interview with the minor.


ABS records will be matched to telephone numbers in order to carry out telephone interviewing during the final contact wave. It is expected that 50% to 60% of ABS records can be matched to telephone numbers. Interviewers will make a minimum of 15 attempts to reach an eligible household and interview an eligible respondent for each matched telephone number in the sample frame. Unless revised due to results from the Pilot Test experiment, household member selection from among multiple eligible persons will be accomplished by selecting the person with the most recent birthday. To maximize the likelihood of response, call attempts will be spread over three calling periods: weekday days, weekday evenings, and weekends. At least three attempts will be made in each period. The remaining six attempts will be made at what are determined to be the most productive times, while maintaining about 20% of the calling during the weekday daytime period. In addition:


  • Eligible persons initially refusing to participate will be re-contacted one additional time for attempted conversion; anyone who communicates that they do not want to take the survey at that point will not be contacted again.

  • If an answering machine is reached, messages will be left on every third attempt, conveying the study’s importance and leaving a toll-free number for verifying the project’s legitimacy and to complete the survey.

  • Trained bilingual interviewers will be available on every shift to conduct interviews with selected respondents who speak Spanish.

  • Systematic, unobtrusive electronic monitoring (at least 10% of all interviews) will be a routine and integral part of survey procedures for all interviewers.


It is important to this project that the samples contain sufficient numbers of respondents in younger age cohorts to:


  • Conduct subgroup analyses of adolescents and young adults (i.e., respondent ages 16-24) who are over-represented across a spectrum of highway safety problems (e.g., higher crash rates, greater non-use of seat belts, greater proclivities toward speeding, etc.); and


  • Provide a sufficient number of households having young children for analysis of the child safety seat modules plus other child-oriented questionnaire items.


Previous versions of the MVOSS employed over-sampling techniques because random samples of young people 16 and older obtained through telephone survey methodology yielded sub-optimal distributions of younger aged individuals. This new MVOSS will not employ over-sampling because ABS is expected to provide improved coverage of younger cohorts. However, if during the course of the fielding period it is found that sub-optimal numbers of younger people are completing the questionnaire, then auxiliary data in the sampling frame will be used to target follow-up efforts in this segment of the sample.


b. Precision of sample estimates


The objective of the sampling procedures is to produce a random sample of the target population. A random sample shares the properties and characteristics of the total population from which it is drawn, subject to a certain level of sampling error. This means that with a properly drawn sample we can make statements about the properties and characteristics of the total population within specified limits of certainty and sampling variability.


The confidence interval for sample estimates of population proportions, using simple random sampling without replacement, is calculated by the following formula:

$p \pm z_{1-\alpha/2} \cdot SE(p)$, where $SE(p) = \sqrt{\frac{pq}{n}}$

Where:

SE(p) = the standard error of the sample estimate for a proportion

p = some proportion of the sample displaying a certain characteristic or attribute

q = (1 - p)

n = the size of the sample

$z_{1-\alpha/2}$ = (1-α/2)-th percentile of the standard normal distribution (1.96 for 95% CI)


Sufficient sample will be drawn to achieve 6,000 completed interviews per questionnaire, which is the same number targeted by the previous MVOSSs. Under simple random sampling, the expected size of the sampling error for a sample size of 6,000 is ±1.3 percentage points assuming a characteristic near 50 percent. This is more than sufficient for a survey of this nature. However, the sample size of 6,000 persons per questionnaire was selected to permit detailed subsample analyses of attitudes, knowledge, and behavior in occupant protection areas. It is anticipated that these characteristics will vary by age and sex. Table 2 shows the expected distribution of the sample by age group and sex, and Table 3 shows the associated sampling error. Based on these calculations, a sample of 6,000 persons is sufficiently large to permit subsample analyses of most programmatic areas by age and sex.


However, stratification of the sample and the multi-mode approach add complexity to the design. Given a complex design, the margin of error, d, of the sample estimate of a population proportion, p, equals:

$d = t_{\alpha} \cdot SE(p)$

Where $t_{\alpha}$ equals 1.96 for 1 - α = 0.95, and the standard error of p equals:

$SE(p) = \sqrt{deff \cdot \frac{pq}{n}}$

Where:

deff = the design effect

n = the size of the sample


Table 2: Estimated July 2012 Population and Sample Distribution by Age and Sex

              | Total Population | Population % | Total Sample
Total (16+)   | 248,625,928      | 100          | 6000
Males (16+)   | 121,131,165      | 48.7         | 2923
  16-24       | 20,393,957       | 8.2          | 492
  25-44       | 41,512,399       | 16.7         | 1002
  45-64       | 40,409,764       | 16.3         | 975
  65+         | 18,815,045       | 7.6          | 454
Females (16+) | 127,494,763      | 51.3         | 3077
  16-24       | 19,405,934       | 7.8          | 468
  25-44       | 41,313,342       | 16.6         | 997
  45-64       | 42,445,176       | 17.1         | 1025
  65+         | 24,330,311       | 9.8          | 587

Sources: Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States, States, Counties, and Puerto Rico Commonwealth and Municipalities: April 1, 2010 to July 1, 2012 (U.S. Census Bureau, Population Division, June 2013); Annual Estimates of the Resident Population by Single Year of Age and Sex for the United States, States, and Puerto Rico Commonwealth: April 1, 2010 to July 1, 2012 (U.S. Census Bureau, Population Division, June 2013).



Table 3: Expected Sampling Error

Age   | Male (N)    | Female (N)  | Total (N)
16-24 | (492) ±4.4  | (468) ±4.5  | (960) ±3.2
25-44 | (1002) ±3.1 | (997) ±3.1  | (1999) ±2.2
45-64 | (975) ±3.1  | (1025) ±3.1 | (2000) ±2.2
65+   | (454) ±4.6  | (587) ±4.0  | (1041) ±3.0
Total | (2923) ±1.8 | (3077) ±1.8 | (6000) ±1.3

Assumes p = q at the 95% confidence level.


It is unknown what the design effect for a survey using this methodology will be. Based on other types of surveys, an estimated design effect of 1.5 appears reasonable. Using the above formulas, the margin of error for a sample size of 6,000 interviews is d = 0.015 (±1.5 percentage points) using a deff of 1.5 and setting p equal to 0.50. For a sample size of 454, which is the smallest sample size in Table 3, the margin of error is d = 0.056. The sample sizes are sufficiently large for subsample analyses.
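These figures can be checked directly from the margin-of-error formula above. A short Python sketch reproducing the quoted values (and one Table 3 entry, which assumes deff = 1):

```python
from math import sqrt

def margin_of_error(n, p=0.5, deff=1.0, z=1.96):
    """d = z * sqrt(deff * p * (1 - p) / n), per the formulas above."""
    return z * sqrt(deff * p * (1 - p) / n)

print(round(margin_of_error(6000), 3))            # 0.013 -> ±1.3 points, SRS
print(round(margin_of_error(6000, deff=1.5), 3))  # 0.015 -> ±1.5 points
print(round(margin_of_error(454, deff=1.5), 3))   # 0.056, smallest cell
print(round(margin_of_error(492), 3))             # 0.044 -> ±4.4 in Table 3
```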


c. Sample Weighting


Survey weights will be computed to support unbiased estimation from the respondent sample. Weights reduce bias due to differential selection probabilities and non-response. The weighting process will, in principle, compute:


  • Sampling weights that incorporate the probability of selection for households and the probability of selection of a respondent within a sample household,

  • Weight adjustments for non-response, and

  • Post-stratification adjustments.


The weight computation includes a continuous QC component that checks the sum (mean) and variability of the weights at various stages. If the variability of the adjusted weights is high, the weights will be trimmed in a way that preserves the weight sum within each adjustment cell.


Sampling weights are the products of the reciprocals of the probabilities of selection associated with two sampling stages: 1) the selection of households from the ABS frame, and 2) the selection of respondents within a household.


All households will have an equal probability of selection from the sampling frame, as no over-sampling will be used in allocating sample across NHTSA Regions, nor will any other disproportionate sampling of households be employed. Nor will the MVOSS engage in any disproportionate sampling of household members. However, the probability of selection of a household member will differ according to the number of people in the household eligible to participate in the survey; that probability is 1/m, where m is the number of eligible persons in the household. Oversampling at the household or within-household level may be revisited if the ABS proportionate sampling approach during the pilot test results in under-representation of key population segments.
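Under these assumptions, a case's base weight is simply the inverse of its household selection probability multiplied by m. A minimal illustration, with hypothetical frame and sample sizes used only for the example:

```python
def base_weight(p_household, m_eligible):
    """Product of reciprocals of the two selection probabilities:
    household from the frame, respondent within household (1/m)."""
    return (1.0 / p_household) * m_eligible

# Hypothetical figures for illustration only: an equal-probability draw of
# 6,000 households from a 120-million-address frame, with 3 eligible members.
w = base_weight(p_household=6000 / 120_000_000, m_eligible=3)  # 60,000.0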


Weighting class adjustments will be applied that are designed to minimize non-response bias potential. In general, weight adjustments are applied so that the sum of the adjusted weights over respondents is equal to the sum of the unadjusted weights over respondents and non-respondents. These adjustments will be informed by the non-response bias analysis. Specifically, the variables used to define weight adjustment classes (or cells) will be selected using the propensity models in that analysis. These variables will be those predictors most significant in the models. In general, adjustment classes will be homogeneous in terms of response behavior.


Non-response adjustments may take the form of a single-stage adjustment, or a two-stage ratio adjustment if a two-stage selection process is adopted (as determined from the pilot test). In the first stage, weights for respondents will be inflated to account for total non-respondents in each adjustment class. In the possible second stage, the adjusted weights for respondents (i.e., those inflated with the first ratio adjustment) will be inflated using the data available for households that complete the rostering form. These data will include household size as well as sex and age of each household member. In other words, weight adjustment classes for the second adjustment will be defined in terms of household size, sex, and age.
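The first-stage class adjustment can be sketched as follows; the adjustment cells, weights, and case records here are toy placeholders rather than the production weighting system:

```python
from collections import defaultdict

def nonresponse_adjust(cases, cell_key):
    """Inflate respondent weights within each adjustment class so their sum
    equals the class's weight total over respondents and non-respondents."""
    total, resp_total = defaultdict(float), defaultdict(float)
    for c in cases:
        total[cell_key(c)] += c["weight"]
        if c["respondent"]:
            resp_total[cell_key(c)] += c["weight"]
    for c in cases:
        if c["respondent"]:
            c["weight"] *= total[cell_key(c)] / resp_total[cell_key(c)]
    return [c for c in cases if c["respondent"]]

# Toy cell of four equal-weight cases, two responding: each respondent's
# weight is inflated from 1.0 to 2.0, preserving the cell's weight sum of 4.
cases = [{"weight": 1.0, "respondent": r, "cell": "A"}
         for r in (True, False, True, False)]
respondents = nonresponse_adjust(cases, cell_key=lambda c: c["cell"])
```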


Post-stratification adjustments capitalize on known population totals for key demographics such as sex, age categories, and race/ethnicity. In essence, these adjustments make the final weights sum to the known population control totals along these dimensions. For MVOSS, post-strata will be defined within the same Regions used in the design stratification (Table 1). Using Census data, population control totals for post-stratum cells will be computed that are defined by Region, sex, age category, and race/ethnicity.


A determination will also be made as to whether trimming is necessary to limit the variability in the weights, a decision based on the coefficient of variation (CV) of the weights overall and within key analytic domains. While reducing variances, trimming methods have the potential to induce some bias. The approach to be used will minimize this bias potential in two ways. First, trimming will be conducted within post-stratum cells to preserve weight sums within poststrata. Second, this survey will adopt calibrated trimming approaches using the interquartile range (IQR) of the weight distribution so that moderate trimming is applied only if, and where, necessary.
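A simplified sketch of IQR-based trimming with weight-sum preservation is shown below; a production routine would apply this within post-stratum cells and iterate until no rescaled weight exceeds the cap:

```python
def trim_weights(weights, k=3.0):
    """Cap weights above Q3 + k*IQR, then rescale all weights so the cell's
    weight sum is preserved. One-pass sketch; rescaling can push a capped
    weight slightly above the cap, so production code iterates."""
    ws = sorted(weights)
    q1, q3 = ws[len(ws) // 4], ws[(3 * len(ws)) // 4]
    cap = q3 + k * (q3 - q1)
    capped = [min(w, cap) for w in weights]
    scale = sum(weights) / sum(capped)
    return [w * scale for w in capped]
```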


d. Non-response bias analysis


The analysis of non-response bias for the MVOSS will follow three tracks.


1) Non-response analysis—bivariate and multivariate analyses. First, the analysis will compare the distribution of survey respondents with known population distributions. This comparison will focus on key demographic variables such as race/ethnicity, sex, age group, and education. Because many of these same factors will be used during post-stratification in the survey weighting process, the analysis will consider un-weighted data, data weighted prior to the post-stratification step, and data weighted with the final adjusted weights. Note that these analyses will capitalize on the augmented frame data as well as on Census data.


The demographic variables found to be significant in these bivariate analyses (or sub-group analyses) will then be included in multivariate logistic models. In these logistic models, usually called propensity models, the dependent variable is a dichotomous (0-1) indicator for response so the logistic model may be expressed in terms of the probability of a response. The variables that turn out to be significant in these propensity models will be considered for weight adjustments for non-response (i.e., will be candidates for defining weight adjustment classes). This approach will ensure that weight adjustments minimize the potential for non-response bias.
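As an illustration of the propensity-model step, the sketch below fits a logistic response model with scikit-learn on placeholder data; the actual analysis would use the real frame covariates and conventional significance testing rather than this toy setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # placeholder frame/Census covariates
y = rng.binomial(1, 0.4, size=1000)   # placeholder 0/1 response indicator

model = LogisticRegression().fit(X, y)
propensity = model.predict_proba(X)[:, 1]   # estimated P(response | X)
# Covariates that prove strongly predictive would be candidates for defining
# the non-response weight adjustment classes described above.
```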


2) Comparisons using rostering data. These comparisons will be possible if the two-stage within household method of selection is used, capitalizing on data available for households that complete the rostering form but do not complete the survey. This part of the analysis will use the data available from the rostering form, including household size as well as sex and age of each household member. In essence, we will compare two subsets of rostering respondents: a) those who complete the survey, and b) those who do not complete the survey. This analysis will also inform the weight adjustments for non-response.


3) Comparisons across waves of respondents. The third set of analyses will compare responses obtained using different levels of effort. This approach typically compares early respondents to the initial survey waves (Waves 1 to 3) to respondents to the later waves (Waves 4 and 5). The idea is that the late respondents—a group of reluctant or perhaps recalcitrant respondents—resemble non-respondents statistically.


B.3. Describe methods to maximize response rates and to deal with issues of non-response.


NHTSA is taking a number of steps to boost the MVOSS response rate. Foremost will be NHTSA’s use of the multi-mode approach, where different options for responding are presented to prospective respondents (Web; mail; telephone). This offers greater opportunity for people to use a response mode that they prefer and with which they are comfortable, which should enhance participation.


In contacting respondents, NHTSA will use official government envelopes for the mailings. People will often open government envelopes out of curiosity as to why they are being contacted by the government. As stated in the previous section, the invitation to participate in the survey will include wording in Spanish for those who are entirely or predominantly Spanish speaking so that they are not excluded from the survey. The invitation will also include a QR code for quick access to the Web version of the questionnaire.


In adapting the questionnaires to multi-mode administration, the project team will lay out the questions according to the visual-design heuristics that people follow in interpreting visual cues. The Questionnaire Design Specialist who will be working on this survey has an extensive background in visual and interface design and is expert at using visual heuristics to create usable, reliable data collection instruments. She will ensure that the MVOSS questionnaires provide a pleasant user experience and result in the collection of reliable data across the Web and mail modes.


Another facilitator of response will be adaptation of the Web-based questionnaires for mobile platforms (e.g., smartphones, tablets) so that prospective respondents who wish to use such devices when taking the survey are not deterred. Once a questionnaire is programmed, the platform will automatically adapt the presentation to optimize completion on a mobile device.


The survey will include a number of assistance devices for respondents so that they do not become frustrated and terminate their participation prior to submission of a completed questionnaire. For the Web response mode, this will include help screens (e.g., the respondent can click to get a definition), easy navigation from page to page, and the capability for respondents to pause, leave the system, and then re-enter at the departure point without losing any previously entered information. For all response modes, respondents will be provided clear methods by which they can contact the Contractor if they have questions about the survey.


As described in the next section, NHTSA is including an experiment within its pilot test to assess whether a change in the contact protocol would increase the response rate. Different within household methods of selection will also be examined for, among other things, relative impact on response rates.


B.4. Describe any tests of procedures or methods to be undertaken.


The survey will be carried out in two Phases, with the first Phase being the development and finalization of the survey methods. The second Phase will be the full administration of the survey. During Phase 1 development, the project team will develop the Website for on-line survey administration, adapt the questionnaires for multi-mode administration, and carry out an extensive Pilot Test. Tests associated with these Tasks will be as follows:


a) Web Site


Website development will include Alpha and Beta tests. In general, the Alpha test of a hardware or software system is an internal test to confirm the product’s correct functionality. The survey project team will conduct an Alpha test of the Web survey system, front-end, back-end, external links for respondents, and remote management control. The Alpha test will occur following the software’s initial installation on servers and will use a copy of the approved questionnaire translated into a Web-based form.


Test results and feedback derived from the Alpha design will be incorporated into an upgraded version of the Alpha site, which is the Beta design. Once it is uploaded, the survey project team will run the Beta site and survey through several quality control (QC) tests. The QC process examines different issues over the course of three stages, and during each stage the project team will engage at least two reviewers to inspect the survey. In the first QC stage, project staff will review and enumerate typographical errors in the Web survey questionnaire. During the second stage, the team will review the survey's overall look and feel and its workflow. Finally, the team takes the data entered during the second stage, records the answers separately, and compares the responses to the data downloaded from the back end of the data storage server. The team provides any errors found in these processes to the survey programmers for correction; the programmers then re-check the program to verify that all issues have been completely addressed.


b) Usability Testing


Usability testing will be conducted on both MVOSS questionnaires in each of the three response modes: Web, paper, and telephone. The Website testing will include testing with mobile devices. Each distinct user experience will be tested with 15 participants. The allocation of usability test participants across user conditions is listed in Table 4. Research by the Nielsen Norman Group indicates that the vast majority of usability problems are identified within the first 5 usability tests of any one interface, and that it takes about 15 participants to find nearly all of the problems in an interface.


Table 4: Distribution of Usability Test Participants

                                             | Version A Questionnaire | Version B Questionnaire
Web questionnaire: desktop/laptop experience | 7                       | 8
Web questionnaire: mobile device experience  | 7                       | 8
Telephone questionnaire                      | 7                       | 8
Paper questionnaire                          | 7                       | 8


The usability testing of the paper and Web modes will be conducted in a DC-area facility. Participants will be recruited who do not routinely participate in usability tests, in order to avoid any learning-related effects. For the Web questionnaire testing, the participant will be given a copy of the invitation letter and asked to follow the instructions it contains as if s/he were at home, starting with going to the Website and signing in as instructed in the letter. S/he will then be asked to complete specific survey portions, thinking aloud as s/he goes. The test facilitator will note errors and also watch for hesitation, confusion, or frustration. Web-based questionnaire testing will include both personal computers and mobile devices. Though the technologies for testing Web and mobile user experiences differ, both will include video-recording of the participant's activity on the screen, along with audio-recording of the conversation between the test facilitator and the participant. This testing will identify:


  • Problems with following the instructions contained in the invitation letter, using the PIN, or signing in to the survey.

  • Problems with navigating screens, sections, and questions.

  • Confusion related to where and when responses are saved, and whether it is possible to return to the survey at a later time.

  • Interface elements (e.g., icons, menus, buttons, forms, messages, warnings, alerts) that participants do not notice, do not understand, or that do not behave as participants expect.


For the paper questionnaire, the participant and test facilitator will sit together at a table, with a video camera trained on the participant’s hands as s/he completes the survey. The participant will be asked to think aloud as s/he works, and the test facilitator will observe and probe for clarification on the participant’s experience as necessary. This type of testing uncovers participant issues such as:


  • Not being able to mark answers accurately in the correct location on the form or answers not fitting in the space provided.

  • Missing or misunderstanding instructions—for example, choosing multiple responses in a case where only one response is allowed.

  • Difficulty following skip patterns, or mistakenly answering questions that do not pertain to him or her.


Usability testing of the telephone questionnaire will entail a trained interviewer administering the survey by telephone, with the usability specialist listening on the line. The test protocol may be written so that the usability specialist listens and notes areas of difficulty, or it may allow the usability specialist to probe for clarification when the participant experiences difficulty with the survey. This testing will identify questions that:


  • May be ambiguous or confusing, as evidenced by participants consistently asking to have them repeated or stating they do not understand or know how to answer.

  • Seem to invite interruption by respondents.

  • Have awkward skip patterns or places where the survey does not seem to flow naturally, or any other problems with the interview script.

  • Seem more difficult to answer by telephone as compared to the self-administered survey modes—such as those that would benefit from a visual aid or a more detailed verbal explanation.


c) Pilot Test


The Phase 1 development will conclude with an extensive Pilot Test that will field test the methods for all three response modes in order to assess the programs, processes, and procedures. The Pilot Test sample will be allocated to the following three methods:


    1. A Web-only approach that will test the anticipated first wave for the main study — an advance letter on NHTSA letterhead with an invitation to conduct a Web survey, containing a URL and unique PIN allowing respondents to access and take the Web survey. The Contractor will print a QR Code on each letter, allowing respondents to link directly to the website.

    2. A Web or mail response testing the anticipated second wave of the main study—sampled households will receive a package containing a letter offering choice of Web or mail response, as well as a paper questionnaire, other exhibits (e.g., pictures of child restraints for the Version 2 questionnaire), and a postage-paid return envelope.

    3. A telephone response testing the follow-up method for non-respondents to the Web and mail waves of the main survey—ABS records will be matched to telephone numbers using a look-up process involving commercial databases. The telephone survey methods will be tested on those households with matching telephone numbers.


Since the objective of the Pilot Test is to assess the functioning of, and reactions to, the different response modes, the Pilot Test will not run the full sequence of contact waves described in B.2 but rather will test the response options concurrently. Originally the plan was to have a single contact wave, with one group receiving the Web-only contact, one group receiving the Web + mail contact, and one group contacted by telephone. However, there is suggestive evidence from research that a Web-only approach for contact wave 1 followed by a Web + mail approach for contact wave 2, as listed in B.2, may produce a cumulative net response rate lower than what a Web + mail approach in both contact waves would generate. Therefore, the Pilot Test will incorporate an experiment that will require two contact waves for the Web-only and Web + mail response modes in order to compare response rates for the different sequence conditions, as shown in Table 5.


Table 5: Experimental Conditions Testing Approaches to Initial Contacts With Prospective Respondents

       | Condition 1                                 | Condition 2
Wave 1 | Web-only invite                             | Web invite + mail package
Wave 2 | Web invite + mail package to non-responders | Web invite + mail package to non-responders


In addition, the Pilot Test will include an experiment that will compare two different methods for within household selection of respondents. In order to be consistent with previous MVOSSs, the survey must implement methods to select respondents within households to produce national population estimates of individuals age 16 and older, rather than national household estimates. There are two approaches for the within-household selection process: a respondent-driven approach in which households are given instructions as to how to select the respondent from among multiple eligible household members, and a two-stage selection process in which the researcher makes the selection by first obtaining a roster of household members and then randomly selecting a respondent who is then reached by re-contacting the household. These approaches will be tested, with the Pilot Test examining how the method combinations perform not only in terms of response rates but also in terms of fidelity to the randomization implicitly or explicitly involved in the respondent selection procedures (i.e., how faithfully the household members followed the recommended procedures). The random selection is an aspect of probability sampling that ensures that the approach is not subject to selection biases that may be substantial. This fidelity will be assessed through information obtained in the rostering process. The Pilot Test will also allow a comparison of the demographic distributions of the respondent samples (in particular, the age/sex distributions) under the different experimental conditions.

Table 6 presents the test conditions for the household member selection experiment. The respondent-driven approach will ask households to select from among eligible household members the member who had the last birthday. The two-stage approach will ask households to provide an enumeration of its members.



Table 6: Experimental Conditions Testing Household Member Selection

Condition   | Contact approach | Selection method
Condition 1 | Web-only         | Last birthday
Condition 2 | Web-only         | Household enumeration
Condition 3 | Web + mail       | Last birthday
Condition 4 | Web + mail       | Household enumeration


The drawn sample for the Pilot Test will be 1,000 addresses for each of the three response modes: Web-only, Web + mail, and telephone. For the Web conditions, a half-sample (500) will be randomly assigned to each of the two experimental conditions considered for respondent selection, a two-stage enumeration versus a one-stage birthday method. Statistical tests for differences between group proportions (or percentages) in two independent samples achieve 80% power with sample sizes of 500 per group for proportions of 0.49 and 0.40. The expected response rate for MVOSS is 40%. With the proposed Pilot Test sample sizes, comparisons between the groups defined by selection method crossed by mode will be made with sufficient statistical power.
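That power claim can be verified with a standard two-sample z-test approximation. The following sketch (using scipy, with the 0.40 versus 0.49 proportions and 500 cases per group from the text) returns approximately 0.82:

```python
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test comparing
    proportions p1 and p2 with n cases per group."""
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = abs(p1 - p2) / se - norm.ppf(1 - alpha / 2)
    return norm.cdf(z)

print(round(power_two_proportions(0.40, 0.49, 500), 2))  # ~0.82
```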


The drawn sample will include drop points and educational addresses. For each of those addresses, an attempt will be made to identify the dwelling units from information external to the DSF. For those addresses where the dwelling units can be identified, one unit will be randomly selected. The selection of an individual within the selected unit would then be dictated by the experimental condition in which the address has been placed (last birthday versus enumeration).


For each mode of the Pilot Test, the same procedures will be followed for offering a Spanish-language questionnaire version (mail or Web modes) or Spanish-language interviewer (telephone mode) that are planned for the full administration of the survey.


During the Pilot Test, paradata will be collected related to conducting the survey. For the Web survey component, this process information will include the amount of time spent on the Website and on individual Web pages by respondents, use of definition and other assistance tools, breakoffs from the Website by respondents, and item non-response. The data will be used to determine how well respondents are able to progress through the interview, and to identify any problem spots to address through revisions and/or insertion of more assistance tools.


B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


The following individuals have reviewed technical and statistical aspects of procedures that will be used to conduct the Motor Vehicle Occupant Safety Survey:


Alan Block, MA

Office of Research and Technology

DOT/National Highway Traffic Safety Administration

400 Seventh Street, SW

Washington, DC 20590

(202) 366-6401


John Boyle, Ph.D.

Senior Vice President, Line of Business Lead, Survey Research

ICF International

11785 Beltsville Drive, Suite 300

Calverton, MD 20705

(301) 572-0808



Ronaldo Iachan

Technical Director

ICF International

530 Gaither Road, Suite 500

Rockville, MD 20850

(301) 572-0538


Michael Battaglia

Battaglia Consulting Group, LLC

15 Mohawk Road

Arlington, MA 02474

(781) 643-7078

1 Iannacchione, V.G., Staab, J.M., and Redden, D.T. (2003) Evaluating the Use of Residential Mailing Addresses in a Metropolitan Household Survey. Public Opinion Quarterly, 67 (2): 202-210.

2 Link, M.W., Battaglia, M.P., Frankel, M.R., Osborn, L., and Mokdad, A.H. (2006) Mode and Address Frame Alternatives to RDD. 2006 Proceedings of the American Statistical Association, Survey Methodology Section (CD-ROM), Alexandria, VA: 4156-4163.


