
Supporting Statement for Information Collection Request

Revision to OMB Control No. 2127-0626 (Current Expiration Date: 8/31/18)


Tire Pressure Monitoring System – Outage Rates and Repair Costs (TPMS-ORRC)

Revised Field Survey


Part B











National Highway Traffic Safety Administration

























SUPPORTING STATEMENT



Tire Pressure Monitoring System – Outage Rates and Repair Costs (TPMS-ORRC)

Supporting Statement for Information Collection Request (ICR)


A. Justification: Please see separate Part A section.


B. Collections of Information Employing Statistical Methods


B.1. Describe the potential respondent universe and any sampling or other respondent selection method to be used.


Field Survey


The universe of interest for the field survey is passenger vehicles (passenger cars, sport utility vehicles (SUVs), light trucks and vans) of gross vehicle weight rating (GVWR) no more than 10,000 pounds, registered in the United States, that are equipped with FMVSS-compliant tire pressure monitoring systems (TPMSs) and malfunction indicators for the model years (MYs) 2006 and newer, and their drivers.  In MY 2006, FMVSS 138 required 20 percent compliance; in MY 2007, 70 percent; and in MY 2008 and later, 100 percent. FMVSS 138 also required a standardized TPMS malfunction indicator as of MY 2008. Responsible drivers of these vehicles will be the potential interview respondents. The size of this vehicle population for 2016 is approximately 136,000,000 based on the following approximate national registration numbers1 that have been scaled to TPMS phase-in requirements:


MY 2006 (20% TPMS phase-in) 2,800,000

MY 2007 (70% TPMS phase-in) 9,800,000

MY 2008-2016 (100% TPMS) 123,600,000

Approximate Total 136,000,000
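
As a minimal worked example of the phase-in scaling (the roughly 14 million registered vehicles per model year for MY 2006 and MY 2007 is an implied figure, not a separately published count):

```python
# Hypothetical worked example: scaling per-model-year registrations to TPMS phase-in rates.
# The ~14 million per-MY registration counts are implied by the figures above.
registrations = {"MY2006": 14_000_000, "MY2007": 14_000_000}
phase_in = {"MY2006": 0.20, "MY2007": 0.70}

tpms_equipped = {my: registrations[my] * phase_in[my] for my in registrations}
print(tpms_equipped)  # {'MY2006': 2800000.0, 'MY2007': 9800000.0}

total = sum(tpms_equipped.values()) + 123_600_000  # MY 2008-2016, 100% equipped
print(f"Approximate total: {total:,.0f}")  # 136,200,000, i.e., roughly 136,000,000
```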


We will not survey vehicles of MYs 2017-2018 because, in 2018, these vehicles are very new and still under warranty, so we would not expect to learn much about malfunction and aging from them. In the previously approved ICR (OMB Control No. 2127-0626), now being modified by this revised request, the field survey was a purposively selected convenience sample with no statistical inference planned. In the revision, a probability sample will be used to allow nationally representative estimates, so that the field survey may be used in future rulemaking required by the FAST Act. The field survey intercepts selected vehicles and their drivers at selected fuel stations located in selected ZIP codes of selected Primary Sampling Units (PSUs), as follows.


The first stage of sample selection will be a probability sample of 24 PSUs previously selected from a national frame for NHTSA’s Crash Investigation Sampling System (CISS). A CISS PSU is a geographic area defined by a county or group of counties. The CISS sample design allows a scalable sample size of 16 to 73 PSUs, where the PSU sample is designed for national (not State) representativeness. The measure of size for the CISS PSUs was a composite involving crash counts, registration proportions, and populations, and favored newer vehicles, which is also advantageous to this survey. The CISS PSUs were formed as groups of adjacent counties with end-to-end distance no more than 65 miles for urban areas and 130 miles for rural areas, and are stratified by the four U.S. census regions (Northeast, Midwest, South, and West) and by rural or urban area, resulting in 4 × 2 = 8 strata. The selected 24-PSU sample was drawn with probabilities proportional to size, with two rural and four urban PSUs from each region. The CISS sample design was approved in OMB 2127-0706. The primary sample size of 24 PSUs for the TPMS-ORRC field survey was chosen as the largest affordable PSU sample size, but is also advantageous because it gives the same number of PSUs as in NHTSA’s previous TPMS survey, the TPMS Special Study (although the PSUs in the earlier study were different, drawn from the previous NHTSA Crashworthiness Data System (CDS)). We will use the already-calculated CISS PSU sampling probabilities as the first-stage selection probabilities. Operationally, the 24-PSU CISS sample is advantageous because NHTSA already has a current presence in these PSUs through CISS. The PSU sites and their 2015 estimated populations are shown in Table B-1.


Table B-1: Sample Primary Sampling Units (PSUs)

STATE | COUNTY | STRATUM | Pop. (2015 est.)
AL | ETOWAH | SOUTH RURAL | 103,057
AZ | MARICOPA | WEST URBAN | 4,167,947
CA | BUTTE | WEST RURAL | 225,411
CA | MONTEREY | WEST URBAN | 433,898
CA | SACRAMENTO | WEST URBAN | 1,501,335
CA | SAN BERNARDINO | WEST URBAN | 2,128,133
ID, WA | ID: LATAH, NEZ PERCE; WA: ASOTIN, WHITMAN | WEST RURAL | 149,108
IL | GALLATIN, HARDIN, WHITE | MIDWEST RURAL | 23,727
IL | HENRY, ROCK ISLAND | MIDWEST URBAN | 453,476
MA | BERKSHIRE | NORTHEAST RURAL | 127,828
MA, RI | MA: BRISTOL; RI: NEWPORT | NORTHEAST URBAN | 639,195
ME | CUMBERLAND | NORTHEAST URBAN | 289,977
NJ | ATLANTIC | NORTHEAST URBAN | 274,219
NY | NASSAU | NORTHEAST URBAN | 573,587
OH | DELAWARE, MORROW | MIDWEST URBAN | 228,087
OH | HAMILTON | MIDWEST URBAN | 807,598
OH | MONTGOMERY, PREBLE | MIDWEST URBAN | 573,587
OK | CARTER | SOUTH RURAL | 27,158
PA | CAMERON, POTTER, TIOGA | NORTHEAST RURAL | 63,702
TX | COMAL | SOUTH URBAN | 129,048
TX | DALLAS | SOUTH URBAN | 2,553,385
TX | TARRANT | SOUTH URBAN | 1,982,498
VA | CHESTERFIELD, HOPEWELL CITY | SOUTH URBAN | 358,065
WI | CHIPPEWA, EAU CLAIRE | MIDWEST RURAL | 165,636


The revised field survey has an overall target sample size of 6,300 vehicles for analysis. This is similar to the previous TPMS Special Study, in which 6,103 vehicles drawn from 24 PSUs were used in the main analysis (Sivinski, 2012)2. In the supporting statement for OMB 2127-0706 (the CISS sample design), standard errors for seven key estimates under the previous 24-PSU CDS design were used as constraints in an optimization model for a 24-PSU CISS design, to ensure that the corresponding degree of accuracy under CISS will be at least as good as under CDS. Details on sample sizes and expected design effects are provided in “Expected Sample Sizes and Precision.”


The second stage of selection will be ZIP Codes within the PSU. To get better coverage of socio-economic status, which may influence some questions in the survey as well as the overall prevalence of repair of TPMS malfunctions, we will catalog population sizes and median incomes for the ZIP Codes within each PSU. We desire one ZIP with median income at or below the median of all ZIP median incomes in a PSU, and one with median income above it. (The income groups are used to improve coverage, but since we have only one ZIP of each type per PSU and the overall median is likely to vary by PSU, the income divisions will not be treated as strata or analytical domains.) Operationally, we also prefer that the two ZIPs be relatively close to each other in a PSU of large geographic area, as we will have a traveling supervisor hiring temporary collectors who preferably can feasibly work both ZIPs. Therefore, we plan to use a restricted probability proportional to size (PPS) selection in these very large PSUs. We will select the first ZIP with probability proportional to its population; chance will determine whether it is from the at-or-below-median-income group or the above-median-income group. The eligible second ZIPs are then those from the opposite income group that lie within a feasible distance threshold of the first sampled ZIP; we will select the second ZIP with equal probabilities of 1/m, where “m” is the number of ZIPs eligible for selection at that stage. With this approach, every ZIP has a chance of being selected, although the second selection is conditional on the first; and we end up in two randomly selected ZIPs, one with median income above the median of the ZIPs in the PSU, and one with median income at or below it.3 For ZIP-to-ZIP distance, we will use a 20-mile great-circle distance threshold, based on the 2016 ZIP Code Tabulation Area (ZCTA) Distance Database published by the National Bureau of Economic Research. Based on the PSU map, we anticipate that about one-third of the PSUs will likely need no restriction, about one-third may or may not need the restriction depending on the location of the first sampled ZIP, and about one-third will likely need the restriction.
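
The two-ZIP selection within a PSU could be sketched as follows; the ZIP record structure, field names, and distance lookup are illustrative assumptions rather than the production design:

```python
import random

def select_two_zips(zips, distance_miles, max_distance=20.0):
    """Sketch of the restricted PPS ZIP-pair selection within one PSU.

    zips: list of dicts with keys 'zip', 'population', 'median_income'.
    distance_miles: function (zip_a, zip_b) -> great-circle distance in miles
    (e.g., looked up from the NBER ZCTA distance database)."""
    incomes = sorted(z["median_income"] for z in zips)
    overall_median = incomes[len(incomes) // 2]

    # Stage 1: PPS draw by population; chance determines the income group.
    weights = [z["population"] for z in zips]
    first = random.choices(zips, weights=weights, k=1)[0]
    first_is_low = first["median_income"] <= overall_median

    # Stage 2: equal-probability draw among ZIPs of the opposite income group
    # lying within the distance threshold of the first selection.
    eligible = [
        z for z in zips
        if z is not first
        and (z["median_income"] <= overall_median) != first_is_low
        and distance_miles(first["zip"], z["zip"]) <= max_distance
    ]
    second = random.choice(eligible)  # probability 1/m given the first draw

    p_first = first["population"] / sum(weights)
    p_second_given_first = 1.0 / len(eligible)
    return first, second, p_first, p_second_given_first
```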


The third stage of selection will be fuel stations within the ZIPs. We desire two stations within each selected ZIP. For station selection, the survey contractor will list the general passenger-vehicle fuel stations in the selected ZIP Codes4, and in a selected ZIP, listed stations will be sorted by a random number generated for each station. The first sampled station will be the first station in the random sort that fulfills the following sampling and operational viability requirements:


  • station traffic flow enables vehicle sampling requirements (sufficient flow of vehicles MY 2006 or newer) and periodic flow counts;

  • adequate visibility and space exist to safely conduct the interviews and observations;

  • station clientele has reasonable local representation;

  • permission can be obtained from the site’s proprietor or manager to conduct the survey; and

  • there exists at least one other viable station within a 15-minute driving distance.


If the first station in the random sort cannot fulfill the viability requirements, we will move to the second, and so on. The general viability requirements build on reasoning used in the 2010 study (Sivinski, 2012). The last requirement sets up the selection of the second station in the ZIP, allowing the data collection supervisor to travel between the two selected stations and monitor activity throughout the day. To choose the second station in the ZIP, stations within 15 driving minutes of the first station will be listed, and again the first viable station in a random sort will be taken into the sample. The probability of selection for each sample station in a given ZIP area will be approximated by 2/m, where “m” is the number of stations originally listed in the ZIP area. If teams go to more than one viable station in a sort due to cooperation issues or other complications, station probabilities will be calculated accordingly. We anticipate that teams will spend two or three days in one ZIP, as needed, to collect approximately half the PSU quota, then switch to the other ZIP.
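
A sketch of the station-selection logic described above; the viability check and 15-minute drive-time test stand in for the field criteria and are illustrative assumptions:

```python
import random

def select_stations(stations, is_viable, within_15_minutes):
    """Sketch of selecting two stations in a ZIP via random sort.

    stations: all listed passenger-vehicle fuel stations in the ZIP.
    is_viable: function applying the viability requirements listed above.
    within_15_minutes: function (station_a, station_b) -> True if the stations
    are within a 15-minute drive of each other."""
    m = len(stations)

    # First station: first viable station in a random sort of the full list.
    shuffled = random.sample(stations, k=m)
    first = next(s for s in shuffled if is_viable(s))

    # Second station: first viable station in a random sort of stations
    # within 15 driving minutes of the first.
    nearby = [s for s in stations if s is not first and within_15_minutes(first, s)]
    second = next(s for s in random.sample(nearby, k=len(nearby)) if is_viable(s))

    p_station = 2.0 / m  # approximate per-station selection probability for weighting
    return first, second, p_station
```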


The fourth stage of selection will be vehicles at the fuel station. In a survey such as this one, it may be necessary to get as many cooperative respondents as possible at a station. However, to avoid selection bias by data collectors, a particular pump island will be selected as the focal sampling area. Teams will approach every driver with in-state plates who may be eligible and can be safely approached at that island. Exceptions to approaching every driver include safety issues, surplus vehicle volume (a vehicle departing while the team is busy surveying another vehicle), out-of-state vehicles, and vehicles outside the scope of the study (passenger vehicles that are obviously older than MY 2006). Beyond that island, we will use the following hierarchy of approach:


  1. Focal island – first in hierarchy when (2) and (3) not in effect

  2. Indirect TPMS vehicle off the island (based on known makes and models such as VW, Audi, newer Honda) – as many as seen, since indirect systems are uncommon but needed for the survey.

  3. Diesel pump – if one is available, approach when used, since diesel vehicles are rarer in our population but may have indirect TPMS.

  4. If focal island empty, other islands in pre-determined order.

  5. If all islands empty, convenience store customers and inspection-line if available.


Using this hierarchy, the probability of selection will be considered pseudo-random, except for the indirect systems, for which the probability will be close to 100 percent. For the probability denominator, teams will stop collection for 15 minutes every hour and log counts of all passenger vehicles by vehicle type (car, SUV, light truck, van) at the station (islands, pumps, store, service). The counts will be used to estimate the station’s daily sampling-time flow-through as [sum of counts] × 3, giving the estimated total for the six hours of active collection (eight working hours minus eight 15-minute counting periods; counting time is excluded because vehicles have no probability of selection during those periods). The vehicle probability of selection will be computed as the number of surveyed vehicles divided by the daily estimate; a similar method was used successfully in the earlier study (Sivinski, 2012). Data collection teams will approach vehicles as much as possible in the order in which they enter the focal hierarchy; this will be considered random and is expected to allow for a vehicle mix reflective of the population. However, if need becomes apparent, vehicle selections may be purposively rebalanced to ensure that a minimum number of vehicles are surveyed for each vehicle type and model year group (2006 and later); our initial goal over all PSUs is at least 300 for each cell of vehicle type (passenger car vs. SUV/light truck/van) by model year group (2006-2008, 2009-2011, 2012-2014, 2015-2016). If vehicles flow in at proportions similar to the national population (see “Expected Sample Sizes and Precision”), the minima should be met without any purposive rebalancing. As in the earlier study, working hours will be daytime (8 AM to 5 PM or 9 AM to 6 PM), under the assumption that the topics of our study are well represented in those hours and would not vary with fuel purchases made outside those hours (Sivinski, 2012).
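
The flow-count calculation amounts to the following; the tallies and vehicle count in the example are hypothetical:

```python
def vehicle_selection_probability(quarter_hour_counts, vehicles_surveyed):
    """Sketch of the daily vehicle selection probability.

    quarter_hour_counts: the eight hourly 15-minute vehicle tallies for the day.
    vehicles_surveyed: number of vehicles actually surveyed that day."""
    # The eight 15-minute tallies cover 2 hours; multiplying by 3 estimates flow
    # over the 6 hours of active collection (8 working hours minus counting time).
    estimated_daily_flow = sum(quarter_hour_counts) * 3
    return vehicles_surveyed / estimated_daily_flow

# Hypothetical example: eight tallies summing to 100 vehicles, 60 vehicles surveyed.
p = vehicle_selection_probability([12, 15, 10, 14, 11, 13, 9, 16], 60)
print(round(p, 3))  # 60 / 300 = 0.2
```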


A fifth stage of selection, leading only certain subgroups of respondents to extended interviews, will be conducted at the vehicle subgroup level.


Subgroup 1: If a vehicle has a TPMS malfunction (expected to be 5 to 10 percent of vehicles with direct TPMS, and very rare in indirect TPMS), we will sample with certainty for an extended interview. In the unlikely event that we are collecting more Subgroup 1 extended interviews than our budget can support, we will switch to systematic sampling in later PSUs. For example, our upper bound estimate in budgeting for Subgroup 1 was 630 (see “Expected Sample Sizes and Precision”, below).  We could likely absorb a small number of extras, such as 10 percent, but if we are going beyond that rate in early PSUs and our budget cannot sustain the amount, we would institute a subsampling interval in later PSUs to bring us back to about 600-700 total malfunction extended interviews.  One example would be that if the first 12 PSUs have produced about 400 malfunctions, we may seek about 300 in the second 12 PSUs by administering the extended interview in only ¾ of the malfunctions, to be carried out by omitting it in every fourth malfunction case (with a random start).  If in that situation we decided we could only handle 200, we would program the extended interview for every second case; in general, we would apply the rate needed to keep us within our quota and budget.  The instrument tablets will be programmed to allow changes in sampling intervals to be pushed from a central source.  Our quantitative estimates of malfunction by vehicle type and age will not be affected by this subsampling, since only qualitative questions are involved in the extended interviews.
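
The systematic subsampling contingency could be implemented along these lines (a sketch; the 3-in-4 rate mirrors the example in the text, and the random start keeps the omission pattern unpredictable):

```python
import random

def extended_interview_flags(n_cases, omit_every):
    """Sketch of systematic subsampling of malfunction cases for the extended interview.

    omit_every=4 keeps 3 of every 4 cases (skip every fourth case, random start);
    omit_every=2 keeps 1 of every 2."""
    start = random.randrange(omit_every)
    return [(i - start) % omit_every != 0 for i in range(n_cases)]

flags = extended_interview_flags(n_cases=20, omit_every=4)
print(sum(flags), "of", len(flags), "cases receive the extended interview")  # 15 of 20
```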


Subgroup 2: For those vehicles with a functioning direct TPMS, expected to be the bulk of cases, most (Subgroup 2a) will receive the base inspection and very brief interview. We also desire 350 extended interviews (Subgroup 2b), preferably collected at different times of day to mitigate potential for time-of-day bias. We will program the survey instrument to distribute the selection for, and collection of, extended interviews across two day parts: 1) AM; and 2) PM. Approximately four days of data collection in a PSU results in allocating approximately four Subgroup 2 extended interviews per day, or two per station per day, to achieve the quota. Subgroup 2 extended interviews will be considered simple random samples for analytical purposes; the questions are qualitative and do not contribute to our quantitative analysis.


Subgroup 3: For those vehicles with a functioning indirect TPMS (expected to be about 5 to 6 percent), we will sample with certainty for a brief extended interview and tire pressure measurement. In the unlikely event that we find we are acquiring more Subgroup 3 extended interviews than our budget can support (we planned for 320, as detailed in “Expected Sample Sizes and Precision,” below), we would switch to systematic sampling in later PSUs in the manner described for Subgroup 1.


Since a limited number of available traveling supervisors (planned as up to five) will be hiring temporary data collectors in each PSU, the PSUs will not all be surveyed in one compressed time period; up to five PSUs will be in operation in any given week, with a planned overall survey period of about 14 to 20 weeks. We will consider all survey observations as equally representative of the overall survey period.


Survey Weights


The base weight (BW), or design sampling weight, for each vehicle in the study can be computed in the following way, which reflects the multistage sampling design:         


BW_ijkl = [P(PSU_i) × P(ZIP_j|i) × P(GS_k|ij) × P(vehicle_l|ijk)]^(-1)

where

BW_ijkl is the sampling weight for Vehicle l at Gas Station k in ZIP j in PSU i

P(PSU_i) is the probability of selecting PSU indexed i

P(ZIP_j|i) is the probability of selecting ZIP Code indexed j in PSU i

P(GS_k|ij) is the probability of selecting Gas Station indexed k in ZIP j in PSU i

P(vehicle_l|ijk) is the probability of selecting Vehicle l at Gas Station k in ZIP j in PSU i


An equivalent way of expressing the base weight is as the product of sampling weights for PSUs, ZIP Code areas, gas stations and vehicles, with each of these factors being the inverse of the probability of selection corresponding to one stage in the sample selection process.
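
In code, the base weight is simply the product of the stage-specific inverses. A minimal sketch, with hypothetical stage probabilities:

```python
def base_weight(p_psu, p_zip_given_psu, p_station_given_zip, p_vehicle_given_station):
    """Base (design) weight: inverse of the product of the stage selection probabilities."""
    weight = 1.0
    for p in (p_psu, p_zip_given_psu, p_station_given_zip, p_vehicle_given_station):
        weight /= p
    return weight

# Hypothetical example: small stage probabilities multiply to a large weight.
print(base_weight(0.05, 0.10, 0.20, 0.02))  # approximately 50,000
```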


As we are recording certain observable aspects of refusals, the vehicle base weight may incorporate a non-response adjustment at the vehicle stage if a non-response analysis indicates it (further details are given in B.3). The subsampled subgroups described above as the fifth stage will have an extra factor based on their subsampling probabilities, but the extra factor would apply only to their extended interview questions.


Registration totals by PSU, obtained by aggregating county- or ZIP-level registration data, will be available to serve as benchmarks. Because we are using equal sample sizes within PSUs, calibrating the base weights to known registration totals is expected to reduce the variability among final weights and thus help mitigate design effects and overly influential observations. For these reasons, we plan to post-stratify and calibrate the base weights using known vehicle type and model year registration counts within PSUs, using the quarterly snapshot of registrations closest to the overall survey period. The scaling factor for each cell (vehicle MY group by vehicle type by PSU) will be the ratio of the cell’s registration count to the sum of the cell’s base weights (excluding the PSU weight), so the differences in the base weights due to selection probabilities will be preserved.
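
A sketch of the cell-level calibration step; the record layout and field names are illustrative, and “cell” here denotes the MY group by vehicle type by PSU combination:

```python
from collections import defaultdict

def calibration_factors(sample, registration_totals):
    """Sketch of post-stratification scaling factors by calibration cell.

    sample: list of dicts with keys 'cell' and 'base_weight' (per the text, the
    within-PSU portion of the base weight).
    registration_totals: dict mapping each cell to its known registration count."""
    weight_sums = defaultdict(float)
    for record in sample:
        weight_sums[record["cell"]] += record["base_weight"]
    # Scaling factor per cell: registrations / sum of base weights in the cell,
    # preserving the relative differences among base weights within the cell.
    return {cell: registration_totals[cell] / weight_sums[cell]
            for cell in registration_totals if cell in weight_sums}

def calibrated_weights(sample, factors):
    return [record["base_weight"] * factors[record["cell"]] for record in sample]
```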


An issue with calibration to the PSU is that some vehicles in our sample may be “drive-through,” meaning not from the PSU. Drive-through should be essentially zero-sum nationally, since a driver appearing in a non-native PSU is correspondingly absent from his or her own PSU. To help mitigate the effect in our PSUs, we include a station viability requirement that the clientele be reasonably representative of the local area; we will also screen out out-of-state plates (which can be done visually, with no respondent burden), increasing the likelihood that the respondent is from the present PSU. If the respondent is not from the present PSU but is in-state, we will accept the respondent as a proxy for a PSU resident.5

In addition, national registration totals will be available for more accurate estimation at the national level enabling a final post-stratification step at that level, ensuring that weighted estimates sum to known population control totals available by model year.


Expected Sample Sizes and Precision


We will target approximately equal sample sizes in each PSU. Selecting samples of equal size from PSUs that are selected with PPS leads to samples with equal probabilities of selection (“epsem”), or self-weighting samples. These samples are statistically optimal in the sense of minimizing variances as they include no (or little) variance impacts due to unequal weighting effects.6


As noted above, per the supporting statement for OMB 2127-0706 (the CISS sample design), the CISS PSU design was optimized to ensure that the corresponding degree of accuracy under CISS will be at least as good as under its predecessor, CDS. An analysis of data from the TPMS Special Study, conducted in the 24 PSUs of the CDS design, estimated an average design effect (DEFF) of approximately 2.0; the minimum and maximum estimated DEFFs were 0.19 and 5.90, respectively. Because the new TPMS-ORRC sample using CISS PSUs is expected, based on the CISS design, to be at least as efficient as the TPMS Special Study under the CDS design, the DEFFs under the new TPMS-ORRC sample are expected to be no higher than, and likely lower than, those under the TPMS Special Study (i.e., an average value of about 2). We also expect that, by using equal sample sizes within PSUs and weight calibration with post-stratification, the DEFF component due to clustering will be similar to the previous study, while the DEFF component due to unequal weighting may be noticeably smaller.
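
The unequal-weighting component of the design effect can be gauged with Kish's approximation, 1 + CV(w)^2; the sketch below shows the standard formula, not the study's exact computation:

```python
def unequal_weighting_deff(weights):
    """Kish's approximate design effect due to unequal weights: 1 + CV(w)^2."""
    n = len(weights)
    mean_w = sum(weights) / n
    var_w = sum((w - mean_w) ** 2 for w in weights) / n
    return 1.0 + var_w / mean_w ** 2

# Nearly equal weights give a value near 1; highly variable weights inflate it.
print(round(unequal_weighting_deff([1.0, 1.1, 0.9, 1.05, 0.95]), 3))  # 1.005
```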


A key quantitative estimate from the study will be malfunction rates by vehicle age and type. Because of engineering aspects of TPMS, we expect most malfunctions to be in direct systems. We used national registration data to generate expected sample cell counts for direct TPMS vehicles by age and type, shown in Table B-2. Note that because only a subset of MY 2006-2007 vehicle models are known to be equipped with compliant TPMS and malfunction indicators, the cell counts for MY 2006 and MY 2007 are expected to be considerably smaller than other model years; we include them in our survey to get some measure of the malfunction rates in the oldest TPMS-compliant vehicles.7


Table B-2: Expected Sample Sizes by Model Year and Vehicle Type, Direct TPMS

Model Year | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | Total
Car | 23 | 80 | 298 | 237 | 260 | 244 | 312 | 332 | 301 | 298 | 299 | 2684
LTVa | 35 | 122 | 346 | 196 | 259 | 331 | 324 | 363 | 401 | 464 | 455 | 3286
Total | | | | | | | | | | | | 5980

aLight Trucks and Vans (LTV) includes pickup trucks, SUVs, CUVs, minivans and full-sized vans with GVWR of 10,000 pounds or less


Cells are collapsed into larger groups for planned analytic purposes in Table B-3.


Table B-3: Likely Analytical Groupings, Direct TPMS

MY Group | Car | LTVa | Total
2006-2008 | 401 | 502 | 2430 (2006-2011 combined)
2009-2011 | 742 | 786 |
2012-2014 | 946 | 1090 | 3550 (2012-2016 combined)
2015-2016 | 597 | 919 |
Total | 2684 | 3286 | 5980

aLight Trucks and Vans (LTV) includes pickup trucks, SUVs, CUVs, minivans and full-sized vans with GVWR of 10,000 pounds or less


Interest will lie in differences in malfunction rates by vehicle age and type. For expected rates of 0.10 or smaller, we can identify statistically significant differences among the groups with an alpha of 0.05 and power of 0.80. A difference-in-proportions test allows detection of differences of 2.5 percent for our effective row sample sizes of 1,215 and 1,775 (cell counts divided by the design effect).8 For the column totals, the effective sample sizes (1,342 and 1,643) allow a similar detectable difference. Differences under 2.5 percent may not be detectable as statistically significant, but also may not be meaningful in the context of this study. It is also possible that trends will be detectable across the individual model years or across groups such as those shown in Table B-2. In that case, the observed trend may be of note even if the groups are not found to be statistically significantly different from one another.
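
A sketch of the standard minimum-detectable-difference calculation for two independent proportions at a two-sided alpha of 0.05 and power of 0.80, applied to the effective sample sizes cited above; the exact detectable difference depends on the assumed base rate and test formulation:

```python
import math

def min_detectable_difference(p_base, n1, n2, z_alpha=1.96, z_beta=0.84):
    """Approximate minimum detectable difference between two proportions with
    effective sample sizes n1 and n2 and a common base rate p_base."""
    se = math.sqrt(p_base * (1 - p_base) * (1.0 / n1 + 1.0 / n2))
    return (z_alpha + z_beta) * se

# Effective row sample sizes from the text (cell counts divided by a design effect of 2).
for p in (0.05, 0.10):
    print(p, round(min_detectable_difference(p, 1215, 1775), 3))
```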


For the subgroups assigned in sampling stage 5, expected sample sizes and maximum standard errors for categorical proportion variables are shown in Table B-4.


Table B-4: Field Survey subgroups by analytical measures, derivation of subgroup size, expected subgroup size, and expected maximum standard errors for percentages

Subgroup | Analytical Measures | Derivation of Subgroup Size | Expected Subgroup Size | Expected Max. Standard Error of Percent
Subgroup 1: Malfunctioning TPMS | Knowledge of, and experience with, TPMS systems | 5-10 percent of direct TPMS | 315-630 | 2.82%
Subgroup 2a: Functioning Direct TPMS | Direct TPMS malfunction rate (with Subgroups 1 and 2b) | All respondents not in Subgroup 1, 2b, or 3 | 5045-5360 | 0.97%
Subgroup 2b: Functioning Direct TPMS – Extended Interview | Knowledge of, and experience with, TPMS systems | 350 by design | 350 | 3.78%
Subgroup 3: Functioning Indirect TPMS | Indirect TPMS miscalibration rate, and knowledge of calibration issues with indirect TPMS | 5-6 percent of fleet9 | 320 | 3.95%
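
The “Expected Max. Standard Error” column is consistent with the usual worst-case formula, the square root of DEFF × p(1 - p)/n at p = 0.5, with a design effect of about 2 and the upper end of the expected subgroup size; the sketch below reproduces the tabled figures under those assumptions, though it is not necessarily the exact computation used:

```python
import math

def max_standard_error(n, deff=2.0, p=0.5):
    """Worst-case standard error of an estimated percentage (p = 0.5),
    inflated by an assumed design effect."""
    return math.sqrt(deff * p * (1 - p) / n)

# Upper-end expected subgroup sizes from Table B-4.
for label, n in [("Subgroup 1", 630), ("Subgroup 2a", 5360),
                 ("Subgroup 2b", 350), ("Subgroup 3", 320)]:
    print(label, f"{max_standard_error(n):.2%}")  # 2.82%, 0.97%, 3.78%, 3.95%
```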


For Subgroup 3, the overall miscalibration rate of indirect TPMS is the primary quantitative interest, so model year differences, which would have very small cells, are not key analyses.


Suppliers Survey and Repair Facilities Survey

The TPMS-ORRC Suppliers Survey and Repair Facilities Survey were previously approved in OMB 2127-0626, have been conducted, and are therefore not repeated here.



B.2. Describe the procedures for the collection of information.


Data will be collected by teams of two data collectors, designated as the “interviewer” and the “inspector.” The team will identify and approach vehicles of model years 2006 or newer to assess the operational status of FMVSS-138-compliant direct TPMS or indirect TPMS.


The interviewer will approach the driver and solicit cooperation. The interviewer will administer a driver interview to each cooperating driver. Since our population of interest begins with model year 2006, field teams will not approach drivers whose vehicles appear to be obviously older than model year 2006; however, since model year can be difficult to judge visually, some pre-2006 vehicles may be approached.


The inspector will scan (or manually enter, if necessary) the VIN of each cooperating driver’s vehicle, and match it against NHTSA’s VIN decoders to determine eligibility. The inspector will then check for the operational status of each eligible vehicle’s TPMS.


NHTSA’s Vehicle Product Information Catalog (vPIC), an online VIN decoder, will be accessed by the survey instrument to identify vehicles. VIN decoders provide accurate identification of model year, make, and model. The vPIC database will also be programmed to identify the type of TPMS (i.e., indirect versus direct TPMS) for each vehicle. Any passenger car or SUV/light truck/van of model year 2008-2016 will be considered eligible (since TPMS is required at 100 percent). For MY 2006 and 2007 only FMVSS 138-compliant vehicles will be considered eligible. The VIN will also determine overall eligibility and applicability of Subgroup 3. For encounters that are not classified as Subgroup 3, the TPMS indicator dashboard inspection will determine Subgroup 1 or 2.


The team will collect a minimal set of information from all approached drivers and vehicles, including, for tracking purposes, those who refuse or are not eligible. Data obtained from the team will be entered via hand-held electronic devices (i.e., tablets) and sent to a FedRAMP-approved cloud data storage service. The tablets offer advantages over paper and pencil data collection methods by controlling survey skip logic that improves quality control and reduces survey administration burden and data processing time. The FedRAMP-approved environment reinforces data security.


At each PSU one supervisor will oversee two 2-person teams.  The CISS PSUs are relatively large, with travel time within some measured in hours.  Therefore, stations being worked concurrently (one by each 2-person team) will be constrained by sampling to be within 15 minutes driving time of each other, so that the supervisor can travel and exercise control. A given team will work a selected fuel station for a few days and move to another selected station at least once during the fielding period. There will be backup stations on the selected list, in case a given station withdraws or there are internet connectivity issues.


The field protocol will incorporate the following elements:


  • Each team’s two members will work together to identify and approach vehicles in the target range (screening for model years 2006-2016) to identify the TPMS type (direct vs. indirect) and assess the operational status of the TPMS system

  • At fuel stations, potential respondents will be approached before fueling has begun, so that, if participation is obtained, the team member performing the vehicle inspection (the inspector) can run ignition tests to determine the operational status of the TPMS before the gas cap is opened

  • Operational status of TPMS systems will be determined by examining the dash lights when the ignition is in both the LOCK and ON positions

  • Participating respondents will then be allowed to begin refueling their vehicle before the driver interviewer continues with the survey (to be completed, along with vehicle inspection, during refueling)

  • Teams will intercept vehicles roughly in the order they enter an area of the station, in order to collect data from a representative mix of different vehicle body types

  • Teams will operate at establishments during normal daytime hours (e.g., 8 a.m. to 5 p.m.) for safety considerations.

  • Teams will operate at each PSU for up to 2 weeks (includes training and time off), covering weekdays and weekends, with approximately 4 to 7 data collection days.

  • Teams will collect 15-minute traffic volume counts every hour at each station. These counts will be used to estimate vehicle population totals in support of weighting.

  • Teams will capture observational data from all vehicles that are approached, including those drivers who decline to participate and those who are screened out as ineligible.



B.3. Describe methods to maximize response rates and to deal with issues of non-response.


In the 2010-2011 TPMS Special Study, the final data set shows 6,516 drivers interviewed, 2,567 refusals, and 1,807 ineligible, for a total of 10,890 approaches. The refusal rate was thus 2,567/10,890, or about 24 percent (a 76 percent response rate). This is our best evidence-based estimate of the response rate for the new survey, but we will work to improve response rates and analyses as follows.


Several aspects of the data collection are designed to maximize the response rate. The contractor will provide training to the data collectors on all aspects of the use of the tablet software and the field procedures. Training will provide teams with study content knowledge and an approach to interact with the public. Candidate drivers will be approached in a non-threatening way, provided information about the survey, given handout materials on tire safety, and asked if they would be willing to participate in the study. The design of the survey using two-person data collection teams allows the extended interviews to be completed in approximately 12 to 15 minutes for each vehicle, with most interviews completed in about 8 minutes. Data obtained from the interviews will be entered via hand-held electronic devices (i.e., tablets) and sent to a FedRAMP-approved cloud data storage service, thereby reducing both the burden on the respondents and the time needed to process the data.


A Spanish language instrument was not developed due to budget limitations, but in at least some PSUs with a Spanish-language presence, the survey contractor will recruit bilingual temporary data collectors who can translate at the time of interview for Spanish-speaking respondents. We will also observe and record language status for all respondents and refusals to enable special estimates or non-response adjustments as needed, based on Spanish-speaking respondents with data recorded.


To encourage participation, we plan to offer participating drivers a check on open recalls for their vehicle via NHTSA’s SaferCar.gov web site. If there is no internet connection, we will distribute wallet cards with instructions on how to access the information.


Teams will collect a minimal amount of information from each driver and vehicle that is approached, including, for tracking purposes, those who refuse or are not eligible for the full interview. Analysis will compare refusals with participants on the observational variables, and, if shown necessary, weights may be adjusted for non-response. Variables to be collected observationally from all vehicles approached include vehicle body type and make, driver language spoken, driver age, driver sex, number of adult and child occupants, and presence of vehicle damage. We will use the residential ZIP Code information collected from all respondents to analyze malfunction rates by ZIP median income, as a proxy for socio-economic status; this ensures that, if enough respondents reside outside the sampled ZIPs to make a difference, we are not limited to analyzing by the median income of the sampled ZIP.


Data imputation is not planned for the final data set.



B.4. Describe any tests of procedures or methods to be undertaken.


Data collection forms and instructions have been developed by staff in NHTSA’s Office of Regulatory Analysis and Evaluation and ICF, the contracted vendor. The data collection forms are included as Attachment B in Part A.


Field Survey


Under the previous ICR, a pilot test was conducted in Suffolk County, NY, and additional test data were collected in Denver, CO. The Suffolk test found a response rate of about 85 percent, but the response rate in the Denver test was lower. Revisions have been made to the survey instrument since those pilots, and the indirect module (Subgroup 3) has been added. Therefore, a new pilot will be conducted. The new pilot will assess: whether the training provided was adequate in conveying to team members their responsibilities; how the survey procedures work in the field, including the Subgroup 3 procedures with tire pressure measurement; and whether the wording, question flow, formatting, and other characteristics of the draft forms work well. This pilot is expected to help refine procedures, forms, and cooperation for the start of the full study. Minor modifications to the data collection forms, as well as some changes in procedures, may result.



B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Survey Design: John Kindelberger, NHTSA, NSA-310, 202.366.4696

Sean Puckett, NHTSA, NSA-320, 202.366.8571

Ronaldo Iachan, ICF, 802.264.3726


Data Analysis: John Kindelberger, NHTSA, NSA-310, 202.366.4696

Sean Puckett, NHTSA, NSA-320, 202.366.8571


Data Collection

Contractor: ICF (802.264.3726)

Sub-Contractor (to ICF): KLD Associates, Inc.

1 Registered passenger vehicle counts includes content furnished under license to the U.S. Department of Transportation, and subject to restrictions on disclosure, by R.L. Polk & Co. Polk data are a foundation of IHS Markit automotive solutions; Copyright © R.L. Polk & Co./IHS Markit automotive solutions, 2017. All rights reserved.

2 Sivinski, R. (2012, November). Evaluation of the effectiveness of TPMS in proper tire pressure maintenance. (Report No. DOT HS 811 681). Washington, DC: National Highway Traffic Safety Administration.

3 Note that equal probability sampling will facilitate the computation of the selection probabilities for the second ZIP. If a ZIP lacks a feasibly near neighbor of other income group, then it will be replaced in the selection procedure.

4 If a sample ZIP does not have two suitable stations, then it will be replaced in the selection procedure.

5 Additionally, the PSU measures of size used in CISS have a heavy component (80 percent) based on crash counts, an indication of exposure that reflects true PSU traffic; therefore, PSU weights will help adjust for over-coverage or under-coverage due to drive-through.

6 Lehtonen, R. and Pahkinen, E. (2004). Practical Methods for Design and Analysis of Complex Surveys. Second Edition, John Wiley and Sons: Chichester, England.

7 In Tables B-2 through B-4, some sums are slightly affected by rounding.

9 Based on production and sales figures, we estimate that about 5-6 percent of passenger vehicles in model years 2009-2016 have indirect TPMS (no compliant indirect systems existed prior to model year 2009). Our expected sample size for years 2009-2016 is 5,980; thus we expect an indirect sample size of about 320.
