
National Survey of Principal Drivers of Vehicles with a Rear Seat Belt Reminder System

OMB: 2127-0696


TABLE OF CONTENTS


SUPPORTING STATEMENT


B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe the potential respondent universe and any sampling or other respondent selection method to be used

2. Describe the procedures for the collection of information

3. Describe methods to maximize response rates and to deal with issues of non-response

4. Describe any tests of procedures or methods to be undertaken

5. Provide the name and telephone number of individuals consulted on statistical aspects of the design


B. Collection of Information Employing Statistical Methods


The proposed study will employ statistical methods to analyze the information collected from respondents and draw inferences from the sample to the target population. It will use a list sample drawn from vehicle registration and sales lists with respondents being interviewed on landline phones and on cell phones. The following sections describe the procedures for respondent sampling and data tabulation.


B.1. Describe the potential respondent universe and any sampling or other respondent selection method to be used.


  1. Respondent universe


This data collection effort will target two separate groups of drivers in order to compare the results between the groups. The target population for the first group consists of all primary drivers of vehicles equipped with a rear Seat Belt Reminder System (SBRS) who are age 18 or older and reside within the United States. The target population for the second group consists of primary drivers, also age 18 or older and residing within the United States, of vehicles similar to those driven by the first group but NOT equipped with a rear SBRS. Selected respondents from both target groups must transport people 8 years old or older in the rear seat at least once a month. The rationale behind this target population is that we want respondents who have experienced the rear SBRS while avoiding those who transport children in car seats. Since the survey will be administered by telephone, the sampling universe will be persons who meet the above criteria and live in households with working telephones. Also, since interviews will be conducted in English only, any person who does not speak English will be excluded from this study. Based on the prevalence of the use of English in the United States, excluding respondents who speak a language other than English will have minimal impact on the study. In general population surveys that offer a version in Spanish, the second most prevalent language in the United States, fewer than 3% of interviews are conducted in Spanish. There is no reason to expect this survey effort will encounter more Spanish-speaking households than a general population survey would.


  2. Statistical sampling methods


Probability Sample of Principal Drivers of Vehicles with Rear SBRSs

The sampling frame for the first group will be vehicle registration lists from 36 states plus the District of Columbia, and sales lists from the 14 states that restrict the release of vehicle registration lists. Most Volvo models, the Chevrolet Volt, and the Cadillac SRX are the car models currently sold in the United States which have a rear SBRS. Given the low incidence of Americans who own a car with a rear SBRS, the sample will be drawn from vehicle registration lists in states which release this information and from sales lists in states which restrict it. Both of these lists will be obtained from R.L. Polk, which provides a telephone number and name for the registered owner of specific car models in the U.S. Abt SRBI, the survey contractor, is prepared to utilize these lists from R.L. Polk in order to produce a representative sample of principal drivers of vehicles with a rear SBRS. The sample will be drawn from a national list, stratified by NHTSA region, which will contain all records that R.L. Polk has on file. The sample will be allocated to each region in proportion to the total number of vehicle owners within that region. Weights will not be applied to the data because the study employs the proportional allocation method and utilizes a comprehensive list of vehicle owners within each region. Table 1 shows the total population of people within each region and, for illustrative purposes only, the final N size for each region assuming the universe of vehicle owners is commensurate with that of the total population. Sample sizes from the vendor for each NHTSA region are not yet available.



Table 1. Population by NHTSA Region

NHTSA        Total Pop     Pop. %   Rear SBRS N   Comparison N
Region 1      14,492,360     4.7%            93             23
Region 2      41,029,238    13.2%           263             66
Region 3      31,331,145    10.1%           201             50
Region 4      44,758,075    14.4%           287             72
Region 5      51,863,945    16.6%           333             83
Region 6      39,101,761    12.5%           251             63
Region 7      16,724,855     5.4%           107             27
Region 8      12,733,512     4.1%            82             20
Region 9      45,549,227    14.6%           292             73
Region 10     14,007,799     4.5%            90             22
TOTAL        311,591,917   100.0%         2,000            500
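For illustration only, the proportional allocation shown in Table 1 can be reproduced with a short calculation. This sketch assumes simple rounding of each region's proportional share; as the output shows, plain rounding can leave the total a case or two off the nominal n=2,000.

```python
# Proportional allocation of the rear SBRS sample (n=2,000) across NHTSA
# regions, using the Table 1 populations. Illustrative sketch only.
populations = {
    "Region 1": 14_492_360, "Region 2": 41_029_238, "Region 3": 31_331_145,
    "Region 4": 44_758_075, "Region 5": 51_863_945, "Region 6": 39_101_761,
    "Region 7": 16_724_855, "Region 8": 12_733_512, "Region 9": 45_549_227,
    "Region 10": 14_007_799,
}
total = sum(populations.values())           # 311,591,917
n_sbrs = 2000
allocation = {r: round(n_sbrs * p / total) for r, p in populations.items()}

print(allocation["Region 1"])       # 93, matching Table 1
print(sum(allocation.values()))     # 1999: simple rounding falls one short
```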



Once selected, the sample will be divided into replicates in order to ensure each sample release is based on national proportions of drivers. Using this list sample, we will be able to target only those households who own this type of vehicle and greatly increase the efficiency of the data collection effort.

The list contains both landline and cell phone numbers. In order to comply with the federal law prohibiting the use of automatic dialers for cell phone numbers, Abt SRBI will run the entire list through the NuStar database in order to determine which phone numbers are cell phone numbers. Cell phone numbers are flagged in the CATI system and diverted to a special queue where the interviewer must dial the number by hand in order to comply with federal law. Phone numbers that are not flagged are treated as landline numbers.
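The routing step described above can be sketched as follows. This is an illustration only: the real workflow uses the vendor database lookup, which is stubbed here as a hypothetical set of known cell numbers.

```python
# Hypothetical sketch of routing sampled numbers by phone type.
# The vendor lookup is replaced by an invented example set.
KNOWN_CELL_NUMBERS = {"2025550123"}          # invented example data

def dial_queue(number: str) -> str:
    """Flagged cell numbers go to a hand-dial queue; all others to the dialer."""
    return "hand_dial" if number in KNOWN_CELL_NUMBERS else "auto_dialer"

print(dial_queue("2025550123"))  # hand_dial
print(dial_queue("2025550199"))  # auto_dialer
```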

The calculated screening incidence for the survey of principal drivers of vehicles with rear SBRS who regularly transport passengers 8 years old or older was 19.2%. The incidence was determined based on the following assumptions:

  • “Regularly” means transporting passengers 8 years old or older at least once a month

  • Households with children are most likely the households which will regularly transport passengers in the rear seat

  • According to the 2000 Census, 32.8% of American households contained children under 18 years old.

  • Assuming a normal distribution for the age of children, of those households with children under 18 years of age, 58.8% have children 8 years old or older

  • Thus we get an overall prevalence of 19.2% when we take 58.8% of 32.8%.
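The incidence arithmetic above can be written out directly (the two input percentages are those stated in the bullets):

```python
# Screening incidence: share of households with children (2000 Census)
# times the assumed share of those households with a child 8 or older.
pct_households_with_children = 0.328
pct_of_those_with_child_8_plus = 0.588
incidence = pct_households_with_children * pct_of_those_with_child_8_plus
print(round(incidence, 4))  # 0.1929, i.e. the ~19.2% screening incidence
```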

Once contact with the household is made, we first ask to speak to the person whose name is on the vehicle registration list. When this person comes to the phone, we confirm that he or she is a driver who is at least 18 years old and that he or she owns the vehicle (specific make and model) listed in our records. Once this information is confirmed, the respondent is asked whether he or she transports passengers in the rear seat at least once a month. If the respondent answers in the affirmative, we proceed with the interview. If the respondent answers "no," we then ask whether anyone else in the household transports passengers in the rear seat of the specific vehicle at least once a month. If no one in the household transports passengers in this way, the case is screened out.
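The screening flow just described can be summarized as a small decision function. All field and function names below are invented for illustration; they are not part of the CATI specification.

```python
# Sketch of the household screener logic (names are hypothetical).
def screen(respondent: dict) -> str:
    # Confirm listed owner: a driver, 18+, who owns the listed make/model.
    if not (respondent["is_driver"] and respondent["age"] >= 18
            and respondent["owns_listed_vehicle"]):
        return "ineligible"
    # Transports rear-seat passengers at least monthly?
    if respondent["rear_passengers_monthly"]:
        return "proceed"
    # Otherwise, does anyone else in the household?
    if respondent["other_household_driver_rear_monthly"]:
        return "interview other driver"
    return "screened out"

print(screen({"is_driver": True, "age": 42, "owns_listed_vehicle": True,
              "rear_passengers_monthly": False,
              "other_household_driver_rear_monthly": False}))  # screened out
```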


Probability Sample of Principal Drivers of Vehicles without Rear SBRSs

For the comparison group, the sampling frame will be similar to the one used for the test group. Abt SRBI will utilize a list sample obtained from R.L. Polk. This list will contain owners of equivalent vehicle classes WITHOUT a rear SBRS, but with a similar front SBRS (not a more enhanced or aggressive SBRS, preferably the same vehicles before the introduction of a rear SBRS). Given that the same eligibility criteria apply to both samples, the same assumptions we used for the test sample will also be used for the comparison sample. The sample will be stratified and drawn according to the specifications for the test group.

Again, we calculated the incidence for the survey of principal drivers of vehicles WITHOUT a rear SBRS who regularly transport passengers 8 years old or older to be 19.2%. Because the same eligibility criteria apply, this estimate rests on the same assumptions listed above for the test sample.

Once contact is made with the household, the screening process is identical to the process described for Principal Drivers of Vehicles with a Rear SBRS.

The samples selected in both the test and comparison groups are self-weighting because the total sample is allocated to each stratum in proportion to the population size in that stratum. After data collection, the number of respondents in each stratum may differ slightly from the planned sample size. We will take both the original allocation and the number of respondents into account when computing the variance of the estimates.


Power Analysis

Abt SRBI conducted a power analysis to determine the level of statistical power that could be expected from the sample design. The sample design features n=2,000 completed interviews with the rear SBRS group and n=500 completed interviews with the comparison group.

Figure 1 plots the statistical power to detect a difference in population percentages of interest between the rear SBRS group and the comparison group. The power curve is represented by the blue line. The x-axis displays the size of difference between the group estimates. The y-axis displays the statistical power for the difference in proportions test (1-sided Pearson Chi-Square with alpha = .05) provided by the design. Larger differences are easier to detect than smaller differences, which is why the power curve rises as one moves from left to right in the figure. Very large differences (e.g., 10 percentage points or more) would be detected with near certainty (> 99% power).



We added a horizontal line to Figure 1 that represents 80% power, which is the most common power level to consider when determining survey sample sizes. The figure shows that the sample design would be expected to detect a difference of 6.2 percentage points with 80% power. This means that there is an 80% chance that a statistical test based on the proposed sample sizes will reject the hypothesis of no difference between two groups when there is a difference as large as 6.2 percentage points in population percentages of interest. The power provided by the sample design is quite high.

It is important to bear in mind that the size of the detectable difference under a given design is a function of the reference proportion assumed. The figures presented in this power analysis are based on a reference proportion of 40%. If a more extreme reference proportion were used (e.g., 5% or 95%), the sample design could detect smaller group differences with the same level of power (80%). By contrast, if a reference proportion closer to 50% were used, the detectable differences at 80% power would be larger. In essence, we are being conservative in assuming a proportion close to 50% rather than a more extreme proportion, which would allow smaller differences between the two groups to be detected.
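The power figures above can be checked with a standard normal-approximation calculation for a one-sided two-proportion test. This is a sketch, not the contractor's exact computation: it uses a z-approximation (asymptotically equivalent to the 1-sided Pearson chi-square) with the stated design of n=2,000 and n=500, a 40% reference proportion, and a 6.2-point difference.

```python
from statistics import NormalDist

def power_two_prop(p1: float, delta: float, n1: int, n2: int,
                   alpha: float = 0.05) -> float:
    """Approximate power of a one-sided two-proportion z-test."""
    nd = NormalDist()
    p2 = p1 + delta
    p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)
    se0 = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5  # SE under H0
    se1 = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5    # SE under H1
    z_alpha = nd.inv_cdf(1 - alpha)
    return nd.cdf((delta - z_alpha * se0) / se1)

# Design from the text: reference proportion 40%, 6.2-point difference.
print(round(power_two_prop(0.40, 0.062, 2000, 500), 2))
# ≈ 0.81, consistent with the ~80% power reported for a 6.2-point difference
```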

The statistical test for the difference between groups will be based on standard errors in the selected sample in each group, taking the sampling design into account.

Two unequal sample sizes are being employed because the study has two separate objectives. The first objective is the comparison between those who currently have a rear SBRS in their vehicle and those who do not. The second objective is to determine which characteristics of the rear SBRS are useful or not useful to those who have them in their car. Respondents who have a rear SBRS will receive a longer questionnaire with more questions regarding use of and attitudes toward the system. This asymmetrical sample design allows us to achieve both objectives while keeping respondent burden to a minimum.


B.2. Describe the procedures for the collection of information.


  1. Call strategy

Interviewing will be conducted according to a schedule designed to facilitate successful contact with sampled households. Initial telephone contact will be attempted during the hours of the day and days of the week that have the greatest probability of respondent contact, based on the past experience of the survey contractor. This means that the primary interviewing period will be between 5:30 p.m. and 10:00 p.m. on weekdays; between 9:00 a.m. and 10:00 p.m. on Saturdays; and between 12:00 noon and 10:00 p.m. on Sundays. Because interviewing will be conducted across several time zones, the interviewing shift extends until 2:00 a.m. Since interviews are expected to average about fifteen minutes in length, the last contact attempt will be made no later than 9:30 p.m. local time unless the respondent schedules a later time.

Interviewers will attempt up to 20 calls to each landline telephone number. This shall include 10 call attempts to ringing but unanswered numbers before the number is classified as a permanent “No Answer” and a total of 20 calls if contact is made with someone in the household on one of the first 10 attempts.

The cell phone calling protocol is more abbreviated than the landline calling protocol due to negative reactions to repeated attempts on cell phones. Interviewers will attempt up to 10 calls to each cell phone number. This will include 5 call attempts to ringing but unanswered numbers before the number is classified as a permanent “No Answer” and a total of 10 calls if contact is made with someone in the household on one of the first 5 attempts.
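The attempt caps in the two paragraphs above can be summarized in one function. This is a simplified sketch: it assumes "contact" means any human answer within the initial attempt window, and collapses the protocol to two caps per number type.

```python
# Sketch of the call-attempt caps for landline vs. cell numbers.
def max_attempts(is_cell: bool, contact_made_early: bool) -> int:
    """Return the dial cap for a number under the stated protocol."""
    no_answer_cap = 5 if is_cell else 10   # permanent "No Answer" threshold
    total_cap = 10 if is_cell else 20      # cap once contact has been made
    return total_cap if contact_made_early else no_answer_cap

print(max_attempts(is_cell=False, contact_made_early=True))   # 20
print(max_attempts(is_cell=True, contact_made_early=False))   # 5
```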


  2. Protocol for answering machines and voice mail


When contact is made with an answering machine or voice mail, a message will be left according to a set protocol. For landline numbers, a message will be left on the 5th, 7th, and 9th contact attempts if an answering machine is reached. The message will explain that the respondent has been selected as part of a national USDOT study, ask that the Contractor's toll-free number be called to schedule an interview, and refer the respondent to the NHTSA web site, which will include information about the survey so that prospective respondents can verify its legitimacy.


For cell phone numbers, a similar message will be left on the 2nd attempt so that the cell phone user can quickly attach an identity to the number s/he is seeing on the phone. A message is left on voice mail for cell phone respondents early in the process since caller ID for cell phones will not display caller information unless the caller is already in the personal contact list of the user.
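The message schedule above reduces to a simple predicate (a sketch of the stated protocol, nothing more):

```python
# Voice-mail message schedule: landlines on attempts 5, 7, 9; cells on attempt 2.
def should_leave_message(is_cell: bool, attempt: int) -> bool:
    return attempt == 2 if is_cell else attempt in (5, 7, 9)

print(should_leave_message(is_cell=False, attempt=7))  # True
print(should_leave_message(is_cell=True, attempt=3))   # False
```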


  3. Initiating the interview


The initial contact with the designated respondent is crucial to the success of the project. Most refusals take place before the interviewer has even completed the survey introduction (usually within the first 30 seconds of the call). While the brief introduction to the study concerning its sponsorship, purpose, and conditions is sufficient for many respondents, others will have questions and concerns that the interviewer must address. A respondent's questions should be answered clearly and simply.

For the cell phone sample, respondents will first be asked if they are in a situation where it would be unsafe to speak with the interviewer. If the respondent says “Yes,” then the interviewer will say that he or she will call back at another time, and immediately terminate the call. Once a cell phone user is reached at a safe time, the interviewer will first screen for age eligibility. If the cell phone user is eligible to participate, then the interviewer will seek to proceed with the interview. If it is an inconvenient time for the respondent, then the interviewer will try to set up an appointment.


  4. Interviewer monitoring


Each interviewer will be monitored throughout the course of the project. The Monitor will evaluate the interviewer on his or her performance and discuss any problems that an interviewer is having with the Shift Supervisor. Before the end of the interview shift, the Monitor and/or Shift Supervisor will discuss the evaluation with the interviewer. If the interviewer cannot meet the Contractor’s standards, he or she will be removed from the project.


All interviewers on the project will undergo two types of monitoring. The Study Monitor will sit at a computer where s/he can see what the interviewer has recorded, while also audio-monitoring the interview. The audio-monitoring allows the Supervisor to determine the quality of the interviewer's performance in terms of:


  1) Initial contact and recruitment procedures;

  2) Reading the questions, fully and completely, as written;

  3) Reading response categories fully and completely (or not reading them) according to study specifications;

  4) Whether or not open-ended questions are properly probed;

  5) Whether or not ambiguous or confused responses are clarified;

  6) How well questions from the respondent are handled without alienating the respondent;

  7) Avoiding bias by either comments or vocal inflection;

  8) Ability to persuade wavering, disinterested or hostile respondents to continue the interview; and,

  9) General professional conduct throughout the interview.


The Supervisor will also monitor the interviewer's recording of survey responses as the Supervisor's screen emulates the interviewer's screen. Consequently, the Supervisor can see whether the interviewer enters the correct code, number or verbatim response.


B.3. Describe methods to maximize response rates and to deal with issues of non-response.

High response rates are achieved by implementing several project management procedures such as carefully reviewing problem cases; routine documenting and review of cases by the Project Manager and Director; a 20-call protocol; and varying callback patterns to reach respondents at different times of the day or week.

The CATI program records the nature and reason for refusals and interview terminations, which become part of the permanent survey file. This provides us with a systematic record of the exact point in the interview that the refusal or termination occurred; the circumstances and reasons, if given, for the refusal or break-off; and the position of the person refusing/terminating, if known.


  1. Initial contact


The initial contact with the designated respondent is crucial to the success of the project. Most refusals take place before the interviewer has even completed the survey introduction. Numerous studies have shown that an interviewer's manner of approach at the time of first contact is the single most important factor in convincing a respondent to participate in a survey. Many respondents react more to the interviewer, and the rapport that is established between them, than to the subject of the interview or the questions asked. This positive first impression of the interviewer is key to securing the interview.


While the brief introduction to the study concerning its sponsorship, purpose, and conditions is sufficient for many respondents, others will have questions and concerns that the interviewer must address. A respondent's questions should be answered clearly and simply. The interviewers assigned to this study will be trained on how to answer the most likely questions for this survey. A hard copy of Frequently Asked Questions will be provided to each interviewer at his or her station. The interviewers also will be trained to answer all questions in an open, positive and confident manner, so that respondents are convinced of the value and legitimacy of the study. If respondents appear reluctant or uncertain, the interviewer will provide them with a toll-free number to call and the NHTSA website to verify the authenticity of the survey. Above all, the interviewer will attempt to create a rapport with the study subject and anyone in the household contacted in the process of securing the interview.


  2. Refusal documentation and tracking


Higher response rates can be achieved through procedures built on careful documentation of refusal cases. Whether the initial refusal is made by the selected respondent, a third party, or an unidentified voice on the other end of the phone, the CATI system used for this survey will be programmed to move the interviewer to a non-interview report section. The interviewer will ask the person refusing to tell him/her the main reason that he or she does not want to do the interview. The interviewer will enter the answer, if any, and any details related to the refusal into the program to assist in understanding the strength of and the reason for the refusal. This information, which will be entered verbatim in a specific field in the CATI program, will be part of the formal survey reporting record, and will be reviewed on a daily basis by the Field Manager and, on a weekly basis, by the Project Director. Based on the pattern and distribution of refusals, a strategy for refusal conversion will be formalized.


The Survey Manager and the Project Director will review the information regarding refusals and terminations on the CATI system on an ongoing basis to identify any problems with the contact script, questionnaire or interviewing procedures that might contribute to non-participation. For example, they will scrutinize the distribution of refusals by time of contact, as well as any comments made by the respondent, in order to determine whether calling too early or too late in the day is contributing to non-participation. Also, the refusal rate by interviewer will be closely monitored.


In addition to relying on the CATI data records, the Project Director and Survey Manager will also consult with the interviewing Shift Supervisor, who has monitored the interviewing and debriefed the interviewers. The information from these multiple sources provides solid documentation of the nature and sources of non-response. It will also help to generate the "conversion script” which will be developed by the Project Director.


  3. Refusal conversion


Abt SRBI will implement a refusal conversion protocol. Abt SRBI plans to utilize a process in which each person selected for the sample who refuses to participate in the survey will be re-contacted approximately one to two weeks after the initial refusal. It is important to have a "cooling off" period before recontacting the household.

Abt SRBI shall attempt to convince initial refusals to reconsider and participate in the survey. Abt SRBI shall exempt persons from the refusal conversions process only if the nature or conditions surrounding the original refusal indicates that refusal conversion would be inappropriate.

The actual process of converting terminations and refusals, once they have occurred, involves the following steps. First, there is a diagnostic period, when refusals and terminations are reported on a daily basis and the Project Director and Survey Manager review them after each shift to see if anything unusual is occurring. Second, after enough time has passed to see a large enough sample of refusals and terminations, the Project Director and his staff work out a refusal conversion script. Third, the refusal conversion effort is fielded with re-interview attempts scheduled about a week after the initial refusal. Fourth, the Project Director and Operations Manager will receive the outcomes of the refusal conversion efforts on a daily basis. Revisions of the script or the procedures would be made, if needed, based on the ongoing results of the conversion effort.


Past experience suggests that about 20% of initial refusals will reconsider. Success in refusal conversion tends to depend more heavily on the difficulty of the survey and the study population, i.e., the hardness of the initial refusals, than on the level of effort devoted to refusal conversions. Nonetheless, sensitivity in diagnosing the reasons for non-participation and creativity in responding to those problems can have a real effect on refusal conversion.


  4. NHTSA web site information


It is not uncommon for behavioral self-report surveys such as this one to generate incoming calls or emails to the U.S. Department of Transportation from people who have been contacted by the survey. In a large percentage of cases, the person is simply checking the survey's legitimacy. People are wary of attempts to extract information from them, wondering whether the caller is truthful about the source of the survey and the use of their information.


To help allay these fears, NHTSA will place on its web site information that prospective respondents can access to verify the survey’s legitimacy. The interviewers will provide concerned respondents with the web address for NHTSA’s home page. There will be a link on the home page that will direct the respondents to information on the source of the survey and why it is important for them to participate. A toll-free number to reach the survey Contractor will also be provided so they can schedule an interview.


  5. Non-response bias analysis


A non-response bias study will be conducted for the survey. Non-response is the failure to obtain survey measures on all sampled individuals. There are two main potential consequences from non-response: (1) non-response bias and (2) underestimation of the standard errors. The main challenge posed by non-response is that without a separate study (i.e., a non-response bias analysis), one can never be sure of whether the non-response has introduced bias or affected the estimation of the standard errors. The mere presence of non-response, even high non-response, does not necessarily imply bias or non-response error.


The non-response bias study will entail the use of auxiliary data purchased from Survey Sampling Inc. (SSI) and available at the case level. Because the target group is so specific (owners of certain makes and models of vehicle), comparing results from the survey to national population estimates is not an appropriate way to determine non-response bias. The best available means to measure non-response bias is to append auxiliary data to each sampled telephone number and then compare the auxiliary data from those who completed the survey, those who screened out, and those who did not complete the survey due to nonresponse or refusal. Auxiliary data will not be available for all phone numbers; however, where it is available, these data have been shown to be quite accurate. The auxiliary data will contain information on adults in household (age and gender), children in household (age and gender), household income, highest education obtained by a member of the household, race and ethnicity, marital status, and housing tenure (own/rent).


These variables will be checked against the available demographic variables in the questionnaire for the completed cases in order to get a sense of the accuracy. The completed cases will then be compared to the non-completes and the refusals. Given the sampling frame and available data this method will yield the best non-response bias analysis possible.
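One way the planned comparison could be carried out is a difference-in-proportions test on an auxiliary variable across disposition groups. The sketch below is illustrative only: the counts are invented, housing tenure stands in for any of the auxiliary variables, and a normal-approximation z-test is assumed rather than the contractor's actual analysis method.

```python
from statistics import NormalDist

# Hypothetical counts from the appended auxiliary data, by disposition.
completes = {"own": 1400, "rent": 600}   # invented example data
refusals  = {"own": 300,  "rent": 200}   # invented example data

def prop_diff_z(a_yes: int, a_n: int, b_yes: int, b_n: int):
    """Two-sided z-test for a difference in proportions between two groups."""
    p1, p2 = a_yes / a_n, b_yes / b_n
    p = (a_yes + b_yes) / (a_n + b_n)                # pooled proportion
    se = (p * (1 - p) * (1 / a_n + 1 / b_n)) ** 0.5
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))     # z statistic, p-value

z, pval = prop_diff_z(completes["own"], 2000, refusals["own"], 500)
print(round(z, 2), round(pval, 4))  # a small p-value would flag potential bias
```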



B.4. Describe any tests of procedures or methods to be undertaken.


  1. Testing the programming


The project team will test the CATI program thoroughly in test mode – running the interviewing program through multiple loops. The analytical staff will attempt to test all possible response categories for each question in order to identify embedded logic errors, as well as obvious skip problems. Several analysts may test the program simultaneously to identify problems quickly, and to double check the comprehensiveness of the testing.


After the program has been tested by reviewing the questionnaire flow and skip patterns on screen, an auto-pilot dataset will be created. An auto-pilot consists of randomly generated CATI data used for testing a CATI script. The computer will generate data for 1,000 completed cases. This allows an analyst to pull down the data and run QC checks within SPSS in order to ensure skips based on multiple questions are working to specification. Other features of the questionnaire, which are difficult to test by manually entering responses on screen, are also reviewed using the auto-pilot data.
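The auto-pilot idea can be illustrated with a toy example: generate random answers for many simulated cases, then assert that the skip logic held across all of them. The question names and skip rule below are invented for illustration and are not part of the actual instrument.

```python
import random

# Toy "auto-pilot": random responses through a hypothetical two-question skip.
def simulate_case(rng: random.Random) -> dict:
    case = {"q1_transports_rear": rng.choice(["yes", "no"])}
    if case["q1_transports_rear"] == "yes":       # skip rule under test
        case["q2_frequency"] = rng.choice(["weekly", "monthly"])
    return case

rng = random.Random(7)                            # seeded for reproducibility
cases = [simulate_case(rng) for _ in range(1000)]

# QC check in the spirit of the SPSS review: q2 present iff q1 == "yes".
assert all(("q2_frequency" in c) == (c["q1_transports_rear"] == "yes")
           for c in cases)
print(len(cases))  # 1000
```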


  2. Survey pretest


The draft survey instrument has already been programmed in CATI and a pretest with nine participants has been conducted. The pretest allowed the survey contractor to test all of the survey systems, the survey instrument, and the CATI programming, and to ensure that everything was working according to the specifications. The pretest was conducted under conditions identical to the planned conditions for the main data collection effort. The questionnaire has been revised based on the results of the pretest for clarity of questions, improvement of the flow (order) of the questionnaire, and to reduce the average administration time to 15 minutes (the pretest instrument averaged 17 minutes). The pretest did not reveal any issues with item non-response, the programming of skips, or question fills from the sample.


Immediately following the interviews, the interviewers participated in a debriefing session to report any problems or observations they had regarding the instrument to the field manager. The field manager consolidated the observations and suggestions, along with his own recommendations, and sent them to the Project Director. Interview timings were provided as well. The completed questionnaires were analyzed, along with the extensive comments on the content and administrative aspects of the instruments made by the interviewers, monitors and research staff observing the pretest. Revisions to procedures, CATI programming, and the survey instrument were made as appropriate.

B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


The following individuals have reviewed technical and statistical aspects of procedures that will be used to conduct the survey:


Paul Schroeder, MA

Vice President

Abt SRBI, Inc.

8405 Colesville Road, Suite 300

Silver Spring, MD 20910

(301) 608-3883


K.P. Srinath, Ph.D.

Senior Vice President

Abt SRBI, Inc.

8405 Colesville Road, Suite 300

Silver Spring, MD 20910

(301) 608-3883



