OMB: 2127-0726

SECTION B

INFORMATION COLLECTION

SUPPORTING STATEMENT


Awareness and Availability of Child Passenger Safety Information Resources


B) Collections of Information Employing Statistical Methods

The proposed Awareness & Availability of Child Passenger Safety Information Resources (AACPSIR) survey will employ statistical methods to sample and analyze the information collected from respondents. The following sections describe the procedures for respondent sampling and data tabulation. The AACPSIR survey will be conducted using a mail-out invitation (Appendix A) to complete a web-based screener and, if appropriate, a survey (Forms 1333 and 1334). Research has shown that an advance invitation letter sent by postal mail is more successful than email or postcard invitations at eliciting participation in web surveys. The proposed AACPSIR survey will be administered using an Address-Based Sample (ABS) and will offer multiple modes of responding: the primary response mode will be web, with telephone as an alternative. The survey effort includes cognitive and usability testing prior to the full administration of the survey.


B.1. Describe the potential respondent universe and any sampling or other respondent selection method to be used.


a. Respondent Universe


Under this proposed data collection, the potential respondent universe would be people aged 18 years or older who regularly transport children between the ages of 0 and 9 in their personal vehicles. Respondents may include parents, grandparents, and other child care providers. The screener (Form 1333) will verify that the respondent drives with a child in a personal vehicle at least twice a month.


b. Respondent Sampling and Estimated Response Rates


To reach this target population, a cluster design will be used. The cluster design first selects primary sampling units (PSUs), which are clusters of households, and then selects households from the sampled PSUs. We will select a nearly equal probability sample of households, with an adjusted sample size from each sampled PSU (as described below), from a database of US addresses provided by MSG, a vendor that maintains a sampling frame of current US addresses. Respondents within a household will not be randomly selected. Rather, the screener will ask the member of the household who most frequently drives children to complete the survey. This serves as a proxy for identifying the most knowledgeable or qualified member of the household to be the survey respondent.


The National Survey of the Use of Booster Seats (NSUBS) 2015 design structure will be used to select PSUs. The NSUBS PSU sampling frame contains 1,601 PSUs, formed from single counties or groups of counties. NSUBS selects PSUs by the probability proportional to size (PPS) method, using the number of children age 0-7 as the measure of size (MOS) based on 2012 population data. We will use the same sampling method and MOS for AACPSIR. One reason for using the NSUBS PSUs to select the AACPSIR sample is to link the two surveys for analysis purposes. This linkage will enhance the analysis by enabling results that would not be feasible under a different sampling approach. NSUBS collects data on restraint use by children age 0-12 and adult passengers in the observed vehicles, along with their demographic information. To maximize the power of the linked analysis, all 30 NSUBS PSUs will be included in the PSU sample for the AACPSIR survey. An additional 30 PSUs will be selected, for a total of 60 PSUs, to increase the sample size for broader (non-linked) analysis. The increase in the PSU sample size is recommended because the inspection stations, an important component of AACPSIR, are sparsely scattered across the country and therefore require a larger sample base.


There are numerous advantages to using the NSUBS PSU frame and the subsequent linked analysis for the AACPSIR survey. First, it saves the government the cost of developing a new sampling structure, including a sampling frame, for the survey. The contractor for this study, Westat, developed the NSUBS PSU frame and can easily repurpose it for this survey. Second, investigations into other measures of size revealed that the MOS defined by the number of children age 0-7 in the NSUBS PSU frame closely reflects the ideal MOS for the AACPSIR survey (the number of households with children age 0-9); the two measures are highly correlated, with a correlation near 1. Furthermore, the NSUBS frame is already stratified by belt use law and region, which is also desirable for the AACPSIR survey. Finally, the linkage will allow the data from this survey to be enriched with community-level data such as observed restraint use, the rate of incorrect CRS use based on children's weight, height, and age, drivers' restraint use rates, and demographics. This NSUBS information can provide useful auxiliary variables in the analysis of the AACPSIR survey data.


As noted, the plan calls for a cluster design with 60 PSUs, of which one half comes from the PSUs selected for NSUBS and the remaining half from the rest of the NSUBS frame. We will select the 30 additional PSUs using the same NSUBS stratification; NSUBS uses 8 strata defined by the 4 Census Regions crossed with 2 child restraint use law statuses (stricter vs. less strict).1 For NSUBS, the state law is believed to affect children's restraint use, and thus it was used as a stratification variable. This aspect is also relevant for AACPSIR. The distribution of the NSUBS PSUs across the 8 strata is shown in Table 2, along with the sample allocation, proportional to the total stratum MOS, for both NSUBS and AACPSIR. Where the allocated PSU sample size for AACPSIR was less than 2, it was increased to 2 to facilitate variance estimation. The total MOS in the table represents the total population MOS (i.e., the total population of children age 0-7 in the US excluding HI and AK), which was used to calculate the stratum proportions of MOS for proportional sample allocation. The eight strata are indexed by $h$, and the stratum PSU population and sample sizes are denoted by $N_h$ and $n_h$, respectively.


Table 2. PSU sample allocation over the strata for NSUBS and AACPSIR


| Stratum Number ($h$) | Census Region | Law Status | Number of PSUs ($N_h$) | Total Stratum MOS ($M_h$) | NSUBS PSU Sample Size ($n_h^*$) | AACPSIR PSU Sample Size ($n_h$) |
|---|---|---|---|---|---|---|
| 1 | 1 | Yes | 160 | 4,730,497 | 4 | 8 |
| 2 | 1 | No | 17 | 432,320 | 1 | 2 |
| 3 | 2 | Yes | 388 | 6,169,676 | 6 | 11 |
| 4 | 2 | No | 81 | 698,662 | 1 | 2 |
| 5 | 3 | Yes | 417 | 7,611,901 | 7 | 14 |
| 6 | 3 | No | 325 | 4,726,094 | 4 | 9 |
| 7 | 4 | Yes | 158 | 6,906,757 | 6 | 12 |
| 8 | 4 | No | 55 | 814,871 | 1 | 2 |
| Total |  |  | 1,601 | 32,090,778 | 30 | 60 |


For the AACPSIR, the desired PPS probability of selection for PSU $i$ in stratum $h$ is defined by:

$$\pi_{hi} = n_h \frac{M_{hi}}{M_h}, \qquad M_h = \sum_{i=1}^{N_h} M_{hi}, \qquad (1)$$

where $M_{hi}$ is the MOS for PSU $i$ in stratum $h$. Notation superscripted by an asterisk (*) is used to denote similar notation for NSUBS. For example, $\pi_{hi}^*$ is the PSU selection probability for PSU $i$ in stratum $h$ for NSUBS.

To include the 30 NSUBS sampled PSUs in the AACPSIR sample through the AACPSIR sampling procedure, we use a probability conditional on whether a PSU was selected in the NSUBS sample or not. If a PSU was selected in the NSUBS sample, then we select it conditionally with a probability of one, and we select the remaining 30 PSUs from outside of the NSUBS sample with a modified (conditional) probability as follows:

$$\tilde{\pi}_{hi} = \frac{\pi_{hi} - \pi_{hi}^*}{1 - \pi_{hi}^*}, \qquad i \notin s^*, \qquad (2)$$

where $s^*$ is the NSUBS sample of 30 PSUs.

The unconditional probability that PSU $i$ in stratum $h$ is selected in the AACPSIR sample is then given by

$$\Pr(i \in s) = \pi_{hi}^* \cdot 1 + (1 - \pi_{hi}^*)\,\tilde{\pi}_{hi} = \pi_{hi}, \qquad (3)$$

where $s$ is the sample of 60 PSUs for the AACPSIR. Note that the unconditional probability is the same as the desired probability defined in (1). We may consider this procedure a special case of the Keyfitz procedure (Keyfitz, 1951).2
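
As an illustration of this conditional selection, the following minimal sketch (with hypothetical MOS values, sample sizes, and NSUBS selections, not production sampling code) computes the desired PPS probabilities in (1), the conditional probabilities in (2), and verifies the identity in (3):

```python
# Illustrative sketch of the conditional (Keyfitz-style) selection; the MOS
# values, sample sizes, and NSUBS selections below are all hypothetical.

def pps_probability(mos, n_h):
    """Desired PPS inclusion probabilities: pi_hi = n_h * M_hi / M_h."""
    total = sum(mos)
    return [n_h * m / total for m in mos]

def keyfitz_conditional(pi, pi_star, in_nsubs):
    """Probability 1 for PSUs in the NSUBS sample s*; otherwise
    (pi - pi*) / (1 - pi*), as in equation (2)."""
    return [1.0 if sel else (p - ps) / (1.0 - ps)
            for p, ps, sel in zip(pi, pi_star, in_nsubs)]

# Hypothetical stratum of 5 PSUs: NSUBS drew 1 PSU, AACPSIR wants 2.
mos = [100, 80, 60, 40, 20]
pi_star = pps_probability(mos, n_h=1)          # NSUBS probabilities
pi = pps_probability(mos, n_h=2)               # desired AACPSIR probabilities
in_nsubs = [True, False, False, False, False]  # s* membership

print(keyfitz_conditional(pi, pi_star, in_nsubs))

# Equation (3): pi* + (1 - pi*) * conditional recovers the desired pi.
for p, ps in zip(pi, pi_star):
    assert abs(ps + (1.0 - ps) * (p - ps) / (1.0 - ps) - p) < 1e-12
```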


Once the PSU sample is selected, addresses belonging to each sampled PSU will be purchased from MSG to select a sample of the designated number of households. To determine the household sample size, we set the target margin of error for a 95 percent confidence interval to 0.04 for estimating a population proportion of 50 percent. This requires an effective sample size of 625. To determine the respondent sample size, this effective sample size must be multiplied by the design effect (DEFF). Since there is no previous similar survey from which we can estimate the DEFF, we assume that the intra-cluster correlation is 0.05 and that the weighting factor of the final weight (the loss of sampling efficiency due to variable weights) is 1.5. (The NSUBS design effect is not suitable for the current study because the NSUBS sample design is very different beyond the first stage of selecting PSUs.) Although we intend ultimately to select an equal probability sample of households, the final weight will be quite variable due to nonresponse adjustment. Applying the Kish formula3 to these assumed values, we estimate the DEFF to be 2.2. The required respondent sample size is then 1,400, the product of the effective sample size and the estimated DEFF, rounded up.
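
As a worked check of this arithmetic, a short sketch follows; the conventional z ≈ 2 approximation for 95 percent confidence reproduces the effective sample size of 625 stated above:

```python
# Sketch of the sample-size arithmetic described above, using values from
# the text; z ~ 2 is the conventional 95-percent-confidence approximation.
z, moe, p = 2.0, 0.04, 0.50
n_eff = (z / moe) ** 2 * p * (1 - p)   # effective sample size: 625.0

deff = 2.2                             # Kish DEFF estimated in the text
n_required = n_eff * deff              # 1,375, rounded up to 1,400 in the plan
print(n_eff, n_required)
```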


NHTSA estimates a 25 percent eligibility rate and a 25 percent screener completion rate based on data from the 2014 American Community Survey (ACS) on households with children, a review of recent household travel surveys, and data from NHTSA's Motor Vehicle Occupant Safety Survey (MVOSS). In case the completion rate is lower than expected for the base field sample of 28,000 contacts, NHTSA plans to select a reserve sample of 4,000, making the total field sample 32,000. NHTSA will track response rates weekly to determine if and when the reserve sample needs to be deployed to reach target completions. If the reserve is needed, NHTSA will contact a maximum of 32,000 households via an invitation letter to obtain the target of 1,400 completed interviews.


Table 3 shows expected sample sizes and response rate estimates by survey phase. One assumption is that not everyone who is eligible goes on to complete the entire interview, which means that the response rate calculation depends on the treatment of partially completed interviews. In the planned design using an initial sample of 28,000, the expected response rate among the estimated eligibles in the sample is 25% when including partial interviews and 20% when excluding them. Two scenarios under which we may need to use the reserve sample are a reduction in the rate of screener completion among the sample or a reduction in the rate of interview completion among eligible households. Table 3 includes an example of how one such scenario using the reserve sample would drive down the response rate, which is expected given the conditions under which we would use the reserve.


Table 3. Breakdown of Eligibility, Completion, and Response Rate Estimates

|  | Base Sample | Example Base with Reserve Sample |
|---|---|---|
| Total Contacts Approached | 28,000 | 32,000 |
| Estimated Rate of Household Eligibility | 25% | 25% |
| Estimated Rate of Screener Completion | 25% | 24% |
| Estimated Rate of Interview Completion among Eligible Households | 80% | 73% |
| Estimated Complete Screeners | 7,000 | 7,680 |
| Estimated Known Eligible Households | 1,750 | 1,920 |
| Estimated Complete Interviews | 1,400 | 1,400 |
| Estimated Partial Interviews | 350 | 520 |
| Estimated Eligible among Unknowns (No Screener) | 5,250 | 6,000 |
| AAPOR Response Rate w/ Partials | 25% | 24% |
| AAPOR Response Rate w/o Partials | 20% | 18% |
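
The rates in Table 3 can be reproduced with a short computation. The sketch below works through the base-sample column; the eligibles among unknowns apply the 25 percent eligibility rate to contacts with no completed screener:

```python
# Reproducing the base-sample response rates in Table 3 (AAPOR-style:
# completes [plus partials] over all known and estimated eligibles).
contacts = 28_000
screener_rate = 0.25          # estimated rate of screener completion
eligibility_rate = 0.25       # estimated rate of household eligibility

screeners = contacts * screener_rate                 # 7,000 complete screeners
known_eligible = screeners * eligibility_rate        # 1,750 eligible households
completes, partials = 1_400, 350
est_eligible_unknown = (contacts - screeners) * eligibility_rate  # 5,250

denom = known_eligible + est_eligible_unknown        # 7,000 estimated eligibles
print((completes + partials) / denom)                # 0.25 (with partials)
print(completes / denom)                             # 0.20 (without partials)
```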


The ideal goal is to select an equal probability sample of households using the PSU selection probability and the actual number of households in each sampled PSU as much as possible. If the MOS used to select PSUs were a perfect MOS in terms of the number of eligible households, the sample design would be self-weighting: the selection probabilities and weights would all be equal if a fixed number of households were selected from each sampled PSU. However, this is not the case for AACPSIR, and the MOS (based on the number of children age 0-7) will not be perfectly correlated with the number of households with children of age 0-9 (the true MOS). Therefore, the actual selection probabilities of households will not be equal. To mitigate this problem, we will use the ACS data to estimate the number of households with children of age 0-9 so that we can adjust the number of households to be selected from each sampled PSU. The adjustment factor for PSU $i$ in stratum $h$ will be calculated by the following formula:

$$f_{hi} = \frac{\hat{M}_{hi}}{c\,M_{hi}},$$

where $M_{hi}$ and $\hat{M}_{hi}$ are, respectively, the proxy size based on the number of children of age 0-7 and the estimated size based on the ACS data for PSU $i$ in stratum $h$, and $c$ is a constant scale factor that adjusts for the scale difference between the two size measures ($M_{hi}$ and $\hat{M}_{hi}$). The adjustment factor is greater than 1 if the proxy size is smaller than it is supposed to be (i.e., $c\,M_{hi} < \hat{M}_{hi}$), and vice versa otherwise. This procedure will select a nearly equal probability sample of households with the adjusted sample size from each sampled PSU. Other than the geographic variables (address, longitude and latitude, etc.), the variables in the MSG database may not be reliable, as other research has shown.4
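
A hypothetical illustration of this adjustment follows; the per-PSU sizes, and the choice of $c$ as the overall ratio of the two size totals, are assumptions made for the sketch, not values from the plan:

```python
# Hypothetical sketch of the within-PSU sample-size adjustment,
# f_hi = Mhat_hi / (c * M_hi); here c is taken to be the overall ratio
# of the ACS-based totals to the proxy totals (an assumption).
def adjustment_factors(proxy_mos, acs_mos):
    c = sum(acs_mos) / sum(proxy_mos)            # constant scale factor
    return [a / (c * m) for m, a in zip(proxy_mos, acs_mos)]

proxy = [1200, 800, 500]     # children age 0-7 per sampled PSU (hypothetical)
acs_est = [1500, 850, 700]   # ACS-estimated households with children age 0-9

factors = adjustment_factors(proxy, acs_est)
nominal_n = 470              # nominal households drawn per PSU (hypothetical)
print([round(nominal_n * f) for f in factors])   # adjusted per-PSU samples
```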


B.2. Describe the procedures for the collection of information.


a. Procedures for Collection of Information


The contractor, Westat, will select a sample of households as described in the previous section. Each household will be mailed an initial letter (Appendix A) requesting participation in the survey. Only one individual will respond for each selected household. The letter will include the survey website URL and a unique PIN for each household; the unique PIN ensures that only one respondent per household completes the survey. The letter will also reference a help desk number available to respondents without Internet access, with language barriers, or who experience technical problems. A Spanish tagline at the bottom of the letter will direct respondents to a translation service.


The Contractor will develop a Web management system that will include a public survey website, online survey instrument, and survey management system. The public project website will contain information about the project and survey, a frequently asked questions (FAQ) section, and contact information. The FAQ section will focus on the importance of the survey as well as address typical questions. Participants will be able to use the contact information page to reach out to the Contractor’s survey staff. Links to this page will be present on all survey pages so that participants can use it to submit questions regarding operational difficulties and ask for help. Upon submittal, an automatic response will be generated and emailed to the participant with an expected timeline for a response. The “Contact Us” page will also include a toll-free help desk number for participants who prefer to contact the survey team over the phone. If no interviewers are available to respond to a participant call, the system will present a voice mail greeting, which will include the days and hours during which calls are returned. Participants will then be able to leave a message.


The web-based survey will begin with an introductory page that will explain how much time the survey is expected to take the respondent. After a period of inactivity a message pop-up will appear on the respondent’s browser offering them the possibility of calling the help desk to obtain additional support. Following the introductory page, the online instrument will present the eligibility screener (Form 1333) containing household-level questions that will assist in determining eligibility for the survey (i.e., adults traveling with a child age 0-9 in a personal vehicle on a regular basis). If a household has at least one eligible person, they will proceed to complete the full survey (Form 1334) that includes more specific person-level questions related to experiences using CRS; experiences searching for CRS information sources; awareness, use, and experiences with CRS inspection stations; and basic demographic information including gender, race, ethnicity, and income.


While most of the web-based survey will consist of closed-ended questions, there will be a limited number of open-ended questions to allow respondents to provide information in their own words and to maximize the depth of understanding that may be gleaned from the responses. The survey will include skips to ensure that questions are asked only of those to whom they apply (e.g., only those with a booster-seat-aged child will be asked how they currently install a booster seat). In addition, other data quality checks will be programmed to ensure responses are consistent and rational.


The public website will also provide supporting materials and serve as the access point for the self-administered web-based data collection instrument. Access to the online instrument will be controlled using an alphanumeric PIN and restricted to encrypted connections via Transport Layer Security (TLS) certificates. To protect the online instrument from break-in attempts, the public site will automatically lock down access after too many unsuccessful login attempts within a short period of time. Similarly, once a case is completed, the survey will no longer be accessible to participants using their PIN, preventing participants from re-opening it and changing responses. These two measures protect participant responses from being compromised. The Contractor ensures quality and secure systems development through the use of industry best practices to track work items; to identify, document, and address issues; and to manage code revisions. In addition, all code changes are subject to code review, so that each edit is reviewed by at least two team members. Code reviews combined with unit testing prevent software regression bugs from creeping into the Web application and ensure that best practices are used to reduce the likelihood of security vulnerabilities. Finally, all Web applications must pass an in-depth automated security vulnerability scan before they are released to production.


Each page of the Web survey will include an option to request assistance via email. In addition, a toll-free telephone number will be available for respondents who have difficulty or are unable to complete the survey online because of technical or language issues. Help desk staff will complete the survey via telephone with the respondents. Within the Web Survey System there will be both the online survey instrument, which respondents will see when they enter their PIN, and the computer-assisted telephone interviewing (CATI) version of the survey that will be available to the help desk staff. The CATI version will closely mimic the Web version of the instrument but make it easier for a help desk staff member to administer. CATI interviews will be handled in an integrated way so help desk staff can pick up at the point a Web respondent stopped completing the survey.


The Contractor will send one reminder postcard (Appendix B) to non-response households two weeks after the initial mailing of the invitation letter. Non-response households are identified from the unique household PINs that have not been used to complete the survey. The reminder postcard will include the survey logo, the survey website URL, and the unique household PIN, and will remind households of the importance of their participation in the survey.

b. Precision of sample estimates


The target number of completed surveys is 1,400, of which about one half (700) would be expected to come from the 30 NSUBS PSUs if the sample were allocated proportionately. We assumed a design effect (DEFF) of 2.2; with this DEFF, the effective sample size of the group of 700 respondents is 318, which yields a margin of error of 5.6 percentage points for a 95 percent confidence interval for an estimate of a population proportion of 50 percent. If we perform subgroup analysis of a 2 by 2 table with equal cell sizes, the cell (effective) sample size will be one fourth of the total effective sample size of 318, and the margin of error for each cell is then 11.2 percentage points at 95 percent confidence. However, to obtain better precision for the linked analysis, the total sample size (28,000) in the base field sampling plan will be allocated disproportionately: 17,500 to the NSUBS portion and 10,500 to the non-NSUBS portion. With the overall participation rate of 5 percent (i.e., 1,400 completes / 28,000 total contacts), this yields 875 respondents from the NSUBS PSUs and a margin of error of 5.0 percentage points for the linked analysis with a DEFF of 2.2. The overall margin of error for the entire sample of 1,400 respondents is expected to be slightly less than 4.0 percentage points, even though the DEFF will increase slightly due to the unequal allocation.
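
For reference, the quoted margins of error follow from the standard formula, shown here as a worked check (using the z ≈ 2 convention, which reproduces the figures above):

$$\mathrm{MOE} \approx z\sqrt{\frac{p(1-p)}{n/\mathrm{DEFF}}}, \qquad z \approx 2, \; p = 0.5$$

$$n = 700:\quad 2\sqrt{\frac{0.25}{700/2.2}} = 2\sqrt{\frac{0.25}{318}} \approx 0.056 \quad (5.6\ \text{points})$$

$$n = 875:\quad 2\sqrt{\frac{0.25}{875/2.2}} = 2\sqrt{\frac{0.25}{398}} \approx 0.050 \quad (5.0\ \text{points})$$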


c. Sample weighting

Based upon the probability of selection determined in Section B.1.b, we expect the household sample will be nearly self-weighting (i.e., equal weights). Nevertheless, the base weight will be calculated by taking the inverse of the overall selection probability of the sampled household. The base weight is then adjusted for screener nonresponse, and the adjusted weight will be further adjusted for survey nonresponse if needed.


In the case of non-response bias, we will use as many auxiliary variables as possible to predict the response probabilities at the screener and at the survey separately. The propensity score method is commonly used for this purpose, and we will use it for AACPSIR. A logistic regression with response status (at the screener or survey) as the dependent variable will be fit to obtain the predicted response probability (propensity). To avoid the excessive variance inflation that comes from directly using the predicted propensity for nonresponse weight adjustment, we will form weighting classes based on the predicted propensity and perform the nonresponse weight adjustment by class.
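
A minimal sketch of this weighting-class adjustment is shown below, assuming a pandas DataFrame with hypothetical column names (auxiliary covariates, a 0/1 `responded` flag, and a `base_weight` column); it is not the contractor's actual weighting code:

```python
# Sketch of propensity-class nonresponse adjustment: model response status,
# group cases into classes by predicted propensity, and transfer the base
# weight of nonrespondents to respondents within each class.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def nonresponse_adjust(frame, covariates, n_classes=5):
    X, y = frame[covariates], frame["responded"]
    propensity = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    # Classes based on predicted propensity, rather than weighting by
    # 1/propensity directly, limit the variance inflation of the weights.
    frame = frame.assign(
        p_class=pd.qcut(propensity, n_classes, labels=False, duplicates="drop"))

    def adjust(group):
        resp = group["responded"] == 1
        factor = group["base_weight"].sum() / group.loc[resp, "base_weight"].sum()
        return group.assign(adj_weight=group["base_weight"] * factor * resp)

    return frame.groupby("p_class", group_keys=False).apply(adjust)
```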


The final step of the weighting process will be to apply calibration weighting (benchmarking) to the nonresponse-adjusted weight using ACS population data as control totals. We will use the raking procedure for calibration, with demographic variables of eligible children (e.g., age group, gender, and race/ethnicity) and a geographic variable (Census region) defining the raking dimensions. By including HI and AK in the control totals, we will address through calibration weighting the under-coverage caused by their exclusion from the sampling frame.
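
For illustration, a bare-bones raking (iterative proportional fitting) loop is sketched below, with hypothetical dimension names and control totals; in practice this step would use established survey-weighting software:

```python
# Minimal raking sketch: cycle over the raking dimensions, scaling weights
# within each level so weighted totals match the ACS control totals.
import pandas as pd

def rake(df, weights, dimensions, controls, max_iter=50, tol=1e-6):
    w = df[weights].astype(float).copy()
    for _ in range(max_iter):
        max_shift = 0.0
        for dim in dimensions:
            totals = w.groupby(df[dim]).sum()        # current weighted totals
            for level, target in controls[dim].items():
                factor = target / totals[level]
                w[df[dim] == level] *= factor
                max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:                          # all margins matched
            break
    return w

# Hypothetical ACS control totals, e.g.:
# controls = {"age_group": {"0-3": 1.6e7, "4-9": 2.4e7},
#             "region": {1: 0.9e7, 2: 1.1e7, 3: 1.3e7, 4: 0.7e7}}
```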


d. Non-response bias analysis


If a non-response bias analysis is needed, auxiliary variables from several different sources will be used to determine if there are differences between respondents and non-respondents. This will be conducted at two levels of non-response: screener non-response and extended survey non-response. Eligible households will be identified through screening, and an extended interview will be conducted with those who are identified as eligible by the screener. Therefore, there is potential for non-response at both the screener and extended survey phases.

The screener and extended survey non-response bias analyses will use almost all of the same auxiliary data sources with one exception. Both will use individual household data, National Survey of the Use of Booster Seats (NSUBS) primary sampling unit (PSU) data, and data from the American Community Survey (ACS) and/or the Current Population Survey (CPS). The only difference is that the extended survey non-response analysis will also use data collected through the screener to compare extended interview respondents and non-respondents. The individual household level data will include the distance between the nearest inspection station and the household, which would be a good auxiliary variable because it will be correlated with survey variables. In addition, individual household level data will include urban/rural status of each household in the screener sample.

NSUBS data are available only for the 30 NSUBS PSUs that will be part of the AACPSIR PSU sample. Nevertheless, the NSUBS data are expected to be good auxiliary data sources because the data are expected to be correlated with the response behavior of sampled households and AACPSIR survey variables. The result of a non-response bias analysis using the NSUBS data could be extended to the entire population because the NSUBS portion of the AACPSIR sample is a random subsample of the entire AACPSIR sample. The ACS and/or CPS data sources would be used as needed. While these data sources have some overlap in available types of data, a combination would likely be used because the ACS provides a greater variety of variables with smaller coverage and the CPS brings greater reach with fewer variables.


If the respondents and non-respondents are similar with respect to the auxiliary variables, the nonresponse bias may not be serious. Significant differences would indicate potential non-response bias, but potential bias can be removed through non-response weight adjustment. It is highly likely, however, that a large percentage of ineligible households will not respond to the screener. If we do not obtain an adequate amount of information about ineligible households due to poor participation of ineligible households, then the screener non-response weight adjustment may not be able to remove all the potential bias caused by screener non-responses.


We do not expect a similar problem for the non-response weight adjustment for extended interview respondents among screener respondents, because there is no issue of ascertaining eligibility and the extended interview non-response rate will be much lower than the screener non-response rate. In addition, we will have more auxiliary information for the non-response weight adjustment. Even so, the final estimates will not be unbiased unless the screener non-response bias is first effectively removed.


Table 4. Auxiliary variables and their data sources for nonresponse bias analysis

| Data Source | Auxiliary Variables |
|---|---|
| NHTSA | Distance between the nearest inspection station and household |
| GPS | Urban/rural status |
| NSUBS | Driver characteristics: belt use rate, percent within age categories, percent female, percent within race/ethnicity categories |
| NSUBS | Child characteristics: belt use rate, percent within age categories, percent female, percent within race/ethnicity categories |
| ACS | Median income, percent annual income categories, median years of education, percent college graduates, median rent, percent renters, median home value, percent white, percent black, percent Asian, percent Hispanic |
| CPS | Age, race, ethnicity, educational attainment, family and marital status |


B.3. Describe methods to maximize response rates and to deal with issues of non-response.


NHTSA is taking a number of steps to boost the AACPSIR response rate. Foremost will be NHTSA’s use of the multi-mode approach, where different options for responding are presented to prospective respondents (Web and telephone). This offers greater opportunity for people to use a response mode that they prefer and with which they are comfortable, which should enhance participation.


Branded materials and a survey logo will be used on the invitation letter and the survey website to help respondents recognize the survey and to attract potential participants. The invitation to participate in the survey will include wording in Spanish so that those who are entirely or predominantly Spanish speaking are not excluded from the survey. The project website will describe the study and provide information about the project to the public, media, and survey participants. This website will add a sense of legitimacy and authenticity to the survey by providing links to NHTSA. The public website will also provide supporting materials and serve as the access point for the self-administered Web-based data collection instrument.


In adapting the questionnaires to multi-mode administration, the project team will lay out the questions according to the heuristics people follow when interpreting visual cues. At various times during materials development, the content and look will undergo expert review by the contractor's Instrument Design, Evaluation, and Analysis (IDEA) Services, an in-house team of expert survey methodologists who provide questionnaire design and evaluation services. The IDEA group staff will work closely with the Westat subject matter experts and NHTSA to ensure that the questionnaire and respondent materials are the most effective tools to address the research questions and measurement goals, improve data quality, reduce measurement error, and minimize burden for respondents.


The survey will also include a number of assistance devices so that respondents do not become frustrated and terminate their participation before submitting a completed questionnaire. For the Web response mode, this includes easy navigation from page to page and the capability for respondents to pause, leave the system, and re-enter at the departure point without losing any previously entered information. For all response modes, respondents will be given clear methods by which they can contact the contractor if they have questions about the survey. As described in the next section, the survey has undergone cognitive and usability testing to help ensure the information is presented appropriately and promotes clear understanding.


Additionally, to help maximize response rates, the contractor will send one reminder postcard to non-response households two weeks after the initial mailing of the invitation letter, as described in Section B.2. The postcard will include the survey logo, the survey website URL, and the unique household PIN, and will remind households of the importance of their participation in the survey.


Finally, remuneration will be paid to study participants. The invitation letter for the online survey will include $1, and participants will receive an additional $5 for completing the survey. This information is presented clearly in the invitation letter and reminder postcard.


B.4. Describe any tests of procedures or methods to be undertaken.


As part of the study design, the Contractor conducted five cognitive interviews and four usability testing interviews to test and refine questions for the NHTSA survey. The cognitive testing was conducted in two rounds. At the start of each cognitive interview, participants were presented with a consent form and informed of their rights as research participants. They were then asked to complete a paper version of the survey while the interviewers probed concurrently. The interview protocol focused on questionnaire content and comprehension, including item wording, order, and terminology.


The usability testing was conducted with four participants after the survey was programmed. The interview process was similar to that of the cognitive interviews, except that respondents used the Web platform and provided input on the content, navigability, and functionality of the survey. The draft protocols were provided after the NHTSA project manager reviewed the survey content.


All interviews were conducted in person at the Westat cognitive testing laboratory. Each interview lasted 60 minutes and was audio-recorded with the participant's permission. The Contractor recruited interview participants who matched the survey's target population: caregivers (parents, grandparents, guardians, and child care providers) who transport child passengers aged 0 to 9 years in a personal vehicle on a regular basis (at least 4-6 times per month). Participants were between the ages of 18 and 75 and were selected to represent a range of demographic characteristics in gender, race/ethnicity, and education. Some of the cognitive interviews were conducted with caregivers having a household income of $30,000 or less.


B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


Adele Polson, Senior Study Director

Westat

1600 Research Blvd.

Rockville, MD 20850

(301) 610-4898


Hyunshik Lee, Senior Statistician

Westat

1600 Research Blvd.

Rockville, MD 20850

(301) 610-5112


Alan Block, MA

Office of Behavioral Safety Research

National Highway Traffic Safety Administration

1600 New Jersey Ave, SE

Washington, DC 20590

(202) 366-6401


Mary T. Byrd, MA

Office of Behavioral Safety Research

National Highway Traffic Safety Administration

1600 New Jersey Ave, SE

Washington, DC 20590

(202) 366-5595

1 All states, including DC, have a child restraint use law; some states implement a stricter law than others. For stratification, we grouped states into two groups according to their 2014 state safety seat laws: one group with a law covering ages 0 to 7 or higher, and another group with a law whose upper age limit is less than 7. Under this grouping, 30 states and DC fell into the former group.


2 Keyfitz, N. (1951). Sampling with probabilities proportional to size: adjustment for changes in the probabilities. Journal of the American Statistical Association, 46, 105-109.

3 The Kish formula is given by $\mathrm{DEFF} = (1 + cv^2)\,[1 + (\bar{b} - 1)\rho]$, where $cv^2$ is the squared coefficient of variation of the final weights, $\bar{b}$ is the average cluster size, and $\rho$ is the intra-cluster correlation. We assume that the weighting factor ($1 + cv^2$) is 1.5.

4 Roth, S.B., Han, D., and Montaquila, J.M. (2013). The ABS frame: quality and considerations. Survey Practice, Vol. 6, No. 4.


