Supporting Statement Part B - Final - 04302020


National Household Travel Survey (NHTS)

OMB: 2125-0545


Part B
Statistical Methods

While many of the design elements of the 2020 National Household Travel Survey (NHTS) are fundamentally similar to those of prior administrations, the upcoming survey will use the KnowledgePanel® (KP) as its sampling frame. Because the KP is fully grounded in probability-based recruitment, using it will reduce respondent burden, improve response rates, and improve the precision of survey estimates. An additional benefit of the KP is that it simplifies many of the cumbersome procedures of earlier NHTS administrations, thereby reducing the complexity of the effort.

  1. Sampling Methodology

KP is the largest online panel serving the United States that is constructed by pre-recruiting an invited probability-based sample of households. Panel recruitment relies on an Address-Based Sampling (ABS) methodology from the latest Delivery Sequence File (DSF) of the United States Postal Service (USPS). For this purpose, Ipsos works with the nation’s leading sample vendor, Marketing Systems Group (MSG), to enhance the DSF prior to sample selection. MSG appends a long list of ancillary data from the Census Bureau as well as commercial databases. Such added data elements can improve the efficiency of sampling procedures for the 2020 NHTS and enhance subsequent refinements that will be needed for weighting and nonresponse adjustments.

Importantly, households that do not have an internet connection at the time of KP recruitment are provided internet access and a tablet. This expands population coverage to include non-internet households while allowing a uniform mode of data collection, with all panelists completing surveys online. Lastly, the ABS recruitment is supplemented with a dual-frame Random Digit Dial methodology that targets Hispanic households. As such, samples from KP cover all types of households, including those that are hard to reach or less acculturated.

1.1 General KP Sampling Methodology

For selection of general population samples from KP, a patented methodology has been developed to address minor undercoverage challenges that result from differential recruitment and attrition rates. Briefly, this methodology starts by weighting the entire KP to a comprehensive set of population benchmarks obtained from the Current Population Survey (CPS) to ameliorate any minor misalignments that may exist in the panel at the time of sample selection. For the NHTS, the geodemographic dimensions used for weighting the entire KP will include:

  • Gender (Male and Female)

  • Age (18–29, 30–44, 45–59, and 60+)

  • Race/Ethnicity (Hispanic and non-Hispanic White, African American, Asian, and Other)

  • Education (Less than High School, High School, Some College, Bachelor’s and beyond)

  • Census Region (Northeast, Midwest, South, West)

  • Household Income ($0–$10K, $10K–$25K, $25K–$50K, $50K–$75K, $75K–$100K, $100K+)

  • Home ownership status (Own and Rent/Other)

  • Metropolitan Area (Yes and No)

  • Household size (1, 2, and 3 plus)

  • Vehicle ownership (0, 1, and 2 plus)

  • Employment status (Employed, Not employed)

Using the above weights as household measures of size (MoS), a Probability Proportional to Size (PPS) selection procedure is used to select study-specific samples. It is the application of this PPS methodology, with the above MoS values, that produces KP samples that will be self-weighting, or Equal Probability of Selection Method (EPSEM). Moreover, in instances where the study design requires any oversampling of specific subgroups, such departures from an EPSEM design are corrected by adjusting the corresponding design weights.
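
As a simplified illustration of this selection step, the sketch below draws a systematic PPS sample using panel weights as measures of size. The routine, weight distribution, and sample sizes are illustrative assumptions, not the patented Ipsos procedure.

```python
import numpy as np

def pps_systematic_sample(mos, n):
    """Select n units with probability proportional to size (MoS)
    using systematic PPS on a randomly ordered frame."""
    rng = np.random.default_rng()
    order = rng.permutation(len(mos))           # randomize frame order
    sizes = np.asarray(mos, dtype=float)[order]
    cum = np.cumsum(sizes)
    step = cum[-1] / n                          # sampling interval
    start = rng.uniform(0, step)                # random start
    points = start + step * np.arange(n)        # selection points
    idx = np.searchsorted(cum, points)          # units hit by each point
    return order[idx]

# Illustrative example: household weights from the panel-wide raking serve
# as MoS, so the selected sample is approximately self-weighting (EPSEM).
panel_weights = np.random.lognormal(mean=0.0, sigma=0.4, size=100_000)
sample_indices = pps_systematic_sample(panel_weights, n=29_000)
```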

Less Than Annual Periodic Data Cycles

The NHTS has been conducted periodically since 1969. Starting in 2020, the NHTS will move to a biennial data collection approach for this and future iterations. Fielding a survey that nets about 7,500 households every two years is sufficient to meet the needs of the Department of Transportation (DOT).

1.2 NHTS 2020 Sampling Methodology

Unlike previous administrations, the 2020 NHTS will collect the recruitment and diary portions of the study (the latter previously called the retrieval survey) at the same time in a one-stage survey. The primary respondent will be asked to complete the recruitment survey before being directed to the travel diary section. Combining the surveys into one stage is expected to increase the overall study completion rates by eliminating the additional nonresponse that inevitably occurs in a two-stage process. As summarized in the following table (Table 1), Ipsos’ experience with similar past KP surveys suggests that a total of 29,000 households will need to be sampled to ensure that no fewer than 7,500 surveys are completed for the 2020 NHTS. Table 2 details the universe, expected sample size, and expected number of respondents by Census division.

Table 1. Universe size and sampling assumptions

Description | Households
Total U.S. | 128,617,574
Starting sample from KP | 29,000
Completed NHTS 2020 surveys | 7,500

Table 2. U.S. household counts and expected sample size and completed surveys by division

Census Division | Households | % of Total U.S. | KP Sample | Respondents
East North Central | 18,476,802 | 14.4% | 4,167 | 1,078
East South Central | 7,483,298 | 5.8% | 1,687 | 436
Middle Atlantic | 16,458,111 | 12.8% | 3,711 | 960
Mountain | 9,566,279 | 7.4% | 2,158 | 558
New England | 5,978,925 | 4.6% | 1,349 | 349
Pacific | 21,074,099 | 16.4% | 4,752 | 1,229
South Atlantic | 25,998,673 | 20.2% | 5,862 | 1,516
West North Central | 8,280,487 | 6.4% | 1,866 | 483
West South Central | 15,300,901 | 11.9% | 3,450 | 892
Total | 128,617,575 | 100.0% | 29,002 | 7,500

For the 2020 NHTS, a customized version of the sampling methodology described above will be employed to accommodate the analytical and reporting needs of this survey. Each month, a nationally representative sample of households will be selected from KP that will exclude those that have responded in prior months. Given the goal of even coverage for all days of the year, each month’s sample will then be randomly partitioned into weekly samples. Subsequently, weekly replicates will be randomly partitioned into seven sub-replicates to be released daily. Recognizing the ongoing attrition of KP, the weekly replicates and daily sub-replicates will be selected with progressively larger allocation rates in anticipation of those members who will become inactive (stop taking surveys) as time goes on.
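
The following is a minimal sketch of how a monthly sample could be partitioned into weekly replicates and daily sub-replicates; the attrition inflation factor and sample sizes are assumed for illustration.

```python
import numpy as np

def partition_monthly_sample(household_ids, weeks=4, days_per_week=7,
                             attrition_inflation=1.05):
    """Randomly split a monthly sample into weekly replicates and daily
    sub-replicates, inflating later releases to offset panel attrition."""
    rng = np.random.default_rng()
    ids = rng.permutation(household_ids)
    # progressively larger weekly allocations (illustrative inflation factor)
    raw = np.array([attrition_inflation ** w for w in range(weeks)])
    weekly_sizes = np.round(raw / raw.sum() * len(ids)).astype(int)
    weekly_sizes[-1] = len(ids) - weekly_sizes[:-1].sum()   # absorb rounding
    weekly = np.split(ids, np.cumsum(weekly_sizes)[:-1])
    # split each weekly replicate into seven daily sub-replicates
    daily = [np.array_split(week, days_per_week) for week in weekly]
    return daily   # daily[w][d] -> household IDs released on that day

monthly_sample = np.arange(2_400)        # illustrative monthly sample size
releases = partition_monthly_sample(monthly_sample)
```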

While some fluctuations will be inevitable due to seasonality and other varying yield rates, the application of the above regimented sample allocation is expected to secure about 20.5 completed surveys each day (7,500 ÷ 365). As such, the sample sizes will be sufficient to roll up the data to various levels of temporal aggregation, such as weeks, months, quarters, or other time periods of interest. The proportional sample allocation scheme will require no stratification and will provide the most efficient estimates (smallest margins of error) at the national level.

As required for the 2020 NHTS, this design will provide the needed sample sizes to secure Valid Statistical Representation (VSR) for all DOT reporting subdomains. That is, the resulting survey estimates will carry error margins no larger than ±5% with at least a 95% level of confidence. Because of the continuous data collection approach, the 2020 NHTS will provide complete coverage of all 365 days with a roughly equal number of completed surveys per day.

Once properly weighted, data collected across the entire year will allow generation of unbiased estimates for an annual average day, weekday, and weekend. For instance, we anticipate having about 2,132 (52 × 20.5 × 2) completed surveys for weekend days. The numbers of completed surveys available for 5-day and 7-day reporting periods will be substantially larger than what VSR guidelines require to achieve the stated precision goals. Moreover, the final national weighted data will allow users to characterize and report personal travel behavior for other key subdomains, including:

  • National, i.e., the entire nation, including all the 50 States and Washington, DC combined

  • National travel behavior by:

    • Modes of travel and commute to work

    • Trip characteristics: purpose, starting time, ending time, distance, and duration

    • Household characteristics: size, vehicle ownership, income, urbanicity

    • Traveler characteristics: gender, age, race-ethnicity, education, and employment

  • Public Mode:

    • TNC usage (Uber, Lyft, etc.)

    • Taxi usage

    • Transit

  • Online shopping and home delivery frequency

1.3 Precision of Survey Estimates

Precision of the survey estimates is a direct function of sample size, universe size, point estimate of interest, and required level of confidence. For example, when estimating population proportions, the needed sample size can be calculated by:

$$ n = \frac{z^{2}\, p\,(1-p)}{e^{2} + \dfrac{z^{2}\, p\,(1-p)}{N}} $$

In the above formulation, N is the universe size (often assumed to be infinitely large), p is the point estimate of interest, e is the error bound, and z is the corresponding percentile of the standard normal distribution. For illustration purposes, the following chart (Figure 1) shows the resulting margins of error at the 95% level of confidence as a function of sample size for varying point estimates, assuming an infinitely large target universe.

Figure 1: Margin of Error and Effective Sample Size
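
As a worked check of the sample-size formula above under the chart’s worst-case point estimate (p = 50%), an infinitely large universe, a ±5% error bound, and 95% confidence (z = 1.96):

$$ n = \frac{(1.96)^{2}(0.5)(0.5)}{(0.05)^{2}} \approx 384 $$

which is consistent with the roughly 400 independent observations discussed below.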

As seen in the chart, for the worst-case scenario when p = 50%, about 400 independent observations are required to keep the margin of error below ±5% and meet the VSR requirement. As detailed later, survey data must be weighted before reliable estimates can be produced. However, this bias reduction increases sampling variances because it reduces the effective sample size (the number of independent observations). This inflation due to weighting is commonly referred to as the Design Effect and is approximated by the following formula, in which $w_i$ represents the final weight of the $i$th respondent and $n$ is the number of respondents:

$$ \mathrm{DEFF} = \frac{n \sum_{i=1}^{n} w_i^{2}}{\left(\sum_{i=1}^{n} w_i\right)^{2}} $$
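
A minimal sketch of this calculation, using illustrative weights only, is:

```python
import numpy as np

def kish_design_effect(weights):
    """Kish approximation of the design effect due to unequal weights."""
    w = np.asarray(weights, dtype=float)
    n = w.size
    return n * np.sum(w ** 2) / np.sum(w) ** 2

# Illustrative final weights for 500 respondents
weights = np.random.lognormal(mean=0.0, sigma=0.35, size=500)
deff = kish_design_effect(weights)
effective_n = len(weights) / deff   # e.g., about 400 when deff is about 1.25
```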

For most surveys conducted using KP as the sampling frame, weighting adjustments typically inflate variances by no more than 25% (a design effect of about 1.25). This means that for key analytical subdomains, the 2020 NHTS will require a sample size of about 500 to produce an effective sample size of about 400 (500 / 1.25). It should be noted that even for the smallest subdomain, weekend travelers, over 2,000 observations will be obtained. Table 2 provides a summary of the universe size and expected number of completed surveys for each Census Division. Moreover, Table 3 provides the expected precision of survey estimates for selected domains of interest to the primary users of the data. Note that the expected domain sizes are based on prior surveys, and the corresponding standard errors are adjusted to reflect the reduction in the total sample size from 26,000 to 7,500 households.

Table 3. Expected precision of survey estimates by domain

Domain Description | Expected Value | Standard Error | Relative Standard Error (%)
Average number of daily trips made by a household | 9.5 Trips | 0.085 | ±0.89
Average annual vehicle miles of travel per household | 19,850 Miles | 460.050 | ±2.32
Average number of daily household shopping trips | 3.7 Trips | 0.051 | ±1.38
Average vehicle miles of travel per day for females aged 16–20 | 31 Miles | 2.429 | ±7.83
Number of people who do not drive a vehicle | 30.31 Million | 858,631 | ±2.83
Average distance to work in miles | 5.5 Miles | 0.102 | ±1.85
Average vehicle occupancy for trips of 11–15 miles | 1.67 People | 0.017 | ±1.02
Average time spent driving to a shop in an urban area | 14.8 Minutes | 0.527 | ±3.56
Average time spent driving on any trip (nationwide) | 19.91 Minutes | 0.238 | ±1.19
Average trip duration for motorcycle riders | 25 Minutes | 2.633 | ±10.53
Average daily vehicle miles driven for people not born in the United States | 30.9 Miles | 2.751 | ±8.90



  2. Data Collection Procedures

2.1 Survey Communication Protocol

The first communication (Appendix 5) with the KP sample households will be a pre-notification email sent to panelists three days in advance of their assignment to the survey, alerting them to the impending invitation they will be receiving about the travel survey. Subsequently, an email invitation (Appendix 6) will alert them to the assigned survey awaiting their completion.

The primary household respondent will complete a short roster to provide key household information (e.g., enumeration of household members and vehicles). Once the household roster is complete, household members will be asked to complete a personal travel diary. The travel diary will collect all travel information about the previous travel day from every household member 5 years of age and older. The primary respondent will be asked to first complete his or her diary for the prior day of travel and will then serve as a proxy responder for all children 5 to 15 years old in the household. Upon completing their diary and any proxy reporting, they will be prompted to request that all other household members 16 and older complete their own personal travel diaries.

Reminders to complete the survey will be sent via email to KP panelists who have not yet responded and to partial completes (Appendix 7). A partial complete is any household that has finished only a portion of the recruitment survey, only a portion of a travel diary, and/or for which some household members’ travel diaries remain incomplete. Non-respondents and households with partial completes will be sent two reminder emails to prompt response. In a typical KP study, completion rates of 55 to 60 percent are secured with just one reminder; given the added complexity of securing completes from all household members 5 and older, a second reminder will be sent for this survey. At the second reminder email (Appendix 8), the primary respondent will be asked to serve as a proxy respondent for any household diaries that remain incomplete. Additional reminder emails may be sent to households that have partially completed the study and are close to becoming a complete, to nudge them over the finish line.

Using the primary household member as a proxy reporter is a continuation of previous NHTS study protocols. The circumstances for allowing proxy reports have varied based on the primary mode of data collection (e.g., face to face, telephone, or web). On recent NHTS studies, primary respondents have generally been tasked with providing proxy reports on person-level details for all household members 15 years of age or younger. Roughly 60% of all respondents completed the 2017 NHTS Retrieval Survey via web. For the web version of the 2017 NHTS Retrieval Survey, all household members 16 years of age or older were encouraged to provide direct reports on person-level details; proxy reports were permitted if these household members were unavailable or unable to respond. While respondents were able to indicate who was reporting the person-level details (e.g., an individual or proxy report), the 2017 NHTS User Guide concedes that the true level of proxy reporting cannot be known. The 2020 NHTS largely adopts the proxy reporting approach used in the 2017 NHTS to ensure continuity.

All KP surveys include contact information for a Helpdesk that respondents can call or email with any questions or concerns about the survey. The Helpdesk staff will be trained on the NHTS and provided with a list of frequently asked questions to help them respond to panelists who call or email with questions.

2.2 Estimation Methodology

The data collected under the above protocol, once appropriately weighted, will accommodate the various reporting needs of the DOT and other data users. Analogous to prior administrations, several weight components will be provided to allow users to produce different estimates of interest. That is, Household- and Person-level weights will be produced as well as other weights for reporting at the vehicle and trip levels and annualized long-distance travel estimates.

Virtually all survey data must be weighted before they can be used to produce unbiased estimates of population parameters. While reflecting the selection probabilities of sampled units, weighting attempts to compensate for practical limitations of sample surveys, such as differential nonresponse and population undercoverage, by taking advantage of auxiliary information available about the target population. The 2020 NHTS weighting methodology will improve the external validity of survey estimates by improving the representation of survey respondents. The weighting process for this survey, which starts by computing a set of household-level weights, will entail several steps as outlined next.

  • In the first step, household-level design weights will be computed to reflect the selection probabilities for those assigned to this survey.

  • In the second step, design weights will be adjusted for nonresponse within homogeneous clusters of households or individuals where specific nonresponse patterns have been observed. This adjustment process will be guided by findings from a nonresponse follow-up investigation, detailed in the next section, that will be conducted to assess nonresponse bias. Moreover, profile data items that are available for all KP members will be used as appropriate.

  • In the third step, nonresponse-adjusted weights will be ratio-adjusted to a comprehensive set of geodemographic benchmark distributions secured from the latest CPS. For these adjustments, the method of iterative proportional fitting, commonly known as raking, will be used to allow simultaneous adjustments against multiple distributions (a simplified sketch follows this list).

  • In the fourth and final step, the resulting weights will be examined to detect extreme values that might require trimming. While trimming extreme weights will improve the overall efficiency of the analysis weights by reducing variability, it will be at the expense of minor misalignments against population distributions. This common compromise between bias reduction and variance inflation is one that will be carried out with consultation with the DOT’s NHTS project team and consultants.
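
The following is a minimal sketch of such a raking adjustment over two illustrative margins; the column names, benchmark totals, and convergence settings are assumptions for illustration, not the production procedure.

```python
import numpy as np
import pandas as pd

def rake(df, weight_col, margins, max_iter=50, tol=1e-8):
    """Iteratively ratio-adjust weights so weighted margins match benchmarks.
    margins: dict mapping a column name to a {category: benchmark total} dict."""
    w = df[weight_col].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for col, targets in margins.items():
            current = w.groupby(df[col]).sum()                      # weighted margin
            factors = df[col].map({c: targets[c] / current[c] for c in targets})
            w = w * factors                                         # ratio adjustment
            max_change = max(max_change, abs(factors - 1).max())
        if max_change < tol:
            break
    return w

# Illustrative data: two raking dimensions with assumed benchmark totals
df = pd.DataFrame({
    "region": np.random.choice(["Northeast", "Midwest", "South", "West"], 500),
    "own_home": np.random.choice(["Own", "Rent/Other"], 500),
    "design_wt": 1.0,
})
margins = {
    "region": {"Northeast": 87, "Midwest": 104, "South": 190, "West": 119},
    "own_home": {"Own": 325, "Rent/Other": 175},
}
df["raked_wt"] = rake(df, "design_wt", margins)
```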

Subsequently, additional analysis weights will be computed while recognizing specific adjustments that will be required for the corresponding target populations. For instance, person-level weights will start with household-level weights adjusted for the number of individuals surveyed in each responding household. Moreover, the benchmarking distributions that will be used to rake these weights will correspond to the U.S. population 5 years of age or older. Similar considerations will be necessary to produce analysis weights for vehicles and trips.

After calculating household- and person-level weights, a series of comparisons to external estimates will be conducted. Such comparisons may result in additional weighting adjustments that go beyond geodemographic corrections, often referred to as calibration. However, it is important to avoid excessive calibration adjustments, which can undermine the inferential integrity and independence of this survey and introduce undue variability through overcorrection of the weights. Some possible calibration adjustments include:

  • Estimates of trips by area type (rural vs. urban) from the American Community Survey (ACS).

  • Frequency of air travel based on the Airline Origin and Destination Surveys (DB1B).

  • Other trip-related estimates available from the DOT.

Lastly, comparable estimates from other government sources will be used to further assess external validity of NHTS 2020. Such sources can include the Federal Transit Administration’s National Transit Database (NTD) and FHWA’s Highway Performance Monitoring System (HPMS).

2.3 Nonresponse Bias Analysis

Typically, surveys that do not secure sufficiently high response rates are expected to conduct nonresponse bias analyses to assess the potential magnitude of bias before results are released. For this purpose, estimates of survey characteristics for both non-respondents and respondents will be required.

To illustrate mathematically, the bias of an estimated mean is the difference between what is estimated from respondents, $\bar{y}_r$, and the target parameter, $\bar{Y}$, which is the mean that would result if a complete census of the target population were conducted and all units responded. This bias can be expressed as follows1:

$$ \mathrm{Bias}(\bar{y}_r) = \bar{y}_r - \bar{Y} $$

However, for variables that are available from the sampling frame, $\bar{Y}$ can be estimated by $\bar{y}$, the mean over all sampled units (respondents and non-respondents), without any sampling error. In this case, the bias in $\bar{y}_r$ can then be estimated by:

$$ \widehat{\mathrm{Bias}}(\bar{y}_r) = \bar{y}_r - \bar{y} $$

and an estimate of the population mean based on respondents and non-respondents can be obtained by:

$$ \bar{y} = (1 - \bar{m})\,\bar{y}_r + \bar{m}\,\bar{y}_{nr} $$

In the above, $\bar{m}$ is the weighted unit nonresponse rate, based on weights prior to nonresponse adjustment, and $\bar{y}_{nr}$ is the corresponding mean for non-respondents. Consequently, the bias in $\bar{y}_r$ can be estimated by:

$$ \widehat{\mathrm{Bias}}(\bar{y}_r) = \bar{y}_r - \bar{y} = \bar{m}\,(\bar{y}_r - \bar{y}_{nr}) $$

That is, the estimate of the nonresponse bias is the difference between the mean for respondents and non-respondents multiplied by the weighted nonresponse rate, using the design weights prior to nonresponse adjustment. Here, a respondent will be defined as any sample member who is determined to be eligible for the study and has valid data for the selected set of key analytical variables.
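
A minimal sketch of this bias calculation for a single frame variable, using illustrative data and design weights, is:

```python
import numpy as np

def nonresponse_bias(y, design_wt, responded):
    """Estimate nonresponse bias in the respondent mean of a frame variable
    as m_bar * (ybar_r - ybar_nr), using weights prior to NR adjustment."""
    y, w, r = map(np.asarray, (y, design_wt, responded))
    m_bar = np.sum(w[~r]) / np.sum(w)              # weighted nonresponse rate
    ybar_r = np.average(y[r], weights=w[r])        # respondent mean
    ybar_nr = np.average(y[~r], weights=w[~r])     # non-respondent mean
    return m_bar * (ybar_r - ybar_nr)

# Illustrative frame variable (e.g., household size) and response indicators
y = np.random.poisson(2.5, size=1_000) + 1
w = np.ones(1_000)
responded = np.random.rand(1_000) < 0.55
bias_hat = nonresponse_bias(y, w, responded)
```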

The nonresponse bias investigation includes estimating the potential bias and statistical testing to determine if the observed bias is significant at a nominal level. Adjustment procedures will be designed as a second step to reduce nonresponse bias based on the information obtained during this investigation. In the third step, the nonresponse-adjusted weights will be applied and any remaining bias for key measures will be estimated. For this purpose, statistical tests will be performed to determine the significance of any remaining nonresponse biases. As mentioned earlier, findings from these investigations will guide the process of computing the final analysis weights.

2.4 Imputation of Missing Data

All surveys are subject to missing values, which can result from item nonresponse or from observed values that fail edit checks and are set to missing. Left untreated, missing data reduce the size of the analytic database, which in turn increases the resulting error margins for survey estimates. This problem compounds in multivariate analyses, such as regression, where a missing value on a single variable leads to omission of the entire record, even when all other variables in that record are observed. Moreover, missing data seldom occur at random and as such can lead to biased estimates when only observed values are used in statistical inferences. Consequently, imputation is commonly used as a compensatory method in survey research for filling in missing data2.

In practice, missing data are often imputed by relying on statistical methods to extrapolate what is missing based on observed values for a given record and those that serve as donors. Even if all missing values are not imputed, those from variables that will be needed for weighting have to be resolved before the weighting process can commence. For this purpose, the Weighted Sequential Hot-Deck method of SUDAAN will be used with the respondent survey data serving as donors to provide surrogate values for records with missing values. The basic principle of this methodology involves construction of homogeneous imputation classes, which are generally defined by cross-classification of key covariates, and then replacing missing values sequentially from a single pass through the survey data within each imputation class.
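
The following is a simplified, unweighted illustration of the hot-deck idea; the production imputation will use SUDAAN’s Weighted Sequential Hot-Deck, which also incorporates the survey weights, and the variables and imputation classes shown are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def sequential_hot_deck(df, target, class_cols):
    """Fill missing values of `target` within imputation classes by carrying
    forward the most recently observed (donor) value in a single sequential
    pass through the data (unweighted simplification of the hot-deck idea)."""
    out = df.copy()
    out[target] = (out.groupby(class_cols)[target]
                      .transform(lambda s: s.ffill().bfill()))
    return out

# Illustrative data: vehicle count missing for some households
df = pd.DataFrame({
    "region":   ["South", "South", "West", "West", "West"],
    "hh_size":  [2, 2, 3, 3, 3],
    "vehicles": [1.0, np.nan, 2.0, np.nan, 3.0],
})
imputed = sequential_hot_deck(df, "vehicles", ["region", "hh_size"])
```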

2.5 Variance Estimation Approach

Survey estimates can only be interpreted properly in light of their associated sampling errors. Since weighting often increases the variance of estimates, use of standard variance calculation formulae with weighted data results in artificially narrow confidence intervals and misleading tests of significance. With weighted data, two general approaches for variance estimation can be distinguished to account for the impact of weights. One is linearization, where a nonlinear estimator is approximated using a linear proxy whose variance can then be computed using standard variance estimation methods. The second is replication, in which several estimates of the population parameters under study are generated from different, yet comparable, parts of the original sample. The variability of the resulting estimates is then used to estimate the variance of the parameters of interest.

With the advances in software and hardware technologies, however, replication is rarely used in practice anymore. This resource-intensive option requires producing dozens of extra sets of analysis weights simply to approximate standard errors. For these reasons, the method of linearization will be used for this survey, which also avoids the need for special software that may not be accessible to most data users. More detailed descriptions of these variance estimation options may be found in Wolter (1985)3.
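
As a simplified illustration of the linearization approach, the variance of a weighted mean (a ratio of weighted totals) can be approximated from its linearized residuals; the sketch below assumes independently sampled units and ignores stratification and clustering, so it is not the full production variance estimator.

```python
import numpy as np

def linearized_variance_of_weighted_mean(y, w):
    """Taylor-linearization variance approximation for a weighted mean,
    treating units as independent draws (no strata or clusters)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    n = y.size
    ybar_w = np.sum(w * y) / np.sum(w)
    z = w * (y - ybar_w) / np.sum(w)        # linearized residuals
    return n / (n - 1) * np.sum(z ** 2)

# Illustrative data: daily trips per household with unequal weights
y = np.random.normal(9.5, 3.0, size=500)
w = np.random.lognormal(0.0, 0.35, size=500)
se = np.sqrt(linearized_variance_of_weighted_mean(y, w))
```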

Of note, there is another approximation method available for less technical users. Specifically, one can account for variance inflation due to weighting by simply adjusting the sample size by an estimate of the overall design effect and then using the resulting effective sample size when computing variances. For instance, the adjusted variance of a point estimate $\hat{p}$ can be obtained by multiplying the conventional variance of the given percentage, $\hat{p}(1-\hat{p})/n$, by the design effect (DEFF) and then using the resulting quantity as the adjusted variance:

$$ \mathrm{var}_{adj}(\hat{p}) = \mathrm{DEFF} \times \frac{\hat{p}\,(1-\hat{p})}{n} $$

Subsequently, the $(1-\alpha)\times 100$ percent confidence interval for the point estimate $\hat{p}$ can be approximated by:

$$ \hat{p} \;\pm\; z_{1-\alpha/2}\,\sqrt{\mathrm{var}_{adj}(\hat{p})} $$
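
A minimal sketch of this adjustment, using an assumed point estimate and design effect, is:

```python
import math

def design_effect_adjusted_ci(p_hat, n, deff, z=1.96):
    """95% confidence interval for a proportion using the effective sample size."""
    var_adj = deff * p_hat * (1 - p_hat) / n
    half_width = z * math.sqrt(var_adj)
    return p_hat - half_width, p_hat + half_width

# e.g., p_hat = 0.50 from 500 respondents with an assumed DEFF of 1.25
lower, upper = design_effect_adjusted_ci(0.50, 500, 1.25)   # roughly ±4.9%
```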

  3. Maximizing Response Rate

Engaging respondents with simple, clear, and effective communications and a concise, straightforward, and interesting survey instrument is the foundation of any successful survey. Communications around the 2020 NHTS will convey the importance, relevance, and societal value of the research in which panelists are being asked to participate.

Moreover, modest incentives will be used to engage respondents and entice completion. The key is to find a level of incentive that sufficiently motivates behavior without being excessive. Upon completion of the recruitment survey, the KP respondent will be awarded the equivalent of $2. KP rewards are provided in the form of points, which are redeemable for cash or other prizes of the panelist’s choosing. The incentive for completing a travel diary for all household members 5 and older varies with the size of the household, since the burden is higher in larger households. Households with three or fewer eligible members (i.e., 5 years of age or older) will receive $5 when all householders complete the travel survey. Households with four or more eligible members will receive $10 when all householders complete the travel survey.

In order to facilitate responses from those with disabilities, the online survey will meet Section 508 compliance using the rules specified in Section 1194.22. Furthermore, the share of online surveys completed via smartphones and other mobile devices has increased dramatically over the past several years, and a higher share of young people and persons of color typically respond with a mobile device. To ensure that all people, regardless of their completion device of choice, can access and easily complete the survey, the instrument will be designed to be mobile-friendly.

Finally, all 2020 NHTS survey materials will be available in both English and Spanish languages to help maximize participation by Hispanic households. This is standard practice for KP surveys and, as such, the panel includes panelists who prefer to take surveys in Spanish. Offering both languages helps ensure representativeness among the rapidly growing population of Hispanics in the U.S.

Previous NHTS studies have used a variety of methodologies and modes of data collection (e.g., face to face, telephone, or web). As previously noted, the 2017 NHTS utilized a two-step methodological approach. First, a national ABS sample of respondents was recruited primarily by mail; among those completing the Recruitment Survey, 95% of respondents did so by mailing back a paper survey, and surveys were considered complete if only the total household size was provided. Second, households were recontacted and subsequently completed the Retrieval Surveys either via telephone or web. The 2017 NHTS User Guide reports, for the national sample, weighted response rates of 31.3% for the Recruitment Survey, 52.1% for the Retrieval Survey, and 16.3% overall. The 2020 NHTS anticipates a response rate of 55% for the Recruitment Survey and an overall household-level cooperation rate of 26%.

  4. Material Review and Testing

All field materials have been tested to assess their utility and validity as survey instruments. These tests are described below.

4.1 Expert Review

The first step in developing a revised questionnaire and field materials for the NHTS was to conduct a review of existing materials to identify potential improvements. In the review process, a questionnaire design expert conducted a thorough appraisal of the questionnaire, providing guidance on refinement and further development activities factoring in the new and emerging modes of transit. The expert reviewer judged features of the instrument such as ease and efficiency of completion, avoidance of errors, ability to correct mistakes, and how the content is presented (overall appearance and layout of the instrument).

4.2 Cognitive Testing

Cognitive testing is routinely used for survey questionnaire development4 5 6. Cognitive testing is regularly used to determine question comprehension (What do respondents think the question is asking? What do specific words and phrases in the question mean to them?), information retrieval (What do respondents need to recall to answer the question? How do they do this?), decision processes (How do respondents choose their answers?) and usability (Can respondents complete the questionnaire easily and as they were intended to?).

The overarching objective of the cognitive testing phase of this project is to assess how well the NHTS is measuring critical concepts and behavior and, in turn, meeting its data users’ information goals. The NHTS is intended to measure local and long-distance trips from a representative set of 7,500 American households. The data elements of the NHTS are of critical importance to our nation’s transportation planners and enable informed decisions when planning transportation projects across the Nation.

The survey questions were designed to build on the existing travel measurement tools, to ensure we address the following emerging research questions, and to measure how well NHTS is meeting these goals:

  • Are respondents knowledgeable of all travel made by all members of the household? Can they accurately report household travel behavior by proxy or directly?

  • Are all trips, including loop trips and short trips, understood, recalled, and reported adequately, both directly and by proxy?

  • How frequently do households have online purchases delivered to their home location?

A cognitive testing approach was executed to improve the survey and reduce measurement error. Cognitive testing of the NHTS protocol consisted of conducting nine individual online interviews with participants who have demographic and other key characteristics similar to potential survey respondents. Interviewers engaged participants in one-on-one, open-ended cognitive interviews in which they probed a participant’s approach and thinking when responding to questions within the context of the full survey administration. Cognitive interviews covered both the initial recruitment screener and Day of Travel Diary Study for both self and proxy reporting.

The results of these tests were used to modify survey questions and response options to limit misunderstandings and inconsistencies in responses. The modified survey instrument was used to create the final set of survey questions.

  5. Contact Information

Mansour Fahimi, Ph.D., Chief Statistician

Ipsos Public Affairs

2020 K Street NW, Suite 410

Washington, DC  20006

Mobile: 240-565-8711

[email protected]





Appendices:

1. National Household Travel Survey, Compendium of Uses, January 2019 - May 2019 (http://nhts.ornl.gov/2009/pub/Compendium_2014.pdf)

2. Federal Register 60 Day Notice

3. Federal Register 30 Day Notice

4. Comments Submitted to Docket

5. Pre-notification Email

6. Invitation Email

7. Reminder Email

8. Partial Complete Reminder Email

9. Survey Link on Member Portal

10. Transportation Research Circular, No. E-C238, August 8-9 2018

11. National Household Travel Survey Questionnaire

1 Keeter, S., Miller, C., Kohut, A., Groves, R. M., and Presser, S. (2000). Consequences of Reducing Nonresponse in a Large National Telephone Survey. Public Opinion Quarterly, 64, 125–148.

2 Kalton, G., and Kasprzyk, D. (1986). The Treatment of Missing Survey Data. Survey Methodology, 12, 1–16.

3 Wolter, K. M. (1985). Introduction to Variance Estimation. New York: Springer-Verlag.

4 Forsyth, B., and Lessler, J. (2011). Cognitive Laboratory Methods: A Taxonomy. doi:10.1002/9781118150382.ch20.

5 DeMaio, T. J., & Rothgeb, J. M. (1996). Cognitive interviewing techniques: In the lab and in the field. In N. Schwarz & S. Sudman (Eds.), Answering questions: Methodology for determining cognitive and communicative processes in survey research (p. 177–195). Jossey-Bass.

6 Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., and Tourangeau, R. (2004). Survey Methodology. Hoboken, NJ: John Wiley & Sons.
