Aviation_Sec_ Survey Part B 9-13-10 to DHS


Aviation Security Customer Satisfaction Performance Measurement Passenger Survey

OMB: 1652-0013


ATTACHMENT B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS



1. Describe the potential respondent universe and any sampling methods to be used.


  1. Potential Respondent Universe: The potential respondent universe is the population of passengers who pass through the security checkpoints during the survey period at the airports nationwide where the survey is conducted.

  2. Sampling Methods: The survey will be administered using an intercept methodology, which permits passengers to complete the survey at their convenience and does not obstruct the flow of passengers at the airport.


Surveys are conducted by distributing response cards, conducting personal interviews, or directing passengers to kiosks. For response cards, TSA personnel hand out cards containing a URL for an online survey to passengers immediately following the passenger's experience with TSA's checkpoint security functions. Passengers are invited, though not required, to view and complete the survey via the online portal; they may also complete the card and return it directly to the TSA personnel distributing it. Personal interviews are conducted with passengers immediately after they have completed the security process. Where kiosks are available, TSA personnel inform passengers that they have the option of using a kiosk to participate in the survey. TSA personnel select passengers at random until the desired sample is obtained, and the intercept methodology randomly selects the times and locations at which TSA personnel approach passengers, in an effort to obtain data representative of all passenger demographics.



2. Describe the statistical procedures for the collection of information.


We anticipate that TSA will conduct this survey at 35 airports annually. All airports have at least one passenger security checkpoint, and some airports have as many as 20. We propose to randomly select times and checkpoints and conduct the survey until the desired sample size is reached. TSA seeks to sample a representative subset of passengers soon after they pass through the security checkpoint. A requirement of any sampling methodology is that each passenger in the population has a known probability of receiving a survey. The data collection methodology must also result in an unbiased sample (i.e., the characteristics of respondents are reflective of the population). We also seek to use a methodology that is simple and robust enough to be used consistently in all airports and monitored by TSA headquarters.
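The random selection of times and checkpoints described above can be sketched as a draw from a frame of day/time/checkpoint combinations, so that every block (and hence every passenger) has a known selection probability. The checkpoint labels, operating hours, and survey window below are illustrative assumptions, not TSA parameters:

```python
import random

# Illustrative parameters (assumptions, not TSA specifications).
checkpoints = ["A", "B", "C", "D", "E"]  # security checkpoints at one airport
hours = list(range(4, 24))               # operating hours, 04:00 to 24:00
days = list(range(1, 15))                # a hypothetical two-week survey window

# Every (day, hour, checkpoint) combination is a candidate survey block,
# so each block has a known, equal probability of selection.
frame = [(d, h, c) for d in days for h in hours for c in checkpoints]

random.seed(42)                          # fixed seed for a reproducible sketch
blocks = random.sample(frame, 4)         # e.g., four one-hour blocks per airport
```

Because blocks are drawn without replacement from the full frame, no day, time, or checkpoint is systematically favored.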


To achieve a margin of error of five percentage points, TSA would need to receive between 50 and 384 responses. Based on prior survey data and research, obtaining a sample of 384 requires distributing approximately 1,000 surveys.
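The 384-response figure is consistent with the standard sample-size formula for estimating a proportion at a 95 percent confidence level with a five-percentage-point margin of error and the conservative assumption p = 0.5. The sketch below also derives the approximately 1,000 distributed surveys from the response rate implied by the text (384 completions per 1,000 distributed, about 38.4 percent); the variable names are illustrative:

```python
z = 1.96   # critical value for a 95% confidence level
p = 0.5    # conservative (maximum-variance) proportion
e = 0.05   # margin of error: five percentage points

# Standard sample-size formula for a proportion: n = z^2 * p(1-p) / e^2.
n = (z ** 2) * p * (1 - p) / (e ** 2)    # 384.16, i.e. ~384 responses

# Surveys to distribute, given the ~38.4% response rate implied by the text.
response_rate = 0.384
distributed = round(round(n) / response_rate)   # ~1,000 surveys
```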


As an example of our sampling methodology, consider Baltimore/Washington International Airport (BWI). Based on an average of approximately 10,000,000 enplaned passengers per year (source: Bureau of Transportation Statistics, 2004) and five major security checkpoints (Piers A, B, C, D, and E) open for an average of 20 hours each day (4:00 a.m. to 12:00 a.m.) (source: Performance Management Information System), an average of approximately 301 passengers pass through each checkpoint in a given hour. Because day, time, and checkpoint combinations are drawn at random, it is acceptable for design purposes to assume that passenger volume is uniformly distributed across days, times, and checkpoints.
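The hourly volume is annual enplanements divided across days, checkpoints, and operating hours. Note that the round inputs quoted in this paragraph yield roughly 274 passengers per checkpoint-hour; the text's figure of approximately 301 presumably reflects the more precise enplanement and operating-hour data in the cited sources:

```python
enplaned_per_year = 10_000_000  # approximate BWI enplanements (figure from text)
checkpoints = 5                 # Piers A through E
hours_open = 20                 # 04:00 to 24:00 daily

# Volume per checkpoint-hour under a uniform-distribution assumption.
per_hour = enplaned_per_year / (365 * checkpoints * hours_open)
# ~274 with these round inputs; the text reports ~301 from more precise data.
```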


We attempt to distribute surveys to every passenger, so we would expect to distribute an average of approximately 301 surveys per hour. Assuming a sample size target of 384, we would need to distribute a total of 1,000 surveys, given our predicted response rate. We therefore need an estimated 3 hours and 20 minutes to distribute 1,000 surveys. To achieve this volume, we will randomly choose four one-hour blocks at randomly selected checkpoints. The number of checkpoints will vary with the size and configuration of the airport, but distribution will generally take place at one checkpoint.
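The time estimate follows directly from the figures above: at roughly 301 distributed surveys per hour, 1,000 surveys require a little over three hours, which motivates scheduling four one-hour blocks. A sketch of the arithmetic:

```python
surveys_needed = 1_000   # distributed surveys needed to reach ~384 responses
rate_per_hour = 301      # passengers (hence distributed surveys) per hour

hours_required = surveys_needed / rate_per_hour   # ~3.32 hours
minutes_required = hours_required * 60            # ~199 minutes, i.e. ~3h20m
```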



3. Describe methods to maximize response rates and to deal with issues of non-response.



Based on previously tested techniques and industry standards for increasing response rates, TSA employs the following:


  • The questionnaire is short, limited to 18 questions. We estimate that it will take respondents no more than five minutes to complete the survey.

  • The questionnaire is professionally laid out and easy to read.

  • Survey administrators will be dressed professionally and will have airport badges. They will identify themselves as representatives of the Federal government.


Passengers generally welcome the opportunity to contribute to the improvement of TSA's aviation security operations and respond to the survey at a rate sufficient for the results to be nationally representative. Our experience demonstrates the general willingness of the public to respond to a survey conducted by TSA.


Assessing response bias is difficult because TSA does not know the characteristics of individuals who choose not to respond. To assess and compensate for the introduction of survey bias, we will employ the following techniques, applying industry-standard methods as appropriate:


  • Consider prior survey results. In previous years, we distributed surveys over a two- to three-week period at each site. We hypothesized that, provided conditions did not systematically change at the airport from one period to the next (and they did not), results should be similar across periods. Indeed they were, providing evidence of the stability of the samples across surveys.

  • Monitor response rates and responses. We have been able to compare response rates and responses across various checkpoint/time strata within each airport. We also know when flights are disproportionately composed of business or leisure travelers (based on industry analyses by day of the week and time of day), as well as the passenger volume and wait time at the checkpoint during each shift (based on the tally sheets discussed in the previous section and other data collected at the checkpoint by TSA). In our experience, none of these factors has corresponded to any substantial difference in response rates.


We believe that these analyses, combined with a sound methodology executed rigorously, will give TSA a high level of confidence in the results. To date, we have found no evidence of response bias in similar efforts, and we believe we will have sufficient sample opportunities to overcome the impact of any bias.


4. Describe any tests of procedures or methods.


TSA has not tested the proposed procedures or methods, but TSA uses industry standards to conduct this survey as discussed above.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


The following individuals are providing continued oversight of the statistical aspects of the design:


Linda King, TSA, 571-227-3572

Dr. John Nestor, TSA, 571-227-1636

Sue Hay, TSA, 571-227-3694




File Type: application/msword
File Title: ATTACHMENT B
Author: Katrina Wawer
Last Modified By: jeffrey.champagne
File Modified: 2010-09-14
File Created: 2010-09-14
