Aviation Security Customer Satisfaction Performance Measurement Passenger Survey

ATTACHMENT B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

OMB No. 1652-0013



1. Describe the potential respondent universe and any sampling methods to be used.


The respondent universe is the population of passengers who proceed through the security checkpoints at a given airport during the survey period. TSA plans to conduct a passenger survey at airports nationwide using an intercept methodology. Under this methodology, TSA personnel who are not in uniform hand passengers business-card-style forms containing a URL for an online survey; the forms are delivered immediately after the passenger’s experience with TSA’s checkpoint security functions. TSA personnel will select passengers using a random method until the desired sample size is obtained. Passengers are invited, though not required, to view and complete the survey via an online portal. The times and locations at which TSA personnel distribute the forms are themselves selected randomly, in an effort to obtain data representative of all passenger demographics.


TSA seeks to sample a representative subset of passengers soon after they pass through the security checkpoint. Federal Security Directors (FSDs) at all airports have the option of conducting this survey. We anticipate that 25 airports will conduct it annually. All airports have at least one passenger security checkpoint, and some have as many as 20. Any sampling methodology must give each passenger in the population an equal probability of receiving a survey, and the data collection must yield an unbiased sample (i.e., the characteristics of respondents reflect those of the population). We also seek a methodology that is simple and robust enough to be used consistently at all airports and monitored by TSA headquarters.


We propose to randomly select times and checkpoints and distribute a business-card-style form to every passenger until the desired sample size is reached.


To achieve a margin of error of no more than five percentage points, TSA would need to receive between 50 and 384 responses. Based on prior survey data and research, obtaining 384 responses requires distributing approximately 1,000 surveys; TSA therefore assumes 384 respondents per 1,000 surveys distributed. The burden on passengers who choose to respond is approximately five minutes per respondent. With 384 respondents per airport per year, the total burden at one airport is 384 respondents x 5 minutes = 1,920 minutes, or 32 hours. We estimate that 25 airports will conduct the survey each year, for a total of 384 respondents x 25 airports = 9,600 respondents a year and a total burden of 9,600 respondents x 5 minutes = 48,000 minutes, or 800 hours per year.
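

The 384-response target matches the conventional sample size for estimating a proportion at a 95 percent confidence level with a five-percentage-point margin of error. The sketch below (illustrative Python, not part of the collection instrument) reproduces that calculation and the burden arithmetic above.

```python
# Sample size for a proportion: 95% confidence, +/- 5 percentage points, worst-case p = 0.5
z = 1.96   # z-score for 95% confidence
p = 0.5    # most conservative assumed proportion
e = 0.05   # margin of error (five percentage points)
n = (z ** 2) * p * (1 - p) / (e ** 2)
print(round(n))                                  # 384 (384.16 before rounding)

# Burden estimate, using the figures stated above
respondents_per_airport = 384
minutes_per_response = 5
airports = 25

per_airport_minutes = respondents_per_airport * minutes_per_response   # 1,920 minutes
per_airport_hours = per_airport_minutes / 60                            # 32 hours

annual_respondents = respondents_per_airport * airports                 # 9,600 respondents
annual_minutes = annual_respondents * minutes_per_response              # 48,000 minutes
annual_hours = annual_minutes / 60                                      # 800 hours per year
print(per_airport_hours, annual_respondents, annual_hours)
```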


As an example of our sampling methodology, consider Baltimore-Washington International Airport (BWI). Based on an average of approximately 10,000,000 enplaned passengers per year (source: Bureau of Transportation Statistics, 2004) and five major security checkpoints (Piers A, B, C, D, and E) open an average of 20 hours each day (4:00 AM to 12:00 AM) (source: Performance Management Information System), an average of approximately 301 passengers pass through each checkpoint in a given hour. Because day-time-checkpoint combinations are drawn randomly, the simplifying assumption that passenger volume is uniformly distributed across days, times, and checkpoints is acceptable for designing the sampling methodology.
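

For reference, the hourly throughput figure follows from dividing annual enplanements by the total checkpoint operating hours. The sketch below is illustrative only; it uses the rounded figures cited above and therefore yields roughly 274 passengers per hour, whereas the approximately 301 reported presumably reflects the unrounded enplanement count.

```python
annual_enplanements = 10_000_000   # rounded BTS 2004 figure cited above
checkpoints = 5                    # Piers A through E
hours_open_per_day = 20            # 4:00 AM to 12:00 AM
days_per_year = 365

per_checkpoint_per_hour = annual_enplanements / (days_per_year * checkpoints * hours_open_per_day)
print(round(per_checkpoint_per_hour))   # ~274 with the rounded input; the document reports ~301
```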


Surveys will be distributed to every passenger who passes the distribution point, so we expect to distribute an average of approximately 301 surveys per hour. With a sample size target of 384 and our predicted response rate, we would need to distribute a total of 1,000 surveys, which requires an estimated 3 hours and 20 minutes. To achieve this volume, we will randomly choose four one-hour blocks and randomly select checkpoints.
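

A minimal sketch of how the distribution blocks could be drawn, assuming a one-week survey window and a uniform draw without replacement over the available day/hour/checkpoint combinations (the window length is an illustrative assumption; the other figures are from the BWI example above):

```python
import math
import random

checkpoints = ["Pier A", "Pier B", "Pier C", "Pier D", "Pier E"]
open_hours = range(4, 24)      # checkpoints open 4:00 AM to 12:00 AM
survey_days = range(1, 8)      # assumed one-week survey window

surveys_per_hour = 301         # average distribution rate from the BWI example
target_distributed = 1000      # surveys needed to yield roughly 384 responses
blocks_needed = math.ceil(target_distributed / surveys_per_hour)   # 4 one-hour blocks

# Under the uniform-volume assumption above, drawing blocks uniformly at random gives
# every passenger an approximately equal probability of being offered a survey.
all_blocks = [(d, h, c) for d in survey_days for h in open_hours for c in checkpoints]
for day, hour, checkpoint in random.sample(all_blocks, blocks_needed):
    print(f"Day {day}, {hour:02d}:00-{hour + 1:02d}:00, {checkpoint}")
```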


2. Describe the statistical procedures for the collection of information.


We propose to use an intercept methodology, in which selected passengers are handed a business-card-style form immediately after they pass through the security checkpoint. The form contains a URL through which respondents access the online survey and submit their responses. The proposed intercept methodology:


  • Permits passengers to complete the survey at their convenience;

  • Does not obstruct the flow of passengers at the airport; and

  • Saves money compared to an interviewer-administered survey and avoids the cost of postage.


Transportation Security Officers (TSOs) not in uniform will act as survey administrators and will distribute the survey to every passenger who passes their fixed point just inside the security checkpoint.


Survey administrators will be directed to say, “Please tell us about your experience at the security checkpoint today,” as they distribute the survey. The survey administrators are briefed on the objectives of the survey, given an overview of the checkpoint layout and informed of the location of the TSA checkpoint supervisor in case of a problem.


Survey administrators keep track of how many surveys they distribute each shift, using a shift tally sheet on which they record the airport name, date, time, and checkpoint of the shift, as well as serial number ranges of all surveys distributed.
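

As an illustration of the information captured per shift, the sketch below models the tally-sheet fields as a simple record type (the field names and example values are hypothetical, not the official form layout):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ShiftTally:
    airport: str        # airport code, e.g. "BWI"
    shift_date: date
    start_time: str     # e.g. "14:00"
    end_time: str       # e.g. "15:00"
    checkpoint: str     # e.g. "Pier C"
    first_serial: int   # first serial number of the surveys handed out
    last_serial: int    # last serial number of the surveys handed out

    def surveys_distributed(self) -> int:
        return self.last_serial - self.first_serial + 1

# Hypothetical entry for a single one-hour distribution block
tally = ShiftTally("BWI", date(2014, 6, 3), "14:00", "15:00", "Pier C", 2001, 2301)
print(tally.surveys_distributed())   # 301
```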


3. Describe methods to maximize response rates and to deal with issues of non-response.



Based on previously tested techniques and industry standards for increasing response rates, TSA employs the following:


  • The questionnaire is short, consisting of 10 to 15 questions. We estimate that it will take respondents no more than five minutes to complete the survey.

  • The questionnaire is professionally laid out and easy to read. The survey card includes the TSA logo; passengers will be more willing to complete a survey sponsored by, and clearly identified with, TSA than one from a commercial entity.

  • Survey administrators will be dressed professionally and will have airport badges. They will identify themselves as representatives of the Federal government.


Passengers generally welcome the opportunity to contribute to the improvement of TSA’s aviation security operations and respond to the survey at a rate sufficient for the results to be nationally representative. Findings from our experience demonstrate the general willingness of the public to respond to a survey conducted by TSA.


Assessing response bias is difficult because TSA does not know the characteristics of individuals who choose not to respond. However, several industry-standard techniques exist to assess the prevalence of response bias indirectly, and our methodology includes the provisions necessary to employ them:


  • In previous years, we distributed surveys over a two- to three-week period at each site. We hypothesized that, provided conditions did not systematically change at the airport from one period to the next (which they did not), results should be similar across periods. Indeed they were, providing evidence of the stability of the samples across surveys.

  • We know at which airport, checkpoint, day, and time of day each survey was distributed because respondents record this information in the online portal. Thus, we have been able to compare response rates and responses across checkpoint/time strata within each airport. We also know when flights are disproportionately composed of business or leisure travelers (based on industry analyses by day of the week and time of day), as well as the passenger volume and wait time at the checkpoint during each shift (based on the tally sheets discussed in the previous section and other data collected at the checkpoint by TSA). In our experience, none of these factors corresponded to any substantial difference in response rates, and we will continue to monitor them throughout the survey effort; a sketch of one such comparison follows this list.
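

One standard way to make the stratum comparison concrete is a chi-square test of homogeneity on distributed-versus-returned counts by stratum. The sketch below is illustrative only; the counts are invented and are not drawn from TSA's actual results.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of surveys returned vs. not returned, by checkpoint/time stratum
#                  returned  not returned
strata_counts = [
    [150, 250],   # Pier A, weekday morning
    [142, 258],   # Pier C, weekday afternoon
    [155, 245],   # Pier E, weekend evening
]

chi2, p_value, dof, expected = chi2_contingency(strata_counts)

# A non-significant result is consistent with response rates that do not differ
# materially across strata, i.e., no evidence of the response bias discussed above.
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```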


We believe that these analyses, combined with a sound methodology executed rigorously, will give TSA a high level of confidence in the results. To date, we have found no evidence of response bias in similar efforts.


4. Describe any tests of procedures or methods.


TSA has not yet tested the proposed procedures or methods. TSA plans to conduct a pilot effort to confirm the effectiveness of the sampling methods and projected response rates based on this type of intercept survey methodology.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


The following individuals are providing continued oversight of the statistical aspects of the design:


Linda King, TSA, 571-227-3572

Dr. John Nestor, TSA, 571-227-1636

Sue Hay, TSA, 571-227-3694

