Aviation Security Customer Satisfaction Performance Measurement Passenger Survey

OMB: 1652-0013

ATTACHMENT B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS



1. Describe the potential respondent universe and any sampling methods to be used.


The respondent universe is the population of passengers who proceed through the security checkpoints at a given airport during the survey period. We plan to continue conducting one survey per year at approximately 30 airports (as described in Section 12 of Attachment A), with data collection occurring during a randomly selected two- to three-week period.


For each two- to three-week survey, TSA seeks to sample a representative subset of passengers soon after they pass through the security checkpoint. All airports have at least one passenger security checkpoint, and some have as many as 20. A requirement of any sampling methodology is that each passenger in the population has an equal a priori probability of receiving a survey. The data collection methodology must also result in an unbiased sample (i.e., the characteristics of respondents must reflect those of the population). We also seek a methodology that is simple and robust enough to be used consistently in all airports and monitored by TSA headquarters.


We propose to randomly select several six-hour time periods across checkpoints over each week and then distribute surveys to every tenth passenger throughout each six-hour period. This interval may be adjusted at some airports to match anticipated traffic, but it will remain constant during the distribution period for the airport. We will select enough six-hour periods that, based on the average estimated passenger volume per period, we will distribute a sufficient number of surveys over the two- to three-week period to obtain a statistically valid response given the estimated response rate.
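To make the selection step concrete, the following is a minimal sketch of how the random day/time/checkpoint draw could be carried out. The four checkpoint names, the three time blocks, and the seed are illustrative assumptions for the sketch, not TSA parameters:

    import random

    CHECKPOINTS = ["Pier A", "Pier B", "Pier C", "Pier D"]
    DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
    TIME_BLOCKS = ["04:00-09:59AM", "10:00AM-03:59PM", "04:00-09:59PM"]

    def select_periods(n, rng):
        """Draw n (day, time block, checkpoint) combinations at random,
        without replacement, from all combinations available in one week."""
        frame = [(d, t, c) for d in DAYS for t in TIME_BLOCKS for c in CHECKPOINTS]
        return rng.sample(frame, n)

    rng = random.Random(2006)        # fixed seed so the example is reproducible
    week1 = select_periods(7, rng)   # seven distribution periods in week 1
    week2 = select_periods(6, rng)   # six distribution periods in week 2
    for period in week1 + week2:
        print(period)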


The primary alternative methodology would entail stratifying the evaluation period into strata defined by day, time period, and individual checkpoint; selecting some strata with probability proportional to each stratum's size (measured by its proportion of the total passenger volume through the airport in that period); and distributing a pre-determined, fixed number of surveys within each stratum. After investigating this methodology, we concluded that it was not practical, because sufficiently reliable data about relative passenger volumes through the checkpoints by time of day do not exist.


Using the methodology described in Section 2, we have typically experienced a response rate of 20 percent at most airports. We seek an overall sample size of 300-500 cases per airport, sufficient for a margin of error of 4-6 percentage points (with 95 percent confidence) for our performance measures. (The actual target sample size for each survey will be established based on available resources.)
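The 4-6 point range can be verified with the standard worst-case formula for the margin of error of a proportion, z * sqrt(p(1-p)/n) with p = 0.5; a minimal check:

    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        """Worst-case (p = 0.5) margin of error at 95 percent confidence
        for a proportion estimated from a simple random sample of size n."""
        return z * sqrt(p * (1 - p) / n)

    for n in (300, 500):
        print(n, f"{100 * margin_of_error(n):.1f} points")  # 300 -> 5.7, 500 -> 4.4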


As an example of our sampling methodology, consider Baltimore-Washington International airport (BWI). Based on an average of approximately 10,000,000 enplaned passengers per year (source: Bureau of Transportation Statistics 2004) and four major security checkpoints (Piers A, B, C, and D) open for an average of 18 hours each (04:00AM-10:00PM) (source: PMIS), an average of approximately 2,283 passengers pass through each checkpoint in a given six-hour period. Because we draw day-time-checkpoint combinations randomly, it is acceptable for design purposes to treat passenger volume as uniformly distributed across days, times, and checkpoints.
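The figures in this example can be reproduced with a few lines of arithmetic (the 2,778 distribution target is the one used in the next paragraph):

    import math

    annual_passengers = 10_000_000   # BWI enplanements (BTS 2004, per the text)
    checkpoints = 4                  # Piers A-D
    blocks_per_day = 3               # 18 operating hours / six-hour blocks

    per_block = annual_passengers / 365 / checkpoints / blocks_per_day
    print(round(per_block))          # ~2,283 passengers per checkpoint per block

    surveys_per_block = per_block / 10   # every tenth passenger: ~228 surveys
    target_distribution = 2_778          # distribution target from the text
    print(math.ceil(target_distribution / surveys_per_block))  # 13 six-hour periods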


We will distribute surveys rigorously to every tenth passenger at BWI, so we would expect to distribute an average of approximately 228 surveys per six-hour period. Assuming a sample-size target of 500 and our 20 percent response-rate prediction, we need to distribute at least 2,500 surveys; we plan to distribute approximately 2,778 over the two-week survey period to provide a margin. Thus, we need an estimated 13 six-hour periods over the course of the two-week sample period, distributed randomly among the four checkpoints, in order to distribute the required number of surveys. To achieve this volume, we will randomly choose seven six-hour checkpoint-blocks in the first week from among the 84 available (3 six-hour blocks per day x 4 checkpoints x 7 days), and six blocks in the second week, for a total of 13 distribution periods over the two weeks. The following table displays an example of the results of such a random selection:


Week 1

                            Sun   Mon   Tue   Wed   Thu   Fri   Sat
Pier A   04:00-09:59AM       -     -     -     -     -     -     -
         10:00AM-03:59PM     -     -     -     -     -     -     -
         04:00-09:59PM       -     -     -     -     -     -     -
Pier B   04:00-09:59AM       -     X     -     -     -     -     -
         10:00AM-03:59PM     -     -     -     -     -     -     X
         04:00-09:59PM       -     -     -     -     -     -     -
Pier C   04:00-09:59AM       X     -     -     -     -     -     -
         10:00AM-03:59PM     -     -     -     -     -     -     -
         04:00-09:59PM       -     -     -     -     -     -     -
Pier D   04:00-09:59AM       -     -     -     -     X     -     -
         10:00AM-03:59PM     -     X     X     -     -     -     X
         04:00-09:59PM       -     -     -     -     -     -     -

Week 2

                            Sun   Mon   Tue   Wed   Thu   Fri   Sat
Pier A   04:00-09:59AM       -     -     -     -     -     -     -
         10:00AM-03:59PM     -     -     X     -     -     -     X
         04:00-09:59PM       -     -     -     -     -     -     X
Pier B   04:00-09:59AM       -     -     -     -     -     X     X
         10:00AM-03:59PM     -     -     -     -     -     -     -
         04:00-09:59PM       -     -     -     -     -     -     -
Pier C   04:00-09:59AM       -     -     -     -     -     -     -
         10:00AM-03:59PM     -     -     -     -     -     -     -
         04:00-09:59PM       -     -     -     -     -     -     -
Pier D   04:00-09:59AM       -     -     -     -     -     -     X
         10:00AM-03:59PM     -     -     -     -     -     -     -
         04:00-09:59PM       -     -     -     -     -     -     -

(X = selected six-hour distribution period)

We will conduct each survey over a two-week period within the evaluation period, with sampling at each airport conducted in an analogous fashion based on its available checkpoints and operating hours. The checkpoint configurations, operating hours, and available enplanement data enable this sampling methodology to be used at all airports.


2. Describe the statistical procedures for the collection of information.


We propose to use an intercept methodology for survey distribution, in which selected passengers are handed surveys immediately after they pass through the security checkpoint. We propose to collect completed surveys by mail, with the possibility of using drop boxes in the terminal in some instances. The intercept/mail-back methodology, which we used successfully in our previous efforts, best balances cost, implications for the representativeness of the sample, replicability at all airports, and impact on respondents. In particular, the proposed intercept methodology:


  • Results in a sample that naturally reflects flight frequency and passenger volume (and wait time) at the checkpoint.

  • For a given number of administrator hours, gives the most passengers an opportunity to be sampled.

  • Permits passengers to complete the survey at their convenience.

  • Does not obstruct the flow of passengers at the airport.

  • Saves money compared to an interviewer-administered survey.


Administrators will distribute the survey to every tenth passenger who passes their fixed point just inside the security checkpoint. If there is more than one exit point from the security checkpoint, administrators will rotate between exit points every half-hour and distribute to every (10/n)th passenger who passes, where n is the number of exit points; because the staffed exit carries roughly 1/n of the traffic, this keeps the overall sampling rate at one in ten.
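A quick arithmetic check of the rotation rule, sketched under the assumption that traffic splits evenly across exit points:

    # n = number of exit points; the administrator staffs one exit at a time
    # and samples every (10/n)th passenger there.
    for n in (1, 2, 5):
        rate_at_staffed_exit = n / 10      # 1 / (10/n)
        share_of_total_traffic = 1 / n     # fraction of passengers at that exit
        print(n, rate_at_staffed_exit * share_of_total_traffic)  # 0.1 each time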


We have experienced a response rate of approximately 20 percent for the survey. Administrators will be directed to say, “Please tell us about your experience at the security checkpoint today,” as they distribute the survey. Before their shifts, the administrators are:


  • provided airport badges;

  • briefed on the objectives of the survey;

  • given an overview of the checkpoint layout and informed of the location of the TSA checkpoint supervisor in case of a problem;

  • reminded of the importance of statistical rigor in distribution;

  • provided the name and phone number of a TSA contact at the airport; and

  • provided instructions for addressing certain contingencies that arise in any intercept survey, as well as some contingencies unique to the environment of the TSA survey.


The following list pairs each contingency with the instructions administrators are trained to follow:


Contingency: The tenth passenger refuses to accept a survey.
Instructions: Distribute the survey to the next passenger, and begin counting again from “one” after the passenger who receives the survey. Keep repeating until a passenger accepts the survey.

Contingency: Passengers approach in a group, and I cannot tell which passenger is the tenth.
Instructions: Count in ascending order from the person closest to you.

Contingency: The passenger wishes to return the survey to me instead of mailing it in.
Instructions: Accept the survey and thank the passenger. Drop the survey in a mailbox after your shift. (TSA airport staff are given the same instructions for surveys returned to them.)

Contingency: I need to take a brief break.
Instructions: Take a break, and begin counting again at “one” when you return. Ensure that you complete six hours of actual survey distribution.

Contingency: A passenger other than the tenth sees me distributing the survey and asks for one.
Instructions: At first, politely refuse and explain that it is a random survey. If the passenger persists, hand him/her a survey, and then skip the next passenger who was supposed to receive a survey.*

Contingency: One passenger in a group traveling together receives the survey, and the rest of the group asks for surveys as well.
Instructions: At first, politely refuse and explain that it is a random survey. If the passengers persist, hand them surveys, and then skip the next passenger who was supposed to receive a survey.* (Do not skip more than one passenger.)

Contingency: The tenth passenger engages me for a particularly long time, such that I am unable to distribute to the next tenth passenger.
Instructions: Distribute to the next passenger you are able to reach, and begin counting to ten again from that point.

Contingency: I run out of surveys.
Instructions: This should not happen, as each administrator is given several batches of surveys. However, if you do run out, note the time. (In these cases, an alternative shift is scheduled if necessary to reach the distribution target.)

Contingency: I lose a batch of surveys (e.g., a passenger takes them).
Instructions: Note the serial numbers so that they will not be tabulated if they are returned.

Contingency: A passenger asks for more information on the purpose of the survey.
Instructions: Respond politely and briefly. Say that the survey is designed to gauge passenger experiences with the security checkpoint and is sponsored by the Transportation Security Administration. If the passenger desires further information and you are unable to provide it, refer the passenger to the airport or headquarters contact.

Contingency: A passenger asks for more information about TSA.
Instructions: Politely refer the passenger to the TSA Contact Center. (The mailing address, web address, e-mail address, and toll-free phone number are also printed on the survey form.)

Contingency: A passenger asks for more information about the airport, such as where a certain concourse is.
Instructions: Respond politely and briefly. If you are unable to answer the question, refer the passenger to an airport traveler assistance counter or TSA checkpoint supervisor.

Contingency: A passenger wishes to make a specific complaint or compliment about his/her experience going through the checkpoint.
Instructions: Politely refer the passenger to the TSA checkpoint supervisor.

Contingency: TSA or airport employees or law-enforcement personnel question my presence.
Instructions: This should not happen, as both TSA and airport administration staff are briefed on the survey and provide all necessary permissions. However, if it does occur, refer them to the airport contact for the survey.

Contingency: TSA airport personnel attempt to interfere with the survey or influence the results.
Instructions: Notify the airport contact for the survey and the headquarters contact.

Contingency: I cannot make it to my shift because I am sick.
Instructions: Contact your supervisor. (Supervisors are instructed to schedule an alternate shift on the next available day, at the same time period and checkpoint as the cancelled shift, if an alternate administrator is not available for the originally scheduled shift.)

Contingency: I have a miscellaneous problem.
Instructions: Contact the TSA airport point of contact for the survey or, if necessary, the headquarters contact.

* In some cases, it might be appropriate to discount such surveys (e.g., by writing the serial numbers of the extra surveys distributed on the daily tally sheet and then excluding them from tabulation if they are returned). We do not anticipate that this will be a common problem, however, and it was not in the pilot test.


Administrators keep track of how many surveys they distribute each shift using a shift tally sheet, on which they record the airport, date, time, and checkpoint of the shift, as well as the serial-number ranges of all surveys distributed. Administrators are also provided a toll-free phone number with voice mail and directed to call it from a mobile phone or pay phone at the airport at the conclusion of each shift to relay this information. The tally sheets allow us to determine the shift in which each returned survey was distributed, enabling analyses of the representativeness of the sample. The toll-free phone number allows us to identify any anomalies in the distribution (e.g., not enough surveys distributed, incorrect checkpoint or time) so that they may be corrected within one day.
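To illustrate how tally-sheet data could support the one-day anomaly check, here is a hypothetical record structure; the field names and the 50 percent shortfall threshold are our own assumptions for the sketch, not TSA's specification:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ShiftTally:
        airport: str
        date: str           # e.g., "2006-05-14"
        time_block: str     # e.g., "04:00-09:59AM"
        checkpoint: str
        serial_start: int   # first serial number distributed this shift
        serial_end: int     # last serial number distributed this shift

        @property
        def distributed(self) -> int:
            return self.serial_end - self.serial_start + 1

    def flag_shortfall(tally: ShiftTally, expected: int,
                       tolerance: float = 0.5) -> Optional[str]:
        """Flag a shift whose count falls well below expectation so it can
        be corrected within one day (the threshold is an assumption)."""
        if tally.distributed < expected * tolerance:
            return (f"{tally.airport} {tally.date} {tally.checkpoint}: "
                    f"{tally.distributed} distributed, ~{expected} expected")
        return None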


3. Describe methods to maximize response rates and to deal with issues of non-response.


With a response rate of approximately 20 percent, analyzing, and to the extent possible mitigating, non-response is crucially important to assure that the results are statistically valid. This analysis was performed in the initial pilot test, and we concluded from the test that the methodology can be relied upon to produce results that are generally statistically valid. During the nationwide survey we employed a number of techniques to make the response rate as high as possible given the nature of the survey, while administering the survey in a way that allows us to analyze non-response and correct for it if necessary once the results are tabulated.


We will employ the following industry-standard, pilot-tested techniques to increase the response rate:


  • The questionnaire is short, with about eleven closed-ended questions. We estimate that it will take respondents no more than five minutes to complete the survey.

  • The questionnaire is professionally laid out and easy to read. The form includes the TSA logo. Passengers will be more willing to complete a survey sponsored by, and clearly identified with, TSA than one identified with a commercial entity.

  • The interviewers will be experienced professionals trained in presenting a friendly, inviting environment to passengers and in skillfully reducing the prevalence of passenger refusals.

  • Interviewers will be dressed professionally and will have airport badges. They will identify themselves as representatives of the Federal government.


Passengers generally welcome the opportunity to contribute to the improvement of TSA's aviation security operations, and they respond to the survey at a rate sufficient for the results to be nationally representative. Our experience demonstrates the general willingness of the public to respond to a survey conducted by TSA.


In order to assure a representative sample, we employ a methodology that distributes the survey card to passengers at a fixed interval. In addition to the protocols described in the previous section, which address contingencies that might interfere with administrators' ability to obtain a rigorous sample, we have developed procedures to address any disruption in the airport environment (e.g., a security incident that causes the concourse to be cleared) that prevents completion of data collection in the designated interval, as well as any other need to add data collection shifts.


Assessing response bias is difficult because, for the most part, we do not know the characteristics of individuals who choose not to respond. However, several industry-standard techniques exist to indirectly assess the prevalence of response bias, and our methodology includes the provisions necessary to employ them:


  • In previous years, we distributed surveys over a two- to three-week period at each site. We hypothesized that, assuming conditions did not systematically change at the airport from one period to the next (and they did not), results should be similar across the periods. Indeed they were, providing evidence of the stability of the samples across surveys.

  • We know at which airport, checkpoint, day, and time of day each survey was distributed, because the surveys are serialized and recorded. We have therefore been able to compare response rates and responses across checkpoint/time strata within each airport. We also know when flights are disproportionately comprised of business or leisure travelers (based on industry analyses by day of the week and time of day), and we know the passenger volume and wait time at the checkpoint during each shift (based on the tally sheets discussed in the previous section and other data collected at the checkpoint by TSA). In our experience, none of these factors has corresponded to any substantial difference in response rates (a simple version of this comparison is sketched after this list). We will continue to monitor them throughout the survey efforts.
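As an illustration of the stratum-level comparison in the last bullet, the following sketch applies a chi-square test of homogeneity to response counts by stratum, using scipy.stats.chi2_contingency. The counts are invented for illustration; in practice the strata would come from the tally sheets:

    from scipy.stats import chi2_contingency

    # Rows are strata (checkpoint x time block); columns are
    # [surveys returned, surveys not returned]. Counts are invented.
    observed = [
        [52, 198],  # e.g., Pier A mornings
        [47, 203],  # e.g., Pier B afternoons
        [55, 195],  # e.g., Pier D evenings
    ]
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a large p-value is consistent
                                              # with equal response rates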


We believe that these analyses, combined with a sound methodology executed rigorously, will give TSA a high level of confidence in the results. To date, we have found no evidence of response bias in this effort.


4. Describe any tests of procedures or methods.


TSA conducted a pilot test of our proposed methodology in October-December 2002 (OMB No. 2110-0011) with the intent of evaluating whether the methodology could be applied cost-effectively nationwide to achieve statistically valid, useful performance data for TSA. The pilot test enabled us to implement our sampling system, refine our distribution protocol, and analyze response-rate and satisfaction results to assess the apparent validity of the sample. We concluded from the pilot test that the methodology was viable for nationwide implementation, and developed strategies, as discussed throughout this document, to monitor the results and assure the continued validity of the process over time.


Based on the successes and lessons learned from the pilot test, TSA has successfully implemented a nationwide survey program annually at 25-30 airports to compute the CSI-A. Each participating airport has received statistically reliable results at a given confidence and error level. Those results have been aggregated and combined with the other data components to produce CSI-A scores for FY04 and FY05; the FY06 score will be completed in the near future.


Sampling and administration methodology


We tested an intercept methodology in which respondents mail the survey back. We employed professional survey administrators with experience conducting intercept surveys in airports. The administrators distributed surveys at a fixed interval to passengers who passed through the security checkpoint during the days, six-hour time blocks, and checkpoints randomly selected for the study. We used the sampling methodology and administration protocols discussed in Section 2.


We have been able to obtain access for the administrators at each of the sites and complete the survey with the cooperation of—and without burdening—the airport administration. The TSA Customer Service Manager at each airport served as the point of contact for the survey. All aspects of the administration have proceeded smoothly at each airport, and the TSA staff at each airport reported that the process yielded valuable data.


Response rates and analysis of representativeness


Response rates have been approximately 25 percent for the previous nationwide efforts, varying somewhat by airport. Response rates were essentially consistent across studies within each airport; different days, times, and checkpoints within studies all produced similar response rates. These findings give us confidence in the stability of the methodology for collecting a statistically representative sample of customer opinion.


Satisfaction patterns and analysis of representativeness


In the pilot test, the analysis of the results focused on assessing the likelihood of response bias, that is, whether the sample of respondents was representative of the population or was instead distorted by the survey methodology chosen. We generally cannot be certain of the absence or prevalence of response bias, because we do not know the satisfaction patterns of passengers who choose not to respond, but we conducted several analyses of the results to gain circumstantial, or indirect, evidence for or against a response bias. These results generally gave us confidence in the validity of the sample:


  • As discussed above, response rates and satisfaction patterns were consistent across studies within airports, satisfaction patterns were consistent across all forms within airports, and response rates and satisfaction patterns were consistent across administration shifts. In short, where administration conditions were the same, response rates and satisfaction patterns were the same, suggesting the stability and representativeness of the results of each study.

  • There was no relationship between response rates, satisfaction patterns, and passenger volume (i.e., how busy the checkpoint was and how long the lines were). These findings led us to conclude that there was no relationship between individuals' motivation to respond and their level of satisfaction, an important result for increasing confidence in the representativeness of the data.

  • Passengers 50 years of age and older responded to the survey at a substantially higher rate than passengers under 50, based on an intensive study of population demographics that we conducted in several studies. Additionally, passengers age 50 and over were slightly more satisfied than passengers under 50. This finding led us to monitor age demographics during distribution periods. Through post-stratification (a weighting sketch follows this list), we found that this differential response did not appreciably affect the overall results (<1 percent). We will continue to monitor age demographics in the future to assess whether conditions change to the point where post-stratification of the results is appropriate.

  • Additional minor demographic effects emerged: leisure travelers were slightly more satisfied than business travelers, infrequent travelers were slightly more satisfied than frequent travelers, and women were slightly more satisfied than men. These effects were all slight and had no noticeable implications for the results based on our analyses. More importantly for evaluating the methodology, none of these effects appeared to correspond to any difference in propensity to return the survey, providing evidence that the sample of respondents is representative with respect to all of these demographics.
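The following sketch illustrates the post-stratification adjustment mentioned in the age bullet above. The group shares and satisfaction means are invented solely for illustration; they are not TSA data:

    # Reweight respondents so age-group shares match assumed population shares.
    population_share = {"under_50": 0.70, "50_plus": 0.30}  # assumed
    sample_share     = {"under_50": 0.55, "50_plus": 0.45}  # assumed over-response by 50+

    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    # Illustrative group satisfaction means:
    group_mean = {"under_50": 0.82, "50_plus": 0.86}
    unweighted = sum(group_mean[g] * sample_share[g] for g in group_mean)
    weighted   = sum(group_mean[g] * sample_share[g] * weights[g] for g in group_mean)
    print(round(unweighted, 3), round(weighted, 3))  # 0.838 vs 0.832: under 1 point apart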


Conclusions


The results from the program over the last three years have validated that TSA can be confident in the representativeness of the results. TSA should, and will, continue to monitor the demographics of the population, to preserve the option of adjusting the results to align the demographics of the sample with those of the population.


We believe that additional research and monitoring are required to continuously assure the validity and utility of the methodology, as discussed in this document: ongoing focus groups to test emerging and changing aspects of the customer experience (such as baggage screening), possible use of a drop-box method to increase response rates and reduce distribution costs, and continued monitoring of survey demographics to assure the representativeness of the sample.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design.


In 2002, TSA consulted with the Bureau of Transportation Statistics to develop this methodology. The following individuals provide continued oversight of the statistical aspects of the design:


Yani Collins, TSA, 571-227-1620

Martin Anker, BearingPoint, 571-227-3088
