1652-0013 Passback Response

Aviation Security Customer Satisfaction Performance Measurement Passenger Survey

OMB: 1652-0013

Response to OMB Questions about the Aviation Security Customer Satisfaction Performance Measurement Passenger Survey (1652-0013)



It wasn’t clear if TSA was intending this clearance to be a generic clearance (though it was not submitted this way) because the specific surveys and airports have not all been identified in this collection, and it appears that TSA wishes to choose items for different airports based on a “superset” of items. Please clarify.


TSA does not intend for this clearance to be generic, but rather is seeking one clearance for specific customer satisfaction airport-intercept surveys for the agency. TSA is seeking approval for all questions specified in the OMB submission. Once cleared, TSA will use a set of these questions for the formal surveys to support the Customer Satisfaction Index for Aviation Operations. TSA develops the questions to be used and identifies the participating airports during the initial planning and outreach phases of the program. TSA randomly selects airports to be surveyed for the nationwide effort during the planning phases based on their originating enplanements. This selection usually consists of 25-30 airports out of the more than 450 airports at which TSA provides security. Various TSA offices review the airport selection to ensure proper representativeness.


For the informal surveys conducted by airports, TSA customer stakeholder managers and local airport staff select the questions from those approved by OMB that are most appropriate for their airport, operations, and initiatives. Any airport can participate, but participation is not required, and only a fraction of airports (an estimated 25) typically do so each year.


Please provide more detailed reports or information on the research studies that are cited in the supporting statement on methodological issues such as use of drop boxes (sec. A.1), validity relative to other surveys (A.8), comparisons of response rates and response across checkpoint/time strata (B.3), and demographic characteristics of respondents and effect on satisfaction (B.4).


Three airports piloted the drop-box methodology in the FY03-04 CSI-A effort: Austin-Bergstrom (AUS), Southwest Florida (RSW), and Providence (PVD). TSA broke distribution at those airports into three distinct weeks according to the method of submission. In week 1, the survey could only be returned by mail. In week 2, the survey could either be returned by mail or deposited in one of the clearly marked TSA drop-boxes placed throughout the terminal. In week 3, the survey could only be deposited in a TSA drop-box. Response rates were 34% when only the mail-back option was available; 33% for drop-box and 9% for mail-back when both options were available; and 60% when only the drop-box option was available. While the response rate is significantly higher with the drop-box methodology, the program still seeks to distribute surveys during a minimum number of shifts at each airport, and at most airports the full benefit of the increased response rate could not be realized. In addition, the costs, effort, and logistical issues associated with setting up and removing the drop-boxes at each airport made the solution impractical for all airports. In certain situations drop-boxes may be better than a mail-back survey: for instance, when a simple terminal layout requires only a minimal number of drop-boxes, when passenger volumes are such that the number of shifts for distribution would be significantly reduced, or when an airport plans to continue surveying and may leave the drop-boxes in place throughout the terminal.


TSA has validated the survey results from these efforts against results from the Bureau of Transportation Statistics Household Omnibus Poll. The two efforts share a set of questions and answer choices, and for those common questions the results have been within 3 percentage points of one another. Furthermore, the trends seen in the demographics have been vetted with airline industry experts, who have concurred with the validity of those results. For example, the industry has informed TSA that more frequent travelers and business travelers commonly have lower satisfaction levels than infrequent leisure travelers, a trend consistent with our results.


During this program, TSA tracks the distribution activity for each of the selected shifts. Generally, the response rate in any given shift varies only slightly within an airport. If anomalies appear among these data, other data sources, or expectations, the execution of the program is reviewed as appropriate. No trends in the response rates have been identified that suggest the sample misrepresents the population. Below are the distribution and response rates for Portland (PDX) airport for the FY03-04 effort.

Airport | Date       | Checkpoint | Time Start | Time End | ID - Start | ID - End | Distributed | Returns | Response Rate
PDX     | 10/27/2003 | D/E        | 1000       | 1530     | 166116     | 166005   | 111         | 21      | 19%
PDX     | 10/29/2003 | A/B/C      | 1000       | 1530     | 166316     | 166117   | 199         | 54      | 27%
PDX     | 10/29/2003 | D/E        | 1530       | 2100     | 167600     | 167501   | 99          | 21      | 21%
PDX     | 10/31/2003 | A/B/C      | 1530       | 2100     | 169311     | 169001   | 310         | 52      | 17%
PDX     | 11/1/2003  | A/B/C      | 430        | 1000     | 169700     | 169312   | 388         | 71      | 18%
PDX     | 11/3/2003  | A/B/C      | 1000       | 1530     | 167700     | 167601   | 99          | 19      | 19%
PDX     | 11/3/2003  | A/B/C      | 1000       | 1530     | 170222     | 169962   | 260         | 65      | 25%
PDX     | 11/3/2003  | A/B/C      | 1530       | 2100     | 167800     | 167701   | 99          | 24      | 24%
PDX     | 11/3/2003  | A/B/C      | 1530       | 2100     | 167939     | 167901   | 38          | 9       | 24%
PDX     | 11/3/2003  | D/E        | 430        | 1000     | 169961     | 169701   | 260         | 59      | 23%
PDX     | 11/4/2003  | A/B/C      | 1420       | 1335     | 170455     | 170416   | 39          | 13      | 33%
PDX     | 11/4/2003  | D/E        | 1000       | 1420     | 170415     | 170223   | 192         | 38      | 20%
PDX     | 11/5/2003  | A/B/C      | 430        | 1000     | 166442     | 166317   | 125         | 28      | 22%
PDX     | 11/6/2003  | A/B/C      | 1530       | 2100     | 167900     | 167801   | 99          | 15      | 15%
PDX     | 11/6/2003  | A/B/C      | 1530       | 2100     | 168147     | 167940   | 207         | 31      | 15%
PDX     | 11/8/2003  | A/B/C      | 430        | 1000     | 166900     | 166443   | 457         | 105     | 23%
PDX     | 11/8/2003  | A/B/C      | 430        | 1000     | 170500     | 170456   | 44          | 12      | 27%
PDX     | 11/8/2003  | A/B/C      | 1530       | 2100     | 168270     | 168148   | 122         | 27      | 22%

Below are the demographics of the survey sample from the FY04-05 effort.


CSI: Airport-Intercept Surveys - Demographics of Respondents

Gender
  Male: 50.4%   Female: 49.6%

Age range
  Under 21: 2%   22-29: 11%   30-49: 41%   50-69: 40%   70 or older: 6%

Number of round trips taken by commercial airline in the past year
  1-2: 20%   3-5: 34%   6-9: 17%   10-19: 14%   20 or more: 16%

Number of airports screened at in the past year
  1-2: 15%   3-5: 33%   6-9: 24%   10-19: 16%   20 or more: 12%

In more detailed analyses, TSA has explored the satisfaction levels of respondents within these categories. For example, during the FY04-05 effort men had an overall satisfaction level of 78 and women of 82.


It was not clear whether TSA intends the sample of 30 airports to be nationally representative and whether the results are generalizable to all U.S. airports. Please provide more information on how the airports are chosen. Are national estimates produced, or are only airport-specific satisfaction ratings produced from this survey? If so, please provide the precision level for overall estimates and any subgroup estimates. Please provide a copy of the results report from last year.


TSA intends for the sample of airports to be nationally representative of what the typical passenger experiences and for the results to be generalizable to all U.S. airports. All TSA airports are eligible for selection; each airport is assigned a random number that is then factored by the originating enplanements of that airport, and TSA selects the airports with the highest X resulting values, where X is the number of airports to participate. Under this methodology larger airports are more likely to be selected than smaller airports because of their originating enplanements, while the element of randomness ensures that all airports have a chance of being represented. TSA and its partners calculate both national estimates and airport-specific ratings. At the airport level, 400 returns allow for a margin of error of roughly five percentage points at a 95% confidence level, and participating airports typically achieve this number of returns. Aggregating the results from all of the airports produces the nationwide scores, which incorporate over 10,000 responses and allow for a margin of error of less than one percentage point. The results of this effort can be found on the TSA website (http://www.tsa.gov/press/releases/2005/press_release_0571.shtm). TSA has also provided the results in a separate attachment.
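For illustration, the minimal sketch below implements one plausible reading of the enplanement-weighted selection and the standard worst-case precision calculation behind the figures cited above. It is not TSA's actual tooling: the function names are ours, the way the random number is "factored" by enplanements is assumed to be a simple multiplication, and the enplanement counts are made-up placeholders.

```python
import math
import random

def select_airports(enplanements, n_select, seed=None):
    """Assumed reading of the selection rule: each airport draws a uniform
    random number, the draw is scaled by its originating enplanements, and
    the airports with the highest resulting values are selected."""
    rng = random.Random(seed)
    scores = {code: rng.random() * volume for code, volume in enplanements.items()}
    return sorted(scores, key=scores.get, reverse=True)[:n_select]

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error for a proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical originating-enplanement counts (placeholder values, not real data).
enplanements = {"BOS": 12_000_000, "PDX": 6_500_000, "JAC": 300_000, "TUP": 25_000}
print(select_airports(enplanements, n_select=2, seed=0))
print(f"{margin_of_error(400):.1%}")     # ~4.9%, consistent with the ~5% cited for 400 returns
print(f"{margin_of_error(10_000):.1%}")  # ~1.0% for a nationwide aggregate of 10,000+ responses
```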


It is not clear why you are not stratifying the sample by location (e.g., pier) and by day and time. It was noted that a PPS sample was not possible because data on passenger volume by time of day do not exist (B.1); however, B.3 notes that the passenger volume and wait time during the shift are available from tally sheets. Please explain. Even if you cannot use PPS sampling, a stratified sample would still provide better representation of days, time periods, and locations than the current SRS approach.


We are not stratifying the sample by location primarily for ease of training administrators and for simplicity and consistency in the program design. Stratifying the sample would require administrators to work shifts of different lengths, which could make accurately scheduling a shift difficult, and to apply different passenger intervals at each checkpoint location, which could result in confusion and inconsistent application. Some data showing passenger throughput are available; however, passenger loads vary from day to day and week to week, and it would be difficult to predict the correct stratification criteria accurately for scheduling purposes. With the randomized method, the length of the shifts remains constant, the procedures for the administrators at each airport remain constant, and the sample of passengers that receive surveys is representative of that airport's entire population.
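For illustration only, the following sketch shows what a simple (non-stratified) random draw of fixed-length survey shifts could look like. The days, time blocks, checkpoints, and shift count are invented examples, not the program's actual sampling frame.

```python
import itertools
import random

# Illustrative frame of fixed-length shifts: every (day, time block, checkpoint)
# combination is equally likely to be drawn, with no stratification by location
# or time of day. Values below are examples only.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
time_blocks = ["0430-1000", "1000-1530", "1530-2100"]   # constant shift length
checkpoints = ["A/B/C", "D/E"]

frame = list(itertools.product(days, time_blocks, checkpoints))
rng = random.Random(0)
for shift in rng.sample(frame, k=6):      # e.g., six survey shifts at one airport
    print(shift)
```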


It is not clear why you are instructing interviewers to vary from systematic selection of passengers, such as distributing the survey to the next passenger after a refusal.


TSA instructs interviewers to follow the systematic selection of passengers as much as possible. When a situation arises in which the interviewer cannot keep to that selection process (e.g., a passenger refusal), TSA has developed contingency plans to keep the distribution as close as possible to the overall goal of handing a survey to a passenger at a set interval. In the case of a refusal, the next passenger is offered the survey, which minimizes the effect a refusal could have on the number of surveys distributed per passenger.
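A minimal sketch of that distribution rule follows, under the assumption (not stated explicitly above) that after a refusal is passed to the next passenger, counting resumes from the original schedule; the function and parameter names are illustrative.

```python
def distribute(passenger_count, interval, refusals):
    """Offer the survey to every `interval`-th passenger; if that passenger
    refuses, offer it to the next passenger, then resume the original schedule.
    `refusals` is a set of (0-based) passenger indices who decline."""
    recipients = []
    scheduled = interval - 1                    # first scheduled passenger
    while scheduled < passenger_count:
        offer = scheduled
        while offer in refusals and offer + 1 < passenger_count:
            offer += 1                          # contingency: try the next passenger
        if offer not in refusals:
            recipients.append(offer)
        scheduled += interval                   # keep the set passenger interval
    return recipients

# Example: every 5th passenger out of 30, with passengers 4 and 14 refusing.
print(distribute(30, interval=5, refusals={4, 14}))   # -> [5, 9, 15, 19, 24, 29]
```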


Please provide the achieved response rates for each airport surveyed in 2004 and 2005.


FY03-04 Response Rates


ALB 34%

AUS 25%

BDL 65%

BHM 23%

BNA 24%

BOI 26%

BOS 32%

BWI 28%

CMH 35%

DTW 21%

GEG 22%

HNL 29%

IAH 14%

JAC 71%

JAX 25%

JFK 20%

MCI 34%

MDW 21%

MIA 18%

MSP 28%

OMA 31%

PDX 21%

PHX 21%

PVD 41%

ROC 28%

RSW 44%

SAV 34%

SFO 22%

SJC 25%

SLC 38%

TUL 26%

TUP 90%

TUS 27%


FY04-05 Response Rates


ABQ 13%

AMA 28%

ATL 20%

BGR 88%

CLT 19%

CVG 18%

EKO 62%

FAI 48%

FWA 22%

IAD 16%

IDA 25%

JAC 77%

LAX 15%

MCI 39%

MCO 70%

MDW 28%

MGW 89%

ORD 24%

PIT 20%

ROC 76%

SEA 17%

SFO 17%

STL 24%

TUP 96%

DAB 96%

DAY 26%

DEN 20%

JAX 22%

JFK 17%

LAS 15%

PDX 19%

PHL 17%

PHX 15%





Please provide a plan for how the survey questions will be distributed across the 30 airport surveys. Any questions not included in this ICR will need OMB approval.


When beginning the effort, TSA will conduct outreach among TSA offices to identify the questions of interest. Based upon the feedback received, the survey program management will select a set of questions from the superset to be included on the survey. Next, program management will design several versions of the survey incorporating the questions of greatest interest to TSA. After TSA has selected the airports to be surveyed, TSA will determine which version(s) are most appropriate for each airport based on criteria such as the following: whether there are customer-facing baggage screening operations, whether operations typically have long waits at the passenger or baggage checkpoint, and whether the airport has any other unique circumstances that should play a role in the survey version used.


What percentage of respondents indicate the “don’t know” response option? Is any imputation done on missing items?


Generally, about 0.0–0.5% of respondents select the “don’t know” option for any given question. Because this proportion is relatively small, no imputation has been performed on these items.


Item #24: What testing was done for this item? It would seem likely that “queuing” is a term many respondents may not use or understand.


Yes. These questions were reviewed by the Bureau of Transportation Statistics and tested with focus groups during the FY03-04 effort. The term is used to distinguish the line the passenger waits in for security screening from other lines (ticket checkers, airlines, etc.).


Please provide protocols for the focus groups.


Focus groups were previously conducted in the fall of 2003 under this clearance. While TSA does not intend to conduct focus groups continually, it would like the option of holding them as the agency evolves and customer attitudes change. A discussion guide is included.


The key goals of this research are to explore travelers’ screening experience – that is, their perceptions of “what goes on” during the time spent in the airport, focused especially on their opinions of the security procedures, equipment, and personnel. More specifically:

  • To identify the drivers of customer satisfaction with the necessary security processes at airports;

  • To evaluate the confidence levels (that air travelers are protected from untoward acts) evoked by the security screenings; and

  • To explore the checked baggage experience.


In addition, the focus groups may cover other topics, such as new programs, policies, and equipment, to gauge respondent reactions. The groups also discuss survey methodology, questionnaire wording, and the impact of incentives.


  • A professional, independent focus group facilitator moderates the sessions.

  • The sessions last 60-90 minutes.

  • Participants are selected to be diverse with respect to age, gender, etc.


Focus groups help reveal passengers’ perceptions and expectations, helping TSA understand what aspects of the customer experience drive public confidence and customer satisfaction.


TSA uses the findings to develop measurement tools to gather quantitative data about the customer experience – for example, survey questions, poll questions, and complaint categories.


Focus group findings are validated via statistical analysis of survey results, so that TSA can gain a considerable understanding of how different aspects of the customer experience influence satisfaction and confidence.


Question 15 of the supporting statement states, “The annual maximum number of airports surveyed for the formal survey has declined from 70 to 60.” However, the table in response to question 12 shows an “annual maximum” of 70 airports.


No more than 36 airports have been formally surveyed in any given year, and the annual maximum will not exceed 60. This reduces the burden by approximately 400 hours for the formal CSI-A survey.

This ICR should have 3 separate ICs (Formal CSI-A survey, Focus groups, Informal Survey). The disaggregated numbers shown in the table in response to question 12 illustrate how you should break down the burden for the individual ICs.


This has been completed. Please see the amended ICR in ROCIS.
