Aviation Security Customer Satisfaction Performance Measurement Passenger Survey

OMB Control Number: 1652-0013

  1. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection. Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information. (Annotate the CFR parts/sections affected).


As part of its effort to manage itself as a performance-based, constituent-centric organization, the Transportation Security Administration (TSA) has committed to being attentive and responsive to the experiences of its customers, particularly the flying public. Congress has agreed with TSA on the importance of assessing customer satisfaction. TSA created the Customer Satisfaction Index for Aviation Operations (CSI-A), a succinct measure that incorporates information from several customer-facing performance measures to describe the success of TSA’s aviation security program in providing world-class customer service alongside world-class security. The passenger survey for which we seek continued OMB approval is a key component of the CSI-A.


Among TSA’s directives that necessitate the collection of this information are the following:


  • In its May 19, 2002, “Performance Targets and Action Plan: 180 Day Report to Congress” delivered under the Aviation and Transportation Security Act (Pub. L. 107-71, 115 Stat. 597, Nov. 19, 2001), TSA committed to “* * * collecting information to baseline customer satisfaction as well as perceptions of the quality and courteousness of our security operations.”

  • In the Conference Report associated with H.R. 4775, “Making Supplemental Appropriations for Further Recovery from and Response to Terrorist Attacks on the United States for the Fiscal Year Ending September 30, 2002, and for Other Purposes,” H. Rept. 107-593, Jul. 19, 2002, Congress directed TSA to measure both the “* * * Average wait time at passenger screening checkpoint[s] * * *” and the “* * * Number of complaints per 1,000 passengers * * *” for airports at which security is federalized.

  • In its report “Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture,” GAO-03-190, Jan. 17, 2003, the General Accounting Office praised TSA’s customer-focused performance measurement programs, including the airport survey, and recommended that TSA “Continue to develop and implement mechanisms, such as the customer satisfaction index, to gauge customer satisfaction and improve customer service.”

  • In the 2004 Program Assessment Rating Tool (PART), “Transportation Security Administration: Screener Workforce (10002400),” conducted by the Office of Management and Budget (OMB), one of the annual outcome measures used to gauge the effectiveness of TSA’s Screener Workforce program is the “Level of the Customer Satisfaction Index (CSI-A) for Aviation Operations.” The program will be reviewed periodically, and data for this measure are expected to support TSA’s effectiveness claims.

  • Under the Government Performance and Results Act of 1993 (GPRA), every U.S. Government agency is required to create and maintain a strategic plan that includes performance measures with quantifiable, achievable program targets. In June 2005, OMB Circular No. A-11 required that these strategic plans be reflected in various Congressional reporting documents, such as the Performance and Accountability Report (PAR) and the Congressional Justification (CJ). The CSI-A is reported in each of these documents as a measure mandated by GPRA.


TSA has developed a number of mechanisms to collect information related to passengers’ experiences. Since April 2002, TSA has participated in the Bureau of Transportation Statistics’ (BTS) Omnibus Survey program via several questions on the Household Survey poll. Additionally, TSA collects complaint information at airports and through its TSA Contact Center, and measures customer throughput, wait times, and other data at airports through the Performance Management Information System (PMIS).


This airport survey represents another important part of TSA’s efforts to collect data on customer satisfaction. We propose to continue conducting airport surveys to gauge passenger satisfaction with, and confidence in, TSA’s aviation security procedures. The objective is to capture individuals’ experiences with the passenger security checkpoint and, where applicable, the baggage security checkpoint. TSA has conducted airport surveys at select airports nationwide on an annual basis since September 2003 (OMB No. 2110-0011). The results of those efforts have achieved statistical validity in measuring customer satisfaction and confidence in TSA through the CSI-A, provided key performance data in support of TSA’s goal of providing world-class customer service, and validated the methodology discussed in this document.


Survey content


Appendices A and B provide our proposed survey content. Appendix A provides a list of possible questions for the survey. Drawing on collaboration with headquarters program offices, airport staff, focus groups with passengers (see Section 8), and industry best practices, TSA developed a list of questions that evaluates key performance elements of TSA’s mission delivery, provides managers with tangible prescriptions for performance improvement, and covers the areas of service most relevant to passengers.


As has been done in the past, we intend to ask approximately seven substantive questions and three demographic questions on each survey. All surveys will contain the questions about overall satisfaction (item 1) and overall confidence (item 2). The other five questions will rotate, with the intent of providing individual airports with the most relevant information for their environment, while ensuring that each item is included broadly enough during each evaluation period to compute a statistically valid system-wide score for that item. For example, relevant information for an airport’s environment might concern wait times at airports with historically long wait times, or checked baggage screening at airports whose checked baggage screening processes involve customer interaction; a statistically valid system-wide score requires that each item appear at a minimum of ten airports of varying sizes and geographic regions.


TSA will also conduct statistical analysis of the results to determine how different service areas correlate with overall satisfaction and confidence. We may include different questions during different evaluation periods, depending on customer response and TSA’s changing performance measurement and analytical needs.
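
To illustrate the kind of correlation analysis described above, the sketch below computes item-level Pearson correlations with overall satisfaction. This is a minimal illustration only; the file name and column names are hypothetical assumptions, not TSA's actual data layout.

```python
# Minimal sketch of a driver analysis: how individual service items
# correlate with overall satisfaction. File and column names are
# hypothetical assumptions.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical data extract

items = ["courtesy", "wait_time", "thoroughness"]  # example item scores
for item in items:
    r = responses[item].corr(responses["overall_satisfaction"])  # Pearson r
    print(f"{item}: r = {r:.2f}")
```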


For example, TSA intends to calculate a CSI-A for FY06. During this evaluation period, we intend to conduct statistically valid surveys at approximately 30 airports. In addition to the questions about overall satisfaction and confidence, we will select approximately 10 questions about which we wish to obtain system-wide information during this period. Multiple versions of the survey form will be used, each including approximately five of these rotating questions, so that each question is included on the survey at approximately fifteen airports, selected to be representative system-wide. This way, TSA will have statistically valid system-wide information on approximately fifteen items, while each airport will have statistically valid information on the approximately seven items included on its own survey.
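
The rotation scheme can be made concrete with a short sketch. The following is our illustration only, with hypothetical question and airport names: ten rotating questions spread across thirty airports, with five rotating slots per form, so that each rotating question appears at fifteen airports.

```python
# Sketch of the question-rotation design: 30 airports x 5 rotating slots
# / 10 rotating questions = each question fielded at 15 airports.
import itertools

CORE = ["overall_satisfaction", "overall_confidence"]  # on every form
ROTATING = [f"q{i}" for i in range(1, 11)]             # 10 rotating items
AIRPORTS = [f"airport_{i}" for i in range(1, 31)]      # 30 surveyed airports
SLOTS = 5                                              # rotating slots per form

cycle = itertools.cycle(ROTATING)
forms = {a: CORE + [next(cycle) for _ in range(SLOTS)] for a in AIRPORTS}

# Each rotating question should land at 15 airports.
coverage = {q: sum(q in qs for qs in forms.values()) for q in ROTATING}
print(coverage)  # {'q1': 15, 'q2': 15, ..., 'q10': 15}
```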


For the demographic questions, previous efforts showed that frequency of travel and age range, and, to a lesser extent, purpose of the day’s trip (i.e., business or leisure) and gender, were important demographics. We will include approximately three demographic questions on each survey, so that TSA will obtain data that is statistically valid system-wide on all of the demographic questions.


Appendix B provides an example of the survey form.


Survey methodology


Upon developing the initial methodology for survey distribution, TSA concluded that an intercept survey—in which a random sample of adult passengers is handed a survey form immediately after passing through the passenger security checkpoint and then asked to mail the survey back or deposit it in drop boxes throughout the terminal—was the best method to collect this data, considering the cost and likely reliability of the data. Intercept surveys, which are common in the market research industry, involve trained, professional administrators handing surveys to individuals at a fixed interval as they pass by a certain point. Section 1 of Attachment B discusses our proposed sampling methodology in more detail, and Section 2 discusses our proposed survey distribution methodology.
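
For concreteness, the sketch below shows the fixed-interval selection logic that an intercept survey implies; the interval, the random starting offset, and the passenger stream are our illustrative assumptions (TSA's actual sampling parameters are described in Attachment B).

```python
# Sketch of fixed-interval (systematic) intercept sampling: every k-th
# passenger exiting the checkpoint is handed a survey. Parameters are
# illustrative assumptions, not TSA's actual values.
import random

def intercept_sample(passenger_stream, interval=10):
    """Yield every `interval`-th passenger, starting at a random offset
    so the sample is not anchored to the first person in line."""
    offset = random.randrange(interval)
    for i, passenger in enumerate(passenger_stream):
        if i % interval == offset:
            yield passenger

# Example: roughly 500 intercepts from a stream of 5,000 passengers.
passengers = (f"passenger_{n}" for n in range(5000))
print(len(list(intercept_sample(passengers, interval=10))))  # 500
```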


The pilot test of this methodology, conducted in the fall of 2002, and the annual efforts since September 2003 functioned as planned, and our analysis of the results validates that the intercept survey, using a mail-back methodology, produces statistically valid, useful results. (Section 4 of Attachment B discusses the pilot test in more detail.)


TSA also tested a methodology in which surveys are returned via drop boxes placed in the terminal. The goal of this effort was to determine the effect that this methodology would have on the aggregate response rate. Survey distribution costs are a significant driver of the total cost of the survey effort, so drop boxes could be advantageous if they raised the response rate enough that significantly fewer surveys needed to be distributed. Intercept distribution costs about $10,000 per airport, whereas drop boxes are estimated to cost an additional $5,000 per airport. Additionally, TSA examined the turnaround time between survey distribution and collection to determine whether drop boxes shortened it relative to the mail-back method and whether the data might be more useful for performance measurement and targeted performance improvement efforts.


The results showed that drop boxes could be advantageous in some cases, but not as a whole. The costs of procuring and setting up drop boxes at larger airports with complex layouts were much greater than at smaller airports with simpler layouts. While each of the three tested airports achieved a higher response rate via the drop-box methodology, in most cases the increase was not large enough for the resulting savings in distribution costs to offset the cost of the drop boxes. There was no statistical difference in the results of any of the survey questions for these airports, suggesting that the samples, and the usefulness of the feedback, were comparable.


TSA believes that in some cases drop boxes can be more effective than a typical intercept survey (at smaller airports with simpler layouts, where the response rate is likely to increase substantially) and would use this methodology where appropriate. Some airports may choose to use drop boxes already in place in their terminals; however, TSA is not requiring airports to use drop boxes.


Informal surveys by airports


The airport surveys have been used to compute a statistically valid CSI-A system-wide and for individual airports. These surveys are managed by TSA Headquarters using the rigorous intercept methodology described in this document. In addition to these formal, rigorous surveys, we also seek continued approval for TSA Customer Service Managers at individual airports to conduct their own smaller-scale, less formal surveys at their discretion. Customer Service Managers have requested this capability, usually to test service improvements that they have implemented. Other uses of this survey are (1) to support industrial-engineering studies (e.g., by conducting a survey with several questions about passenger wait and service times to evaluate a change in the checkpoint configuration), or (2) to evaluate process changes (e.g., to evaluate response to a localized media campaign or other public-relations effort, or to a reduction in staffing at an airport).


For the informal surveys, airports draw from the same superset of questions provided in Appendix A, are given guidance from Headquarters about sampling and survey distribution, and are given limits on the individual and cumulative burden they may impose on passengers each year. Although the results of these surveys are not tabulated or published in any formal way by TSA Headquarters, they are very useful to individual airports in measuring their own customer service.


  2. Indicate how, by whom, and for what purpose the information is to be used. Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.


TSA uses the survey results to measure its performance in providing customer service through its aviation security functions. TSA computes a CSI-A intended to be representative of the TSA system, and also provides each surveyed airport with an airport-level CSI-A each year. TSA uses the annual CSI-A as a key performance measure included in our GPRA Performance Plan, reports to Congress, and other media. This is described in detail under Item 1 above.


TSA also uses the results to assess its performance on various components of customer satisfaction and confidence. In particular, it measures passenger perceptions of the courtesy and professionalism of screeners, wait times, thoroughness of screening, and overall satisfaction. The results for these aspects are examined at the aggregate and airport levels, and across time. TSA identifies factors and best practices contributing to higher scores and assesses ways to incorporate them into its policies and procedures. In some cases, this takes the form of improving Transportation Security Officer (TSO) training to include customer service themes. In other cases, it may lead to procedural changes that enhance customer service without detriment to TSA’s primary security focus. In some cases, results for particular demographic groups or different airports vary from expected results; TSA then assesses the root causes of these variations and takes appropriate measures with the airport or group within TSA responsible for handling those issues.


Over time, TSA has been able to use the results to assess the impacts of organizational changes. For example, TSA has responded to Congressional mandates to reduce the number of TSO full-time equivalents (FTEs) each year since 2004, and the survey results have allowed TSA to assess the impact of these changes on customer satisfaction and confidence. The results have also allowed TSA to evaluate the effect of policy and procedural changes. In December 2005, TSA implemented more thorough procedures for secondary screenings; the survey results, along with analysis of other related data, allowed TSA to evaluate customers’ perceptions of these procedural changes and modify their implementation to increase customer satisfaction while maintaining security. The results have also helped TSA measure the effectiveness of particular programs at reaching the customer base. For example, the survey results showed that airport signage is more widely recognized as an effective means of communicating critical information to passengers than other means, such as public service announcements or printed informational pamphlets.


The results of this program were also used to evaluate the effectiveness of the Security Screening Pilot Program (PP5) airports as compared with airports with a federalized screener workforce. Each of the PP5 airports was surveyed to generate customer satisfaction and confidence data for those airports. These results were then used as the basis for the examination of customer service, one of the three major areas used in the evaluation. The results demonstrated the success of the PP5 airports in providing customer service comparable to that of airports with federalized screeners. This finding, along with similar findings in the areas of security and efficiency, demonstrated the program’s success and allowed the PP5 program to be transformed into the Screening Partnership Program (SPP). All airports using TSA security can now apply to implement a privatized screener workforce under TSA management through the SPP. Each year these airports are included in the survey program to measure their customer service performance and ensure that they meet customer satisfaction and confidence standards.


The Office of Strategic Communications and Public Affairs has issued press releases to communicate the results of the program to the public. On March 3, 2005, the Department of Homeland Security and the Transportation Security Administration issued a press release entitled “Air Travelers Continue to Express High Confidence and Satisfaction In TSA Security and Customer Service.” The accompanying article described the program, methodology, results, and insights gained, and provided question-by-question and airport-level scores. In addition, several airports participating in the program issued their own local press releases describing the program and the results seen at their airports.


  3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection. Also describe any consideration of using information technology to reduce burden. [Effective 03/22/01, your response must SPECIFICALLY reference the Government Paperwork Elimination Act (GPEA), which addresses electronic filing and recordkeeping, and what you are doing to adhere to it. You must explain how you will provide a fully electronic reporting option by October 2003, or an explanation of why this is not practicable.]


The nature of this data is not suited to electronic collection. TSA believes that it is important to capture customers’ perceptions as soon after they experience TSA’s service delivery as possible; hence, in-person data collection at airports is the most appropriate method. Although the Government Paperwork Elimination Act is not directly implicated in this collection, TSA does provide phone, e-mail, and Internet capability for passengers to submit comments or questions to the agency. The contact information is printed on the survey.


  4. Describe efforts to identify duplication. Show specifically why any similar information already available cannot be used or modified for use for the purpose(s) described in Item 2 above.


This survey is designed to gather data about a relatively new governmental function; no data of this type currently exists. Some airport administrations (either local Government or private entities) conduct customer surveys at airports; at each site we plan our surveys (both the formal, Headquarters-initiated survey and the informal surveys conducted by airport staff) to be non-duplicative and non-burdensome to passengers. Through the efforts of individual Federal Security Directors, we share data with airport administrations conducting their own surveys to the fullest extent possible and seek to include questions on their instruments to reduce the overall public burden.


  5. If the collection of information has a significant impact on a substantial number of small businesses or other small entities (Item 5 of the Paperwork Reduction Act submission form), describe the methods used to minimize burden.


The proposed survey has no impact on small businesses.


  6. Describe the consequence to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles to reducing burden.


This data collection is essential for TSA to understand its impact on the flying public and to respond by improving service, reducing burden on travelers, and improving communication. Given the Congressional mandates to collect this data, it is crucial to TSA’s mission delivery. Moreover, GAO and OMB have concurred with TSA on the importance of this element of our performance measurement system. The results from the CSI survey are statistically significant, can be used to draw conclusions about the traveling population as a whole, and are used for annual performance measurement at the surveyed airports as well as system-wide. This differs from the TSA customer comment card, a vehicle for gathering daily, customer-initiated feedback at individual airports from passengers who approach TSA personnel to lodge complaints and compliments.


  7. Explain any special circumstances that require the collection to be conducted in a manner inconsistent with the general information collection guidelines in 5 CFR 1320.5(d)(2).


These provisions do not apply, and the proposed data collection is minimally burdensome to the public. Customers are asked to fill out their surveys soon after receiving them, and they are given approximately three weeks to return their forms. TSA’s efforts, as well as other industry surveys of this type, have shown that a period of approximately two to three weeks is sufficient to capture most of the responses. Moreover, we seek passengers’ opinions as soon after they experience the service as possible, to minimize the risk that long lag times between their experiences and completion of the survey make the results less reliable.


  8. Describe efforts to consult persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency's notice, required by 5 CFR 1320.8(d) soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and describe actions taken by the agency in response to these comments. Specifically address comments received on cost and hour burden.


TSA published a Notice in the Federal Register (71 FR 13990, March 17, 2006) announcing its intent to conduct this survey. To TSA’s knowledge, no public comments have been received in response to the notice.


TSA collaborated with experts familiar with intercept survey techniques to develop the methodology for this survey. TSA has engaged BearingPoint, one of the largest management consultancies serving Federal agencies, to support its performance measurement efforts since the agency’s inception. BearingPoint helped define the CSI-A and, for the survey design and administration, partnered with market research companies experienced in the travel industry and in performing similar intercept surveys at airports. Over the years, the airport survey methodology has been validated against surveys conducted by airlines, with a high correlation in responses. BearingPoint advised TSA on the program’s pilot test in the fall of 2002, which proceeded essentially as planned and provided some additional lessons for TSA, and on each effort since. Details about and rationale for our sampling and survey distribution methodology are provided in Attachment B.


TSA also collaborated with the Bureau of Transportation Statistics on the survey design.


As part of this collection request but not part of the survey process, we propose to conduct up to 12 focus groups this fiscal year, and additional focus groups as needed in future fiscal years. The goal of these efforts would be to identify the major factors contributing to the customer experience as the memories of 9/11 begin to fade and as TSA, its policies, and its programs continue to evolve to meet the ever-changing security needs of our nation.


  9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.


TSA will not provide any payment or gift to survey respondents. The response rate that the program has experienced has been much higher than the industry standard (20 percent), and we hypothesize that passengers are more interested in and willing to offer feedback to TSA than to other programs in the travel industry. TSA will continue to monitor the response rate to ensure that this trend continues. TSA will also study the possibility of offering an incentive, drawing on the academic literature and on focus groups with passengers, but we do not anticipate that an incentive will be necessary to obtain a significant response rate, nor that it would be cost-effective as a means of increasing the response rate.



  10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.


Survey forms themselves will be anonymous—i.e., they will not solicit specific identifying information. Thus, by design, the survey will ensure confidentiality through anonymity. However, no assurances of confidentiality will be provided to any respondent.


  11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private.


We propose to ask no such questions.


  12. Provide estimates of hour burden of the collection of information.


TSA is responsible for security screening at the over 400 federalized airports nationwide. Approximately 80 of those airports are defined by the Federal Aviation Administration and TSA as major, generally serving 1,000,000 passengers or more annually. The majority of the 25 federalized airports that are surveyed annually in the formal survey are major airports. The five airports participating in the Screening Partnership Program (privatized screening) are also surveyed annually.


At each surveyed airport, we seek approximately 500 survey responses to obtain a statistically valid CSI-A. Thus our maximum possible annual volume for the formal survey would be 30,000 surveys (two annual surveys at each airport).
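
As a back-of-the-envelope check (our illustration, not a statement of TSA's sampling derivation), a target of roughly 500 responses per airport is consistent with the standard sample-size formula for estimating a proportion at 95 percent confidence with a ±5 percent margin of error:

```python
# Worst-case (p = 0.5) sample size for a proportion:
# n = z^2 * p * (1 - p) / e^2. Illustrative check only.
import math

def required_n(margin_of_error, z=1.96, p=0.5):
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(required_n(0.05))  # 385; a ~500-response target leaves headroom
                         # for subgroup breakouts and item rotation
```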


All airports will have the capability to conduct the informal surveys, and we estimate that 25-30 airports will conduct such a survey each year.


We estimate the burden of this data collection, based on the pilot test and our own research, to be five minutes per respondent to fill out and mail the survey. We assume the burden on passengers who choose not to respond to be zero. For the informal surveys conducted by TSA airport staff, the same five-minute burden per respondent will apply (60 minutes / 5 minutes = 12 respondents per hour), and we will limit each airport to a maximum of 50 cumulative burden hours, or 600 respondents, per year (12 respondents/hr x 50 hrs = 600 respondents). We estimate that the airports participating in this survey will impose an average of 25 cumulative burden hours on the public per year (12 respondents/hr x 25 hrs = 300 respondents), or two surveys of 150 respondents each.


The following tables summarize these estimated cumulative burdens for the formal CSI-A survey, focus groups, and the informal survey:


Formal CSI-A survey

Scenario                     | # of airports surveyed | # of respondents per airport | Burden minutes per respondent | Total burden hours
-----------------------------|------------------------|------------------------------|-------------------------------|-------------------
Annual expected (FY06-FY07)  | 30                     | 500                          | 5                             | 1,200
Annual maximum (FY06-FY07)   | 70                     | 500                          | 5                             | 2,900


Focus Groups

Scenario                     | # of focus groups | # of participants per group | Burden minutes per respondent | Total burden hours
-----------------------------|-------------------|-----------------------------|-------------------------------|-------------------
Annual expected (FY06-FY07)  | 12                | 12                          | 90                            | 216
Annual maximum (FY06-FY07)   | 12                | 12                          | 90                            | 216


Informal survey

Scenario                     | # of airports surveyed | # of respondents per airport | Burden minutes per respondent | Total burden hours
-----------------------------|------------------------|------------------------------|-------------------------------|-------------------
Annual expected (FY06-FY07)  | 25                     | 200                          | 5                             | 417
Annual maximum (FY06-FY07)   | 446                    | 200                          | 5                             | 7,400


Thus, the maximum total annual number of respondents is approximately 124,000 (formal: 70 x 500 = 35,000; focus groups: 12 x 12 = 144; informal: 446 x 200 = 89,200; total: 124,344), and the maximum total annual cumulative burden is approximately 10,500 hours (2,900 + 216 + 7,400 = 10,516). The component figures are derived as follows: formal, 70 x 500 x 5 = 175,000 minutes / 60 = 2,916.7, rounded to 2,900 hours; focus groups, 12 x 12 x 90 = 12,960 minutes / 60 = 216 hours; informal, 446 x 200 x 5 = 446,000 minutes / 60 = 7,433.3, rounded to 7,400 hours.
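
The arithmetic above can be reproduced with a short sketch (our illustration only; the inputs are the figures given in this section):

```python
# Reproduces the maximum-burden arithmetic above.
def burden_hours(units, respondents_per_unit, minutes_per_response):
    return units * respondents_per_unit * minutes_per_response / 60

formal_max   = burden_hours(70, 500, 5)   # 2,916.7 -> rounded to 2,900
focus_groups = burden_hours(12, 12, 90)   # 216.0
informal_max = burden_hours(446, 200, 5)  # 7,433.3 -> rounded to 7,400

respondents = 70 * 500 + 12 * 12 + 446 * 200  # 124,344
print(respondents, formal_max + focus_groups + informal_max)
# 124344 10566.0 unrounded; summing the rounded figures gives 10,516
```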


Note: The burden hours for the informal surveys were overstated in the March 17, 2006, Federal Register notice due to a calculation error. The correct maximum total number of burden hours is approximately 10,500 rather than 44,600 hours as reported in the Federal Register notice.


  13. Provide an estimate of the total annual cost burden to respondents or recordkeepers resulting from the collection of information.


Respondents will incur no direct cost resulting from this data collection.


  14. Provide estimates of annualized cost to the Federal Government. Also, provide a description of the method used to estimate cost, and other expenses that would not have been incurred without this collection of information.


We estimate the Federal Government cost for this data collection to be approximately $1 million annually. These costs include all direct costs of the survey, costs for research and development (such as focus groups and pilot tests), and costs for contractor and technology support to manage the data collection and to produce and analyze the CSI-A measures.


  15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-I.


The hour burden has changed from the last PRA submission because of a program change. The number of respondents for the airport-initiated surveys has dropped from 300 to 200 per airport; based on our experience, TSA officials at most airports have the time and resources to gather only 100-200 responses per survey. The annual maximum number of airports surveyed for the formal survey has declined from 70 to 60; based on previous experience, the greatest number of airports we have been able to survey in one fiscal year is 30, so the former maximum of 70 was too generous.


  16. For collections of information whose results will be published, outline plans for tabulation and publication. Address any complex analytical techniques that will be used. Provide the time schedule for the entire project, including beginning and ending dates of the collection of information, completion of report, publication dates, and other actions.


The primary purpose of this data collection is to produce TSA’s annual CSI-A measures. For FY06, we intend to collect data by September 30 to include in our annual performance reporting by November 30. In subsequent years, we will also report annually on the CSI-A. CSI-A annual reports will include tabulations of the results of all questions by airport and system-wide.


We also anticipate using this data for myriad additional reporting purposes to Congress, OMB, and other Federal agencies. Public and governmental interest in TSA’s performance in providing excellent customer service is high, and the survey results will be of great interest to many parties.


  17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.


TSA is not seeking approval to withhold display of the expiration date.



  18. Explain each exception to the certification statement identified in Item 19, “Certification for Paperwork Reduction Act Submissions,” of OMB Form 83-I.


TSA seeks no exceptions to the certification statement.

