NATIONAL AERONAUTICS AND SPACE ADMINISTRATION
COMMERCIAL SUPERSONIC TECHNOLOGY PROJECT,
LANGLEY RESEARCH CENTER
TITLE: Community Response to Low-Amplitude Sonic Boom Testing
TYPE OF INFORMATION COLLECTION: New information collection.
Part A. Justification
1. Circumstances that make collection of information necessary.
NASA's Commercial Supersonic Technology Project is constructing a Low Boom Flight Demonstration (“LBFD”) Quiet Supersonic Transport (“QueSST”) experimental aircraft (“X-plane”). The X-plane supports NASA Aeronautics Research Mission Directorate’s Mega-Driver 1: Global growth in demand for high-speed mobility and Strategic Thrust 2: Innovation in Commercial Supersonic Aircraft (NASA Aeronautics Strategic Implementation Plan, March 2017). Sonic booms created by the new X-plane are expected to be of lower amplitude, and hence, considerably less annoying than those of prior commercial supersonic aircraft. Flight testing of the LBFD aircraft is intended 1) to demonstrate and validate the technology necessary for civil supersonic flight that creates low-amplitude sonic booms, and 2) to determine community response to sonic booms of lesser loudness.
If public response to quieter sonic booms proves to be as favorable as anticipated, the U.S. Federal Aviation Administration and international aircraft noise regulatory bodies such as the International Civil Aviation Organization may wish to reconsider their current prohibitions on overland supersonic overflights (cf. CFR Title 14, Chapter I, Subchapter F, Part 91, Subpart I, Section 91.817). The current data collection would yield the information needed to design studies that will provide the documentation decision makers will require for this purpose.
A large scale social survey conducted in multiple locations will ultimately be required to measure the annoyance associated with exposure to the LBFD’s low-amplitude sonic booms. The current study, however, is intended only to pre-test methods suitable for collecting information about prompt public reactions to low-amplitude sonic booms, prior to the start of flight testing of the X-plane. No public exposure to any form of sonic boom will be created during the present testing.
A high altitude overflight by the QueSST X-plane will produce an unfamiliar, very short duration, low-amplitude shock wave, at unpredictable and infrequent intervals. Individual sounds of this sort could occasionally be noticed in any community nationwide. Present interviewing methods, developed for administration in airport environs, are intended to assess public response to cumulative noise exposure produced by repetitive, familiar, predictable, long duration, and high level aircraft noise events. These methods are not useful for assessing public response to occasional, unanticipated individual low-amplitude sonic booms. In particular, conventional aircraft noise interviewing techniques are not capable of quantifying prompt reactions to sonic booms – especially startle and short-term, single event annoyance.
Research conducted decades ago on the annoyance of sonic booms produced by Concorde and by military aircraft is of little relevance for present purposes, because those sonic booms were of much greater amplitude than those of current interest. NASA has previously conducted pilot studies of potential data collection methods for gauging response to low-amplitude sonic booms at Edwards Air Force Base (e.g., Fidell et al., 2012, and Page et al., 2014). The small numbers of incentivized and self-selected participants in these studies did not constitute a representative sample of the national population, however, and produced only cell phone-based self-reports rather than systematically solicited information. It is therefore essential to pre-test advanced methods for rapidly interviewing large numbers of people during LBFD test flights, to be certain that such methods can reliably produce credible information about prompt reactions to low-amplitude sonic booms.
2. How, by whom, and for what purpose is the information to be used.
The National Aeronautics and Space Administration (NASA) Commercial Supersonic Technology Project will collect information about community response to the novel noise associated with low-amplitude sonic booms.
No prior large-scale data collection of information about community response to low-amplitude sonic booms has been made by NASA, nor by any other agency. NASA will use the findings of the present data collection for research design purposes: that is, to evaluate the efficacy of alternate methods for assessing community response to low-amplitude sonic booms. It is expected that preferred methods of telephone interviewing identified in the pre-test will eventually be used to conduct social surveys of community response to actual low-amplitude sonic booms created by the LBFD aircraft.
The primary goal of the pre-test is to quantify interview completion rates achievable by outbound interactive voice response (“IVR,” or automated) interviews, versus computer-assisted telephone interviewing (“CATI,” or live agent) interviews. On a per-contact attempt basis, IVR interviewing is far more cost-effective than CATI interviewing, but generally yields lower interview completion rates than CATI interviewing. Thus, larger sampling frames and many more contact attempts are required for IVR than for CATI interviews. Further, an unacceptably low interview completion rate of IVR contact attempts could raise doubts about the representativeness of information collected about prompt reactions to low-amplitude sonic booms.
The results of the pilot test will help NASA to determine whether independent samples (each respondent interviewed once, without self-selection bias, and without calling prior attention to the occurrence of a sonic boom) will suffice. If not, the alternative, panel samples (each respondent offered an incentive to participate in a longitudinal study, and subsequently interviewed repeatedly about reactions to multiple sonic booms), may be required. NASA requires this methodological information prior to the start of any field tests involving the new X-plane.
3. Extent of automated information collection.
Data collection for present purposes must be conducted by telephone, since no other method of interviewing (face-to-face, postal, or Web-based) is capable of synchronizing interviews about prompt responses to low-amplitude sonic booms with those created by an aircraft flying at very high speeds over multiple, geographically dispersed communities. Since thousands of interview contact attempts will eventually be required during the course of a single LBFD flight mission, a high degree of automation is also essential.
When configured to permit four rings (a total of 24 seconds, at six seconds per ring) before classifying a call as a no-answer, an answering device, or an answering service, an outbound IVR system can place and disposition about two calls per minute per dialing port to telephone records algorithmically selected from a sampling frame. For pilot testing purposes, a 48-port outbound IVR system can make tens of thousands of attempts to administer a short (3 minutes or less) interview over the course of a 12-hour (9:00 AM – 9:00 PM) dialing day. Using a predictive dialing algorithm, it is also possible for 50 – 100 trained interviewers in one or more centrally supervised calling centers to make similar numbers of telephone contact attempts over the course of several days, but at considerably greater cost.
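For illustration only, the following minimal sketch restates the dialing-capacity arithmetic given above (the function name and default values are illustrative assumptions drawn from the pacing figures in the preceding paragraph, not a description of any particular vendor's dialing system).

```python
# Illustrative sketch (not NASA's dialing software) of the dialing-capacity
# arithmetic stated above: roughly two calls per minute per port, 48 ports,
# and a 12-hour dialing day. Function name and defaults are assumptions
# drawn from the figures in the preceding paragraph.

def daily_dialing_capacity(calls_per_minute_per_port: float = 2.0,
                           dialing_ports: int = 48,
                           dialing_hours: float = 12.0) -> float:
    """Approximate number of contact attempts an outbound IVR system can place per dialing day."""
    minutes_per_day = dialing_hours * 60.0
    return calls_per_minute_per_port * dialing_ports * minutes_per_day


if __name__ == "__main__":
    # 2 calls/min/port x 48 ports x 720 minutes = about 69,000 attempts,
    # consistent with "tens of thousands" of attempts per 12-hour day.
    print(f"{daily_dialing_capacity():,.0f} contact attempts per dialing day")
```

At the stated pacing, roughly 69,000 attempts are possible per 12-hour dialing day, consistent with the "tens of thousands" figure above.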
Overall management of calling and archiving of interview responses in digital databases will be fully automated. Responses to questionnaire items will be anonymized so that they can be associated with individual respondents only by case numbers.
4. Efforts to identify duplication.
NASA’s LBFD aircraft will be a one-of-a-kind test article, and the only civil aircraft capable of producing shaped (low-amplitude) sonic booms. Prior studies of community response to sonic booms, such as those summarized by Fidell (1996), have involved public exposure to far higher levels of sonic booms and other high energy impulsive sounds. Dosage-response relationships such as those of ISO 1996-1 (2016) (“Acoustics – Description, Measurement and Assessment of Environmental Noise - Part 1: Basic Quantities and Assessment Procedures”) do not apply to impulsive noise, and hence are inappropriate for current purposes.
5. Efforts to minimize the burden on small businesses.
The target population, residential households, does not include small businesses.
6. Consequences of not collecting desired information.
If the current pilot testing is not conducted, the eventual flight tests of the LBFD aircraft will be more complicated, riskier, and costlier. NASA would, for example, have to employ redundant and unnecessarily costly interviewing methods to seek information about public reactions to low-amplitude sonic booms created during actual test flights. Foregoing the pilot test would also 1) increase the cost of creating larger-than-necessary samples to support redundant interviewing methods, while also increasing the total burden on the public; and 2) increase the risk of collecting non-representative and/or insufficient information about the potential annoyance and startle of exposure to low-amplitude sonic booms.
A failure to credibly assess community response to low-amplitude sonic booms will defeat the purpose of NASA’s QueSST program. Absent empirical information about likely community response to exposure to low-amplitude sonic booms, no evidentiary grounds will exist for modifying the present prohibition against overland supersonic flight of CFR Title 14, Chapter I, Subchapter F, Part 91, Subpart I, Section 91.817.
7. Special circumstances.
EXPLAIN ANY SPECIAL CIRCUMSTANCES THAT WOULD CAUSE THIS INFORMATION COLLECTION TO BE CONDUCTED IN A MANNER:
REQUIRING RESPONDENTS TO REPORT INFORMATION TO THE AGENCY MORE OFTEN THAN QUARTERLY
Eligible respondents will be invited to complete a single telephone interview.
REQUIRING RESPONDENTS TO PREPARE A WRITTEN RESPONSE TO A COLLECTION OF INFORMATION IN FEWER THAN 30 DAYS AFTER RECEIPT OF IT
No respondent will be asked to complete any written response.
REQUIRING RESPONDENTS TO SUBMIT MORE THAN AN ORIGINAL AND TWO COPIES OF ANY DOCUMENT
No respondent will be asked to submit any copies of any data collection instrument.
REQUIRING RESPONDENTS TO RETAIN RECORDS, OTHER THAN HEALTH, MEDICAL, GOVERNMENT CONTRACT, GRANT-IN-AID, OR TAX RECORDS FOR MORE THAN THREE YEARS
No respondent will be asked to retain any records for any period of time.
IN CONNECTION WITH A STATISTICAL SURVEY, THAT IS NOT DESIGNED TO PRODUCE VALID AND RELIABLE RESULTS THAT CAN BE GENERALIZED TO THE UNIVERSE OF STUDY
The pre-test survey is designed to produce valid and reliable results that can be generalized to the universe of study.
REQUIRING THE USE OF A STATISTICAL DATA CLASSIFICATION THAT HAS NOT BEEN REVIEWED AND APPROVED BY OMB
No unapproved data classification activities are anticipated.
THAT INCLUDES A PLEDGE OF CONFIDENTIALITY THAT IS NOT SUPPORTED BY AUTHORITY ESTABLISHED IN STATUTE OR REGULATION, THAT IS NOT SUPPORTED BY DISCLOSURE AND DATA SECURITY POLICIES THAT ARE CONSISTENT WITH THE PLEDGE, OR WHICH UNNECESSARILY IMPEDES SHARING OF DATA WITH OTHER AGENCIES FOR COMPATIBLE CONFIDENTIAL USE
All pledges are supported by the authority established in statute or regulation.
REQUIRING RESPONDENTS TO SUBMIT PROPRIETARY, TRADE SECRET, OR OTHER CONFIDENTIAL INFORMATION UNLESS THE AGENCY CAN DEMONSTRATE THAT IT HAS INSTITUTED PROCEDURES TO PROTECT THE INFORMATION'S CONFIDENTIALITY TO THE EXTENT PERMITTED BY LAW.
No trade secrets or items of similar confidential information will be requested.
8. Compliance with 5 CFR 1320.8:
A 60-day Notice was published on 06/21/2023, 88 FR 40336. No comments were received from the public.
A 30-day Notice was published on 08/25/2023, 88 FR 58318.
9. Payments or gifts to respondents.
No incentives of any form are anticipated for participants in the current data collection.
10. Assurance of confidentiality.
Voluntary participants in this pre-test will be assured that any opinions or other information that they provide will never be individually associated with them. Respondents’ answers to questionnaire items (interview responses) will be coded and archived in digital databases, and identified only by anonymous case numbers. No audio recordings will be made of respondents’ responses.
The only linkage between respondent addresses and interview responses will be through the separately maintained sampling frame. The sampling frame itself, however, will contain only information about a random set of residential and other telephone numbers, not information about individual responses, nor respondent names. The sampling frame will not be published. Archived questionnaire information about household addresses and geo-coordinates will be restricted to ZIP code or similar aggregate levels.
11. Sensitive information.
No sensitive or private information about individual survey respondents will be solicited or preserved. The intended questionnaire may be found in Appendix B.
12. Estimate of burden hours for information requested:
As described below, a conservative estimate of the maximum number of required respondents is 20,000.
Category of Respondents | No. of Respondents | Participation Time | Burden Hours
Individuals | 20,000 | 3 minutes (0.05 hours) | 1,000
The public hour burden for the current data collection effort will not exceed three minutes (0.05 hours) per respondent. Table 1 shows sample calculations of the minimal numbers of completed interviews and contact attempts necessary to confidently estimate the proportions of respondents selecting response categories for any single questionnaire item, as a function of the interview completion rate. The leftmost column of the table shows the proportion of respondents selecting particular response categories for an individual questionnaire item (e.g., “very” or “extremely” annoyed). The middle column displays the number of completed interviews necessary to include the indicated proportion of respondents in the 95% confidence interval. The rightmost column shows the expected number of telephone numbers that must be dialed to yield the required number of completed interviews, for a range of interview completion rates from 0.01 to 0.50.
Table 1: Minimal number of completed interviews desired for confident estimation of proportions of respondents selecting individual questionnaire item response categories
Proportion Selecting Response Category for Individual Questionnaire Item | Number of Completed Interviews | Expected Number of Contact Attempts

Interview completion rate = 0.01
0.01 | 44 | 4,400
0.05 | 94 | 9,400
0.10 | 158 | 15,800
0.20 | 264 | 26,400
0.30 | 341 | 34,100
0.50 | 402 | 40,200

Interview completion rate = 0.02
0.01 | 44 | 2,200
0.05 | 94 | 4,700
0.10 | 158 | 7,900
0.20 | 264 | 13,200
0.30 | 341 | 17,050
0.50 | 402 | 20,100

Interview completion rate = 0.05
0.01 | 44 | 880
0.05 | 94 | 1,880
0.10 | 158 | 3,160
0.20 | 264 | 5,200
0.30 | 341 | 6,800
0.50 | 402 | 8,400

Interview completion rate = 0.10
0.01 | 44 | 440
0.05 | 94 | 940
0.10 | 158 | 1,580
0.20 | 264 | 2,640
0.30 | 341 | 3,410
0.50 | 402 | 4,020

Interview completion rate = 0.20
0.01 | 44 | 220
0.05 | 94 | 470
0.10 | 158 | 790
0.20 | 264 | 1,320
0.30 | 341 | 1,705
0.50 | 402 | 2,010

Interview completion rate = 0.50
0.01 | 44 | 88
0.05 | 94 | 47
0.10 | 158 | 316
0.20 | 264 | 528
0.30 | 341 | 682
0.50 | 402 | 804

Calculation: Exact (Clopper-Pearson) two-sided confidence intervals for one proportion provided by PASS 14 (2015) software. References: Fleiss et al. (2003); Newcombe (1998).
For example, the corresponding row of Table 1 shows that if the interview completion rate were 0.02 (i.e., 2 out of every 100 contact attempts yielded a completed interview), and the fraction of respondents reporting high annoyance were 0.10, then 158 completed interviews would be required. The number of contact attempts (telephone numbers dialed) would have to be 7,900 in order for the 0.10 outcome to fall within the 95% confidence interval of the experimentally derived high annoyance percentage.
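The arithmetic relating completed interviews to contact attempts, together with the exact (Clopper-Pearson) interval on which the Table 1 entries are based, can be illustrated with the brief sketch below. It is an illustrative check under stated assumptions, not a reproduction of the PASS 14 computation, and the observed count of "highly annoyed" respondents in the example is hypothetical.

```python
# Illustrative check of the Table 1 arithmetic (not a reproduction of the
# PASS 14 computation). Expected contact attempts equal completed interviews
# divided by the interview completion rate, and the exact (Clopper-Pearson)
# two-sided interval for a single proportion is computed from the beta
# distribution. The observed count of "highly annoyed" respondents below is
# hypothetical.
import math

from scipy.stats import beta


def contact_attempts(completed_interviews: int, completion_rate: float) -> int:
    """Expected number of telephone numbers dialed to yield the completed interviews."""
    return math.ceil(completed_interviews / completion_rate)


def clopper_pearson(successes: int, n: int, confidence: float = 0.95):
    """Exact two-sided confidence interval for a single proportion."""
    alpha = 1.0 - confidence
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper


if __name__ == "__main__":
    # 158 completed interviews at a 0.02 completion rate -> 7,900 dialed numbers.
    print(contact_attempts(158, 0.02))
    # Hypothetical outcome: 16 of 158 respondents (about 0.10) report high annoyance.
    print(clopper_pearson(16, 158))
```

Running the sketch reproduces the 7,900 contact attempts cited in the example above.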
Table 1 shows that fewer than 500 completed interviews suffice to confidently estimate the proportions of respondents selecting any response category of an individual questionnaire item, at any interview completion rate. Table 2, however, shows that as many as approximately 4,800 completed interviews could be required to detect a significant difference between the interview completion rates achieved by automated and live agent interviewing.
Table 2 shows the minimal numbers of completed interviews required for various combinations of CATI and IVR completion proportions. The table also shows the numbers of contact attempts required to achieve those completed interviews for varying interview completion rates. In all cases, the significance level (alpha) is 0.05 and the statistical power is 0.80 (see footnote 1).
Table 2: Minimal number of completed interviews and contact attempts to detect a difference between interview completion proportions for two interviewing methods
Proportion of Completed Interviews for Condition with Higher Completion Rate (e.g., CATI) | Required Completed Interviews in Condition with Higher Completion Rate (e.g., CATI) | Required Completed Interviews in Condition with Lower Completion Rate (e.g., IVR) | Required Contact Attempts in Condition with Higher Completion Rate (e.g., CATI) | Required Contact Attempts in Condition with Lower Completion Rate (e.g., IVR)

Lower completion rate = 0.01 (e.g., IVR)
0.50 | 9 | 450 | 18 | 45,000
0.30 | 20 | 600 | 67 | 60,000
0.20 | 35 | 700 | 175 | 70,000
0.10 | 89 | 890 | 890 | 89,000
0.05 | 238 | 2,380 | 4,760 | 238,000

Lower completion rate = 0.05 (e.g., IVR)
0.50 | 10 | 100 | 20 | 2,000
0.30 | 28 | 168 | 94 | 3,360
0.20 | 60 | 240 | 300 | 4,800
0.10 | 358 | 716 | 3,580 | 14,320

Lower completion rate = 0.10 (e.g., IVR)
0.50 | 14 | 70 | 28 | 700
0.30 | 48 | 144 | 160 | 1,440
0.20 | 161 | 322 | 805 | 3,220

Lower completion rate = 0.20 (e.g., IVR)
0.50 | 29 | 70 | 58 | 350
0.30 | 249 | 374 | 860 | 1,245

Lower completion rate = 0.30 (e.g., IVR)
0.50 | 74 | 124 | 144 | 414

Calculation: Numeric results for testing two proportions using the z-test with unpooled variance provided by PASS 14 (2015) software. References: Chow et al. (2008); D’Agostino et al. (1988); Fleiss et al. (2003); Lachin et al. (2000); Machin et al. (1997); Ryan (2013).
The leftmost column of Table 2 shows the proportion of completed calls for the condition with the higher completion rate (e.g., CATI). The next two columns display the numbers of completed interviews necessary in the higher- and lower-completion-rate conditions to detect the difference between the two proportions at p < 0.05 with power = 0.80. The rightmost two columns show the numbers of telephone numbers that must be dialed to yield those completed interviews, for completion rates ranging from 0.01 to 0.50. The sections of the table correspond to the completion rate of the condition with the lower proportion of completed calls (e.g., IVR).
For example, suppose that five percent (0.05) of all sample records dialed using the IVR protocol yielded a completed interview, and thirty percent (0.30) of all sample records dialed using the CATI protocol yielded completed interviews. The corresponding row of Table 2 indicates that the numbers of completed interviews must then be 28 and 168 for the CATI and IVR protocols, respectively. The numbers of contact attempts, however, would have to be 94 and 3,360, respectively, for the two interviewing protocols.
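A rough, hedged check of figures such as these is sketched below, assuming a normal-approximation power formula for a two-sided z-test of two proportions with unpooled variance (our assumption about the method, following the calculation note beneath Table 2); the exact PASS 14 outputs reported in Table 2 may differ slightly.

```python
# Rough check of the Table 2 figures using a normal-approximation power
# formula for a two-sided z-test of two proportions with unpooled variance
# (an assumption about the method, following the calculation note beneath
# Table 2; the cited PASS 14 results may differ slightly).
from math import sqrt

from scipy.stats import norm


def two_proportion_power(p_high: float, n_high: int,
                         p_low: float, n_low: int,
                         alpha: float = 0.05) -> float:
    """Approximate power to detect the difference between p_high and p_low."""
    se = sqrt(p_high * (1 - p_high) / n_high + p_low * (1 - p_low) / n_low)
    z_crit = norm.ppf(1 - alpha / 2)
    return float(norm.cdf(abs(p_high - p_low) / se - z_crit))


if __name__ == "__main__":
    # Table 2 example row: CATI completion rate 0.30 with 28 completed
    # interviews vs. IVR completion rate 0.05 with 168 completed interviews.
    print(round(two_proportion_power(0.30, 28, 0.05, 168), 2))  # approximately 0.80
```

For the example row discussed above, this approximation yields a power of roughly 0.80, consistent with the stated design target.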
Note that the figures in Tables 1 and 2 concern only the minimal numbers of completed interviews necessary to reliably discriminate between automated and live agent interviews. Additional analyses of interview completion rates (for example, by population density and national region, possibly by state, and possibly by time of day of interview; see footnote 2) may require considerably larger sample sizes. The number of completed interviews that is ultimately useful is determined by the level of detail needed to inform flight mission planning for NASA’s supersonic demonstrator aircraft.
An upper bound on the number of completed interviews is 20,000, composed of 10,000 automated and 10,000 live agent interviews. The corresponding upper limit on the public hour burden for completed interviews would therefore not exceed 1,000 hours (20,000 * 0.05 hours per interview). The public hour burden associated with non-contacts is zero, while the public hour burden associated with refusals to grant interviews is a matter of seconds per refusal.
13. Estimate of total annual costs to respondents.
The annual cost burden on respondents and record-keepers, other than interview time, is zero.
14. Estimate of cost to the Federal government.
The annual costs of Federal employees for monitoring the contract are estimated to be $36,000, or 0.2 FTE. This estimate includes the Technical Monitor’s time and minimal time from the contracting officer and other NASA employees who participate in technical interchange meetings and reviews.
This information collection is part of a larger risk reduction effort. The costs associated with NASA contractors who will develop and administer the survey and analyze data are estimated at $350,000.
15. Explanation of program changes or adjustments.
This is a new collection, and hence, a program change.
16. Publication of results of data collection.
Publication of the product of this work is anticipated as a NASA Contractor Report. The results will document NASA’s study design considerations for subsequent flight tests of the LBFD aircraft.
17. Approval for not displaying the expiration date of OMB approval.
All interviewing will be administered by telephone, in spoken language. Since no aspect of the questionnaire will be displayed in writing to any respondent, no approval is sought for not displaying the expiration date of OMB approval.
18. Exceptions to certification statement.
The NASA Commercial Supersonic Technology Project takes no exception to 5 CFR 1320.9, per Peter Coen. No exception to the certification statement is sought.
REFERENCES:
Chow, S.C., Shao, J., and Wang, H. (2008). Sample Size Calculations in Clinical Research, Second Edition. Chapman & Hall/CRC. Boca Raton, FL.
Fidell, S., ed. (1996). “Assessment of Community Response to High Energy Impulsive Sounds,” National Research Council, National Academy Press, Washington, D.C.
Fidell, S., Horonjeff, R., and Harris, M. (2012). "Pilot Test of a Novel Method for Assessing Community Response to Low-Amplitude Sonic Booms," NASA CR-2012-217767.
Fleiss, J. L., Levin, B., Paik, M.C. (2003). “Statistical Methods for Rates and Proportions.” Third Edition. John Wiley & Sons, New York.
International Organization for Standardization (ISO) (2016) “Acoustics – Description, Measurement and Assessment of Environmental Noise - Part 1: Basic Quantities and Assessment Procedures” ISO/CD/1996, International Standard 1996-1, ISO, Geneva, Switzerland.
NASA Aeronautics Strategic Implementation Plan, March 2017. https://www.nasa.gov/aeroresearch/strategy
Newcombe, R. G. (1998). “Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods,” Statistics in Medicine, 17, pp. 857-872.
Page, J., Hogdon, V., Krecker, P., Cowart, R., Hobbs, C., Wilmer, C., Koening, C., Holmes, T., Gaugler, T., Shumway, J., Rosenberger, J. and Philips, D. (2014) "Waveforms and Sonic Boom Perception and Response (WSPR): Low-Boom Community Response Program Pilot Test Design, Execution, and Analysis," NASA CR-2014-218180.
PASS 14 Power Analysis and Sample Size Software (2015). NCSS, LLC. Kaysville, UT, ncss.com/software/pass.
Footnote 1: Alpha is the probability of rejecting a true null hypothesis. Power is the probability of rejecting a false null hypothesis.
Footnote 2: Pragmatic constraints (for example, on the number of X-planes available for the flight testing program, aircraft basing and pilot scheduling, the need and ability to fly at night, and various geographic constraints on mission routes) affect the need for large sample sizes as much as statistical considerations do.