
Supporting Statement B

National Park Service Recreation Fee Pricing Study – Survey Pre-Testing

OMB Control Number 1024-NEW



Collections of Information Employing Statistical Methods


The agency should be prepared to justify its decision not to use statistical methods in any case where such methods might reduce burden or improve accuracy of results. When the question “Does this ICR contain surveys, censuses, or employ statistical methods?” is checked "Yes," the following documentation should be included in Supporting Statement B to the extent that it applies to the methods proposed:



1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The potential universe of respondents for this collection will be adults (18 years or older) currently residing in the 50 United States and the District of Columbia. The 2020 U.S. Census reported approximately 258 million adults (18 years or older) in the United States. The sampling frame for this collection will consist of 2,500 residential addresses listed in the United States Postal Service’s Delivery Sequence File (DSF) and purchased from a private sample provider. The sample drawn from the DSF will be stratified by state, with the sample size in each state proportional to the state’s adult population. The DSF provides an option to exclude the following types of addresses from the sampling frame:

  • seasonal,

  • vacant,

  • traditional post office boxes (except those designated as “only way to get mail”) and

  • drop points (mail delivery points that serve multiple households or businesses).

With these exclusions, the number of addresses available for sampling is 129,364,328.
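To make the proportional-to-population allocation concrete, a minimal sketch is shown below; the state abbreviations and adult-population figures are hypothetical placeholders, not the counts that will actually be used.

```python
# Sketch: allocate the 2,500-address sample across states in
# proportion to each state's adult population.
# Population figures below are hypothetical placeholders.
TOTAL_SAMPLE = 2_500

adult_population = {
    "CA": 30_000_000,
    "TX": 22_000_000,
    "VT": 520_000,
    # ...remaining states and the District of Columbia...
}

total_adults = sum(adult_population.values())
allocation = {
    state: round(TOTAL_SAMPLE * pop / total_adults)
    for state, pop in adult_population.items()
}
# Rounding can leave the strata a few addresses off the total; a
# largest-remainder adjustment would reconcile them in practice.
```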



Pre-test

A subsample of 500 of the 2,500 households will be randomly selected to participate in the survey pre-test. Using the DSF, we anticipate that 92% (n=460) will be valid addresses; of those, we expect a 25% response rate, resulting in approximately 115 completed surveys.

Pre-test Debrief

All respondents to the pre-test survey will be given the opportunity to participate in a follow-up phone interview to debrief on questionnaire content, clarity, and flow. The survey team will contact up to 20 respondents who volunteer to participate. Volunteers will be asked to provide a telephone number and a time of day they may be reached. Respondents will be contacted in the order of survey completion. If a respondent is contacted but is not available to talk and is willing to reschedule, a follow-up appointment will be made. If contact cannot be made with a respondent at a scheduled interview time, the next respondent who agreed to participate will be contacted. Respondents who agreed to participate but do not answer a telephone call at the pre-arranged interview time will be re-contacted by email to arrange a possible alternative interview time. Because respondents will have already volunteered their contact details for the express purpose of completing a debrief interview, we assume a relatively high response rate of 80%, resulting in 16 completed interviews.

Pilot

The remaining 2,000 households will participate in the pilot. Using the DSF, it is anticipated that 92% (n=1,840) of these addresses will be valid and the remaining 8% (n=160) will be invalid. From the valid addresses, we anticipate a 25% response rate, resulting in approximately 460 completed surveys. When combined with pre-test responses, this will provide approximately 575 completed surveys.

To ensure that households without internet access have an opportunity to participate in the survey, Resource Systems Group (RSG) will offer telephone interviews as an option. A toll-free number will be provided that respondents can call to complete the survey with a live interviewer. Based on past experience with this method, we anticipate that fewer than 1% of respondents will complete the survey via telephone (Fowler et al., 2018).

Non-response Survey

A sample of 250 non-respondents will be selected for the non-response follow-up survey. The results from the non-response survey will be compared to the pilot and pre-test results to identify potential response bias. As with the pilot survey, the addresses will be stratified by state, with sampling proportional to the number of non-respondent addresses in each state. Assuming a 20% response rate, we expect to receive at least 50 non-response follow-up survey responses.

A total of 2,500 addresses will be sampled. Table 1 shows the total number of contacts, expected response rates, and completions. The total number of valid contacts represents the expected number of contacts by mail or phone that will be made with study participants, excluding the invalid addresses described above.

Table 1: Respondents for Survey Pretesting

Pretesting Elements     | Total Number of Valid Contacts | Expected Response Rate | Number of Completed Responses | Refusals
Pre-test                | 460                            | 25%                    | 115                           | 345
Pilot                   | 1,840                          | 25%                    | 460                           | 1,380
Non-response Follow-Up  | 250                            | 20%                    | 50                            | 200
Pre-test Debrief        | 20                             | 80%                    | 16                            | 4
Total                   | 2,570                          |                        | 641                           | 1,929
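As a cross-check on the figures in Table 1, a minimal sketch reproducing the expected completions and refusals from the stated valid-contact counts and response rates:

```python
# Sketch: reproduce Table 1's expected completions and refusals from
# the valid-contact counts and expected response rates stated above.
elements = {
    "Pre-test":               (460, 0.25),
    "Pilot":                  (1_840, 0.25),
    "Non-response Follow-Up": (250, 0.20),
    "Pre-test Debrief":       (20, 0.80),
}

total_contacts = total_completes = 0
for name, (contacts, rate) in elements.items():
    completes = round(contacts * rate)
    refusals = contacts - completes
    total_contacts += contacts
    total_completes += completes
    print(f"{name}: {completes} completes, {refusals} refusals")

print(f"Total: {total_contacts} contacts, {total_completes} completes, "
      f"{total_contacts - total_completes} refusals")
```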

2. Describe the procedures for the collection of information including:

* Statistical methodology for stratification and sample selection,

* Estimation procedure,

* Degree of accuracy needed for the purpose described in the justification,

* Unusual problems requiring specialized sampling procedures, and

* Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Sample Stratification, Selection and Weighting

A stratified random sample of addresses will be drawn from the DSF frame. The sample will be stratified by state to ensure an adequate geographic distribution of respondents, with the initial sample size in each state proportional to the state’s adult population.

Each sampled address will be assigned a unique passcode. Each mail-based invitation will include a sampled address and the associated passcode, which the respondent will use to access the online survey. Using a relational database, each respondent’s survey instance can be tied back to an individual sampled address. This will allow the researchers to track completion by geography and to use dynamic features within the survey platform itself, such as retrieving the specific NPS units within a given distance of a respondent’s home ZIP code.
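As an illustration of the passcode linkage, a minimal sketch follows; the passcode length and alphabet are assumptions, and a production system would store the mapping in the relational database described above rather than in memory.

```python
import secrets

# Sketch: assign each sampled address a unique passcode so that a
# completed survey can be tied back to the address that received it.
# The passcode length and alphabet here are illustrative assumptions.
ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # avoids ambiguous characters

def make_passcode(length: int = 8) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def assign_passcodes(addresses: list[str]) -> dict[str, str]:
    """Return a passcode -> address lookup with no collisions."""
    lookup: dict[str, str] = {}
    for addr in addresses:
        code = make_passcode()
        while code in lookup:  # regenerate on the rare collision
            code = make_passcode()
        lookup[code] = addr
    return lookup

# A survey response arriving with a passcode can then be joined back
# to its sampled address (and hence its state and ZIP code).
lookup = assign_passcodes(["123 Main St, Montpelier, VT 05602"])
```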

A single adult will be randomly selected within each residential address using the last birthday method (Oldendick et al. 1988). The method will be implemented by incorporating a statement in the cover letter that the survey should be completed by the adult living in the household who most recently celebrated a birthday.

Survey weights will be constructed in two stages. First, design weights will be constructed as the inverse selection probability for each respondent. Given the random selection of an adult at each address, the design weights will simply equal the number of adults living at the respondent’s address scaled by a state-specific expansion factor. The state-specific expansion factor allows the sum of the respondents’ design weights in each state to match the state’s adult population. Second, these design weights will be calibrated through iterative proportional fitting, or “raking” (Kolenikov 2014; Battaglia, Hoaglin, and Frankel 2009), to match demographic controls (e.g., gender, age, ethnicity, and education) at the U.S. Census region level.
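For illustration only, the sketch below implements the two weighting stages under assumed column names (`n_adults`, `state`, and the demographic variables); an actual analysis would more likely rely on an established routine such as the Stata raking command documented in Kolenikov (2014).

```python
import pandas as pd

def design_weights(df: pd.DataFrame, state_adult_pop: dict) -> pd.Series:
    """Stage 1: weight = adults at the address, scaled so weights in
    each state sum to that state's adult population (column names
    are assumptions)."""
    w = df["n_adults"].astype(float).copy()
    for state, pop in state_adult_pop.items():
        mask = df["state"] == state
        w[mask] *= pop / w[mask].sum()
    return w

def rake(df, weights, margins, max_iter=100, tol=1e-6):
    """Stage 2: iterative proportional fitting to demographic control
    totals. `margins` maps a column name to {category: target_total}."""
    w = weights.copy()
    for _ in range(max_iter):
        max_adj = 0.0
        for col, targets in margins.items():
            for cat, target in targets.items():
                mask = df[col] == cat
                current = w[mask].sum()
                if current > 0:
                    factor = target / current
                    w[mask] *= factor
                    max_adj = max(max_adj, abs(factor - 1))
        if max_adj < tol:  # all margins matched within tolerance
            break
    return w
```

Raking cycles through the control margins, rescaling the weights until all targets are matched within tolerance; Battaglia, Hoaglin, and Frankel (2009) discuss practical considerations such as handling extreme weights.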

In terms of accuracy, precisely estimated coefficients (e.g., p ≤ .05) on the quantity and price parameters are required to estimate the desired elasticities. Based on our collective experience with prior choice-modeling studies, the expected number of survey and stated-preference (SP) question responses will be sufficient for these purposes.
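For context on how elasticities follow from the estimated parameters, the sketch below uses the standard own-price elasticity formula for a multinomial logit model; the functional form and all numbers are illustrative assumptions, since the document does not specify the model.

```python
# Sketch: own-price elasticity in a multinomial logit model,
# E = beta_price * price * (1 - P), where P is the choice probability.
def logit_own_price_elasticity(beta_price: float, price: float, prob: float) -> float:
    return beta_price * price * (1.0 - prob)

# Example with illustrative numbers (not study estimates): a price
# coefficient of -0.05, a $30 entrance fee, and a 20% choice share.
e = logit_own_price_elasticity(-0.05, 30.0, 0.20)  # -> -1.2
```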


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


Several measures will be implemented to maximize survey response. First, each sampled address will receive two mailings via first-class mail: an invitation letter and a reminder postcard. The mailings will describe the purpose and importance of the survey effort, display the NPS visual identifier, and be signed by a high-level NPS employee. Second, a $2 monetary incentive will be included with the invitation letter. Third, materials will be carefully crafted to signal a high-quality research effort and to minimize respondent burden.

Potential nonresponse bias will be evaluated by conducting a non-response follow-up survey via USPS Priority Mail. To maximize the likelihood of response, the non-response follow-up survey will be relatively short (<8 questions) and will include a $5 monetary incentive. The questions in the non-response follow-up survey will include a subset of the questions from the main survey, selected to characterize non-respondents with respect to park visitation history, attitudes toward National Parks, and demographic characteristics. The potential for nonresponse bias will be assessed by comparing responses from this follow-up survey with the responses from the primary survey.
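One way the respondent/non-respondent comparison might be carried out is sketched below for a single categorical item; the item, the counts, and the choice of a chi-square test are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sketch: compare the distribution of a categorical item between the
# primary-survey respondents and the non-response follow-up sample.
# Counts below are hypothetical placeholders.
#                            visited  did_not_visit
primary_counts  = np.array([230, 345])  # pilot + pre-test completes
followup_counts = np.array([15, 35])    # non-response follow-up

table = np.vstack([primary_counts, followup_counts])
chi2, p_value, dof, expected = chi2_contingency(table)

# A small p-value would suggest the two groups differ on this item,
# flagging potential non-response bias for that measure.
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```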

4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


The initial sample of 500 households selected for the pre-test version of the survey will be asked to provide any comments or suggestions that might improve the wording, flow, or layout of the questionnaire. All respondents will be asked to provide a telephone number for a follow-up debriefing interview.

A sample of 20 respondents volunteering a telephone number will be contacted within one week of completing the pre-test. The debriefing interview will be conducted by proceeding sequentially through each section of the survey and asking respondents for comments on question wording, flow, and layout. Respondents who completed the SP section of the survey will be flagged in the dataset storing survey responses and prioritized for these debriefing calls. Researchers will revise the survey to address any issues identified during the debriefing interviews. Following any revisions, the remaining 2,000 sampled households will be invited to participate in the pilot survey.


5. Provide the names and telephone numbers of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Individuals consulted on statistical aspects of the design

  • Dr. Thomas Adler, President, Resource Systems Group, Inc.

  • Mr. Jeffery Dumont, Senior Data Scientist, Resource Systems Group, Inc.

  • Mr. Robert Paterson, Principal, Industrial Economics, Inc.

Contractor collecting and analyzing the information for the agency

  • Mr. Tristan Cherry, Senior Consultant, Resource Systems Group, Inc.



References Cited

Battaglia, M. P., D. C. Hoaglin, and M. R. Frankel. 2009. “Practical Considerations in Raking Survey Data.” Survey Practice 2(5).

Fowler, M., T. Cherry, T. Adler, M. Bradley, and A. Richard (RSG). 2018. 2015-2017 California Vehicle Survey. California Energy Commission. Publication Number: CEC-200-2018-006.

Kolenikov, S. 2014. “Calibrating Survey Data Using Iterative Proportional Fitting (Raking).” The Stata Journal 14(1): 22-59.

Oldendick, R. W., G. F. Bishop, S. B. Sorenson, and A. J. Tuchfarber. 1988. “A Comparison of the Kish and Last Birthday Methods of Respondent Selection in Telephone Surveys.” Journal of Official Statistics 4: 307-318.




