
Supporting Statement: Visibility Valuation Survey Pre-Testing and Pilot Study (OMB# 1024-XXXX)


A. Justification


1. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection. Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.


The National Park Service (NPS) is directed by its Organic Act to “conserve the scenery and the natural and historic objects and the wildlife . . . and to provide for the enjoyment of the same in such manner and by such means as will leave them unimpaired for the enjoyment of future generations” (16 U.S.C. §1). The Clean Air Act (CAA) includes provisions designed explicitly to maintain and enhance visibility at national parks and wilderness areas (Sections 169A, 169B, and 110(a)(2)(J)). The CAA charges the NPS with an “affirmative responsibility to protect air quality related values (including visibility)” (42 U.S.C. §7475(d)(2)(B)).


(The above references are included as ATTACHMENT A at the conclusion of this supporting statement.)


Visibility valuation information is essential for an informed evaluation of the benefits and costs of state and federal efforts to improve air quality. Although several studies were conducted in the late 1970s and 1980s to estimate the benefits of improvements in visibility (e.g., Brookshire et al., 1979; Rae, 1983; Tolley et al., 1986), methods for estimating the economic benefits of improvements in environmental quality have evolved substantially since these early studies were completed. These advances offer the opportunity to develop more accurate and reliable measures of visibility benefits.


Information regarding the value the public places on changes in visibility assists the NPS in efficiently managing park units, where visual quality is fundamental to the visitor experience (e.g., Meldrum et al., 2006). In addition, the NPS serves in an advisory capacity on regulatory measures to achieve CAA requirements (including the Regional Haze Rule, 40 CFR Part 51).


Brookshire, David S., Ralph C. d’Arge, William D. Schulze and Mark A. Thayer. 1979. Methods Development for Assessing Tradeoffs in Environmental Management, Vol. II: Experiments in Valuing Non-Market Goods: A Case Study of Alternative Benefit Measures of Air Pollution Control in the South Coast Air Basin of Southern California. Prepared for the U.S. Environmental Protection Agency, Office of Research and Development.


Meldrum, B., S. Hollenhorst, L. Le, and M. Manni. 2006. Clean Air in the National Parks: A Report on Visitor Perceptions and Values. Draft report, NPS Social Science Program, March 2006.


Rae, Douglas A. 1983. “The Value to Visitors of Improving Visibility at Mesa Verde and Great Smoky National Parks.” In R.D. Rowe and L.E. Chestnut (eds.), Managing Air Quality and Scenic Resources at National Parks and Wilderness Areas. Boulder, CO: Westview Press.


Tolley, George, Alan Randall, and Glen Blomquist. 1986. Establishing and Valuing the Effects of Improved Visibility in Eastern United States. Prepared for U.S. Environmental Protection Agency, Office of Policy, Planning and Evaluation.



2. Indicate how, by whom, and for what purpose the information is to be used. Except for a new collection, indicate the actual use the agency has made of the information received from the current collection. [Be specific. If this collection is a form or a questionnaire, every question needs to be justified.]


The NPS seeks updated visibility valuation information that will permit an accurate evaluation of programs and policies affecting visibility in NPS-managed areas. The current evaluation of federal and state air quality legislation and regulations, as well as regional plans and policies that affect NPS-managed areas, is based on visibility valuation information derived from Chestnut and Rowe (1990) (e.g., see EPA, 2005). Beyond the age of that study, regulators and stakeholders alike have identified several limitations, including its limited sample frame (EPA, 2005; Leggett et al., 2004).


The first step in collecting this information is survey pre-testing, which will be conducted through a series of focus groups. These focus groups will be used to develop and refine the final survey instrument. In each focus group, respondents will be asked to read or view, evaluate, and discuss survey questions, explanatory text, and candidate photographs that may be used in the final survey instrument. In particular, respondents will be asked about their ability to understand explanatory material on visibility, the appropriateness of the characteristics of various visibility improvement programs, and their ability to distinguish visibility issues from other air pollution concerns (e.g., ecological and health effects). In addition, standard socio-demographic information will be collected. The purpose of these exercises is to evaluate the information presentation, reliability, internal consistency, response variability, and other properties of the draft survey. Results will be used to improve the draft survey instrument. The study team will proceed iteratively, modifying the draft instrument after each focus group to ensure that the final wording of all questions is clear and unbiased and effectively addresses the relevant issues.


Justification for each specific question in the Focus Group Moderator Script and Discussion Guide follows, organized by topic and question number.


Topic 1. Introduction

There are no questions in this section.


Topic 2. Threats to National Parks

Questions #1 and #2 and associated moderator questions are intended to encourage participants to think about their experience with national parks and wilderness areas and to assess participant perceptions of threats to those areas. These questions are necessary to acclimate the respondent to the subject matter of the survey. These types of questions will assist the moderator in facilitating the discussion; similar questions also appear in the survey.


Topic 3. Introduction to Haze

Moderator questions are intended to gauge respondent understanding of “haze” (e.g., as distinct from other air-quality or weather-related visibility issues) and their interpretation of the information in the discussion guide materials. These questions are necessary to investigate the appropriate nature and extent of background information to provide to respondents in the survey. Here, these questions also give focus group participants context for the subsequent program evaluation questions.


Topic 4. Distribution of Haze

Moderator questions are intended to gauge respondent comprehension of and preference for different methods of presenting the annual distribution of haze. Representing haze as a distribution, rather than as an average level, is most appropriate, and there are several ways this could be accomplished. These questions are necessary for survey design purposes, to determine how best to present this information. Depiction of the haze distribution is fundamental to the subsequent valuation questions.


Topic 5. Efforts to Reduce Haze

The program question and associated moderator questions are intended to assess general respondent reaction to, and acceptance of, a program to reduce haze. In the survey, this type of question will be used to detect respondent rejection of the valuation scenario and thus provide information on certain patterns of choice question responses (e.g., respondents who only choose “no program”). In addition, these questions provide necessary context for the choice questions in the following section.


Topic 6. Choice Questions

Questions #1 and #2 are sample attribute-based choice questions. These questions are necessary to elicit information on respondents’ values for visibility improvements. Associated moderator questions are intended to assess respondent understanding of and reaction to the choice questions. In the survey, statistical analysis of the choice question data will yield a model capable of predicting economic values for a wide range of improved visibility scenarios. Here, they serve as a test to ensure that the choice question format and the choice attributes and levels are functioning properly.



The second step, a pilot study, will be designed to further refine the survey instrument and to provide information that will allow the NPS to account for the potential impact of mail survey non-response on benefit estimates. One concern with a mail survey is that individuals can examine the contents of the survey before deciding whether to respond. This could lead to non-response bias if the individuals who choose to respond are more interested in the survey topic (e.g., improving visibility) than those who choose not to respond. The pilot study will therefore involve a split-sample comparison between a mail survey and an in-person survey. Respondents will be asked to complete the survey instrument developed during the pre-testing stage. The pilot study will gather information to evaluate the impact of non-response on benefit estimates, and the results will ultimately provide a means to adjust the benefit estimates obtained in the main survey for potential non-response bias. The final questions appearing in the survey instrument will depend on the pre-testing results. In general, the survey will describe the characteristics of various visibility improvement programs and ask respondents to select a preferred program. The survey will also include socio-demographic questions and other questions designed to evaluate the respondent's motivation in selecting a preferred program.
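The split-sample design described above can be sketched as follows. This is a minimal illustration only; the household identifiers, the enumeration count, and the use of Python are assumptions for exposition, not study specifications.

```python
# Minimal sketch of a mail / in-person split-sample assignment (illustrative
# only; household IDs and the enumeration count are placeholders).
import random

random.seed(1)  # fixed seed so the illustration is reproducible

enumerated = [f"HH-{i:03d}" for i in range(1, 501)]     # e.g., 500 enumerated households
sample = random.sample(enumerated, 100)                 # simple random sample of 100
mail_group, in_person_group = sample[:50], sample[50:]  # random 50/50 mode split

print(len(mail_group), len(in_person_group))            # 50 50
```

Because random.sample returns the draw in random order, splitting it in half yields a random assignment of sampled households to the two survey modes.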


Justification for each specific question in the Pilot Questionnaire follows, organized by topic and question number.


Section A. Experience with Natural Areas and Policy Priorities

Questions #1 through #5 are necessary to control for respondents’ level of experience visiting national parks and wilderness areas and their intensity of preferences with respect to federal spending and public policy issues. Specifically, Question #3 encourages respondents to reflect on the various services provided by the federal government. Question #4 encourages specific reflection on environmental quality and natural resource management issues. These questions engage respondents on the general subject area of the survey, different policy issues facing the public, and competing uses of limited budget funds. These questions are necessary to acclimate the respondent to the subject matter of the survey and the ultimate choice questions.


Section B. Introduction to Haze

Questions #6 and #7 are necessary to assess respondent familiarity with and attitudes toward haze in national parks and wilderness areas. Question #8 is necessary to determine respondents’ impressions of baseline levels of haze in national parks. Taken together, these questions are necessary to provide appropriate context for the choice questions presented in the following section.


Section C. Haze Reduction Program and Choice Questions

Questions #10, #11, and #12 are the attribute-based choice questions that will provide the data necessary to estimate economic values for visibility improvements. Statistical analysis of the choice question data will yield a model capable of predicting economic values for a wide range of improved visibility scenarios. As is customary, multiple repetitions of the choice question are used to increase the statistical efficiency obtained from the chosen sample size. Question #9 is necessary to detect respondent rejection of the valuation scenario.


Section D. Demographic Questions

Questions #13 through #20 are necessary to assess the basic social and demographic attributes of respondents. This information will be used to determine how preferences for visibility improvements vary with these characteristics.


Chestnut, L. and R. Rowe. 1990. Preservation Values for Visibility Protection at the National Parks. Prepared for U.S. Environmental Protection Agency, Office of Air Quality Planning and Standards, and National Park Service, Air Quality Management Division.


Leggett, C., K. Boyle, R. Carson, and R. Unsworth. 2004. Valuing Visibility in National Parks: An Overview of the Challenges. Final report, prepared for NPS Air Resources Division, July 2004.


U.S. EPA. 2005. Regulatory Impact Analysis for the Final Clean Air Visibility Rule or the Guidelines for Best Available Retrofit Technology (BART) Determinations Under the Regional Haze Regulations. EPA-452/R-05-004, June 2005.



3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection. Also describe any consideration of using information technology to reduce burden [and specifically how this collection meets GPEA requirements.].


The information collection will not involve use of automated, electronic, mechanical, or other technological collection techniques.



4. Describe efforts to identify duplication. Show specifically why any similar information already available cannot be used or modified for use for the purposes described in Item 2 above.


An exhaustive review of the visibility valuation literature (Leggett et al., 2004, referenced above) was undertaken to assess the quality and applicability of existing information. The information currently relied upon for policy purposes was collected over 15 years ago and is limited in geographic scope (Chestnut and Rowe, 1990). Important advances in non-market valuation techniques have taken place over this period. For example, the proposed study will employ attribute-based choice questions (e.g., Louviere et al., 2000; Holmes and Adamowicz, 2003). This approach provides an opportunity to address certain challenges encountered in previous visibility valuation efforts, such as the embedded valuation of health or ecological effects, and will generate flexible valuation information that can be applied to a broad array of policy contexts.


Chestnut, L. and R. Rowe. 1990. Preservation Values for Visibility Protection at the National Parks. Prepared for U.S. Environmental Protection Agency, Office of Air Quality Planning and Standards, and National Park Service, Air Quality Management Division.


Holmes, T. and W. Adamowicz. 2003. “Attribute-Based Stated Preference Methods.” Chapter 6 in P. Champ, K. Boyle, and T. Brown (eds.), A Primer on Nonmarket Valuation. Kluwer.


Louviere, J., D. Hensher, and S. Swait. 2000. Stated Choice Methods. Cambridge University Press.



5. If the collection of information impacts small businesses or other small entities (Item 5 of OMB Form 83-I), describe any methods used to minimize burden.


The information collection will not impact small businesses or other small entities.



6. Describe the consequence to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles to reducing burden.


The NPS seeks current and accurate economic information for purposes of evaluating visibility changes brought about by Clean Air Act provisions, including the Regional Haze Rule. As noted, several limitations of currently available information have been identified, including potential embedded values for non-visibility benefits, a limited sample frame, and a focus on average visibility conditions versus a distribution (Leggett et al., 2004, referenced above). Without this collection, the NPS would be forced to rely on outdated information when evaluating programs and policies affecting visibility in NPS-managed areas, potentially compromising the accuracy and reliability of these evaluations.

In addition, the survey pre-testing and pilot study are necessary to assess whether the materials function adequately prior to full implementation, and to collect information that can be used in an ex post correction of survey non-response bias.



7. Explain any special circumstances that would cause an information collection to be conducted in a manner:

* requiring respondents to report information to the agency more often than quarterly;

* requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;

* requiring respondents to submit more than an original and two copies of any document;

* requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records, for more than three years;

* in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;

* requiring the use of a statistical data classification that has not been reviewed and approved by OMB;

* that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or

* requiring respondents to submit proprietary trade secrets, or other confidential information unless the agency can demonstrate that it has instituted procedures to protect the information's confidentiality to the extent permitted by law.


No special circumstances apply to this information collection.



8. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency's notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice [and in response to the PRA statement associated with the collection over the past three years] and describe actions taken by the agency in response to these comments. Specifically address comments received on cost and hour burden.


Describe efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported. [Please list the names, titles, addresses, and phone numbers of persons contacted.]


Consultation with representatives of those from whom information is to be obtained or those who must compile records should occur at least once every 3 years — even if the collection of information activity is the same as in prior periods. There may be circumstances that may preclude consultation in a specific situation. These circumstances should be explained.


A 60-day Notice was published in the Federal Register on October 10, 2006 (Vol. 71, No. 195, pp. 59521-59522). A copy of this notice is included as ATTACHMENT B. ATTACHMENT C contains a copy of the 30-day Federal Register Notice published on September 17, 2007 (Vol. 72, No. 179, pp. 52909-52910).


Only one comment, submitted via e-mail, was received in response to the 60-day Federal Register notice for this one-time information collection. The commenter questioned why the visibility study was necessary, noting that regulations protecting air quality are already in place but are either not stringent enough or inadequately enforced. The commenter added that the most important air quality-related issue is human health, particularly children's health.


Response: Regulations to protect and improve air quality are currently in place, and new regulations may be proposed in the future. Up-to-date economic information is necessary to determine whether these regulations are efficient. Visibility is a valued component of air quality, but the current valuation information is outdated and lacks the benefit of recent advances in measuring such values. The information proposed in this collection will assist regulators in making better-informed air policy decisions. Human health-related issues are outside the purview of this proposed effort, but are well recognized as the predominant economic benefit of improved air quality.


Comments on study parameters, including the design of the survey pre-testing activities and the pilot study, were solicited from members of the study team, which comprises economics, visibility science, and survey research experts:


Dr. Richard Carson, Professor and Chair, Department of Economics, University of California, San Diego, (858) 534-3384


Dr. Kevin Boyle, Professor and Department Head, Agricultural and Applied Economics, Virginia Polytechnic Institute, (540) 231-2907


John Molenar, Air Resource Specialists, (970) 484-7941


Dr. Robert Mitchell, Professor Emeritus, Clark University, (617) 661-9697


In addition, comments on the initial visibility valuation literature review were solicited from a broad stakeholder group. The following individuals/entities provided comments:


Chuck Layman, Executive Director, Central Regional Air Planning Association


Timothy McClive, Chief Economist, Edison Electric Institute


Naresh Kumar, Electric Power Research Institute


Susan Wierman, Executive Director, Mid-Atlantic Regional Air Management Association


John Hornback, Executive Director, Visibility Improvement State and Tribal Association of the Southeast


Jeff Blend, Air, Energy and Pollution Prevention Bureau, Montana Department of Environmental Quality


The literature review was revised to incorporate these comments.



9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.


Following standard practice, respondents will be given an honorarium ($75) for participating in the focus groups.


Randomly assigned monetary incentives of up to $35 will be provided to pilot survey respondents. The purpose of offering different amounts is to induce variation in response rates, which will help identify any relationship between the response rate and the valuation information collected (see, e.g., Rathbun and Baumgartner, 1996; Warriner et al., 1996).


We understand that OMB prohibits the use of monetary incentives in final, fully implemented surveys. As noted, incentives will be included only in the pilot survey, as a means to enhance variation in response rates.
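For illustration, a balanced random assignment of incentive levels might look like the sketch below. The dollar tiers shown are hypothetical; only the $35 maximum comes from the study design.

```python
# Balanced random assignment of incentive levels (illustrative only; the
# dollar tiers are hypothetical, and only the $35 cap reflects the design).
import random

random.seed(2)

pilot_sample = [f"HH-{i:04d}" for i in range(1, 1601)]  # 16 neighborhoods x 100 households
tiers = [0, 10, 20, 35]                                 # hypothetical incentive tiers
levels = tiers * (len(pilot_sample) // len(tiers))      # equal-sized blocks (400 each)
random.shuffle(levels)                                  # randomize the assignment

incentive = dict(zip(pilot_sample, levels))
print(sum(amount == 35 for amount in incentive.values()))  # 400
```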


Rathbun, P.R. and R.M. Baumgartner. 1996. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the 1996 Joint Statistical Meetings. Chicago, Illinois. June.


Warriner, K., J. Goyder, H. Gjertsen, P. Hohner, and K. McSpurren. 1996. "Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment." Public Opinion Quarterly 60 (4): 542-562.



10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.


No assurance of confidentiality will be provided to respondents, since the Department of the Interior does not have the statutory authority to protect confidentiality or to exempt the survey from a request under the Freedom of Information Act. Instead, participants will be told that their answers will be used only for statistical purposes and that the reports prepared will not be associated with any specific individuals. Respondents will further be informed that the researchers will not provide identifying information to anyone outside the study team, except as required by law. Finally, contact information will ultimately be destroyed, rendering the survey data anonymous.


11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private. This justification should include the reasons why the agency considers the questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.


No questions of a sensitive nature will be asked.


12. Provide estimates of the hour burden of the collection of information. The statement should:

* Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated. Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates. Consultation with a sample (fewer than 10) of potential respondents is desirable. If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance. Generally, estimates should not include burden hours for customary and usual business practices.

* If this request for approval covers more than one form, provide separate hour burden estimates for each form and aggregate the hour burdens in Item 13 of OMB Form 83-I.

* Provide estimates of annualized cost to respondents for the hour burdens for collections of information, identifying and using appropriate wage rate categories. The cost of contracting out or paying outside parties for information collection activities should not be included here. Instead, this cost should be included in Item 14.


  • Approximately 120 volunteer respondents will participate in the pre-testing, with 10 respondents in each of 12 focus groups. Four groups will be held in each of three cities: Atlanta, GA; Chicago, IL; and Sacramento, CA. These cities were chosen to provide information from areas with different visibility conditions and different proximity to NPS park units. Respondents will be recruited through brief telephone calls from a sample of local telephone listings, with each call lasting approximately three minutes. Individuals who agree to participate will be asked to meet with the survey researchers at a nearby central location, where the focus group will be held.


  • Up to 1,200 recruitment phone calls are anticipated to recruit the 120 participants. At three minutes each, these calls amount to 60 hours. Each focus group will last approximately two hours, with approximately 30 minutes required for travel to and from the group; for 120 participants, this amounts to 300 hours. We therefore estimate the total respondent burden for the focus groups to be 360 hours (60 hours for recruitment calls and 300 hours for the focus groups).


  • For the pilot study, 16 neighborhoods will be selected in two metropolitan areas for sampling. Phoenix, AZ and Syracuse, NY were chosen to provide information from two areas with different visibility conditions and proximity to NPS park units. Within each neighborhood, all owner-occupied households will be enumerated and a simple random sample of 100 households will be selected for the survey. Each neighborhood sample will be split into two groups, with 50 households assigned to a mail survey group and 50 households assigned to an in-person survey group.


  • Assuming an overall response rate of 50 percent, approximately 800 respondents will participate in the pilot study, 50 in each of the 16 neighborhoods. Households selected for the mail survey group will be sent a pre-survey notification letter, followed by a survey instrument that will take approximately 20 minutes to complete. Non-respondents will receive a reminder letter and up to two replacement surveys. Households selected for the in-person group will receive an advance phone call followed by a personal visit to the place of residence, during which interviewers will ask respondents to complete a survey identical to the mail survey. In-person interviewers will attempt multiple call-backs if the respondent is not home.


  • We assume that each mail survey non-respondent will spend approximately five minutes reading the survey materials and deciding not to respond. Assuming 480 mail survey non-respondents (i.e., a 40% response rate), this amounts to 40 hours. We assume that each in-person survey non-respondent spends approximately three minutes speaking to an interviewer (on the telephone or in person) and deciding not to participate. Assuming 320 in-person survey non-respondents (i.e., a 60% response rate), this amounts to 16 hours. If 800 respondents spend 20 minutes completing the survey plus approximately four minutes for the initial contact, the total burden for respondents is 320 hours. Thus, we estimate the total respondent burden for the pilot study to be 376 hours (56 hours for non-respondents and 320 hours for respondents).


  • The total estimated burden for the focus groups and the pilot study is 736 hours (360 hours for the focus groups and 376 hours for the pilot study); this arithmetic is illustrated in the sketch below.
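The following minimal sketch (illustrative only; it simply restates the figures given in the bullets above) verifies the burden-hour arithmetic:

```python
# Arithmetic check of the burden-hour estimates above (illustrative only).
MIN_PER_HOUR = 60

# Focus groups
recruit_hours = 1200 * 3 / MIN_PER_HOUR            # 1,200 calls x 3 min = 60 hours
session_hours = 120 * (120 + 30) / MIN_PER_HOUR    # 2-hr group + 30 min travel, 120 people = 300 hours
focus_group_total = recruit_hours + session_hours  # 360 hours

# Pilot study (800 mail and 800 in-person contacts)
mail_nonresponse = 480 * 5 / MIN_PER_HOUR          # 480 non-respondents x 5 min = 40 hours
visit_nonresponse = 320 * 3 / MIN_PER_HOUR         # 320 non-respondents x 3 min = 16 hours
respondent_hours = 800 * (20 + 4) / MIN_PER_HOUR   # 800 x (20 min survey + 4 min contact) = 320 hours
pilot_total = mail_nonresponse + visit_nonresponse + respondent_hours  # 376 hours

print(focus_group_total + pilot_total)             # 736.0 total burden hours
```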



13. Provide an estimate of the total annual [non-hour] cost burden to respondents or recordkeepers resulting from the collection of information. (Do not include the cost of any hour burden shown in Items 12 and 14).

* The cost estimate should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life) and (b) a total operation and maintenance and purchase of services component. The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information [including filing fees paid]. Include descriptions of methods used to estimate major cost factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred. Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.

* If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance. The cost of purchasing or contracting out information collection services should be a part of this cost burden estimate. In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.

* Generally, estimates should not include purchases of equipment or services, or portions thereof, made: (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.


The total annual non-hour cost burden to respondents is $0.00.



14. Provide estimates of annualized cost to the Federal government. Also, provide a description of the method used to estimate cost, which should include quantification of hours, operational expenses (such as equipment, overhead, printing, and support staff), and any other expense that would not have been incurred without this collection of information. Agencies also may aggregate cost estimates from Items 12, 13, and 14 in a single table.


The cost of the focus groups and pilot study, based on current budget estimates, is approximately $425,000. The NPS will also contribute in-kind support (approximately 40 hours, or $2,000) in the form of administrative oversight by Susan Johnson of the Air Resources Division. These costs are allocated across the proposed study phases in the table below; an arithmetic check follows the table. With a total project cost of $427,000 over three years, the annualized cost to the Federal government is approximately $142,333.


Study Phase                 Labor       Non-Labor
Atlanta Focus Group         $45,000     $15,000
Chicago Focus Group         $45,000     $15,000
Sacramento Focus Group      $45,000     $15,000
Response Rate Pilot         $105,000    $140,000
NPS Personnel Oversight     $2,000
TOTAL                       $242,000    $185,000
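The table totals and the annualized Federal cost can be checked with the following sketch (illustrative only; all figures restate the table above):

```python
# Check of the cost-table totals and the annualized Federal cost (illustrative only).
labor = {"Atlanta Focus Group": 45_000, "Chicago Focus Group": 45_000,
         "Sacramento Focus Group": 45_000, "Response Rate Pilot": 105_000,
         "NPS Personnel Oversight": 2_000}
non_labor = {"Atlanta Focus Group": 15_000, "Chicago Focus Group": 15_000,
             "Sacramento Focus Group": 15_000, "Response Rate Pilot": 140_000}

total = sum(labor.values()) + sum(non_labor.values())
print(total)             # 427000: $242,000 labor + $185,000 non-labor
print(round(total / 3))  # 142333: approximately $142,333 per year over three years
```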

15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-I.


This is a new, one-time collection, accounting for a program change of 736 hours.



16. For collections of information whose results will be published, outline plans for tabulation and publication. Address any complex analytical techniques that will be used. Provide the time schedule for the entire project, including beginning and ending dates of the collection of information, completion of report, publication dates, and other actions.


The results of the survey pre-testing and pilot will be summarized in memoranda and a report, respectively, for purposes of internal review by the study team and NPS to inform subsequent phases of the study. Data tabulation will include response frequencies and measures of central tendency, as appropriate. Responses to attribute-based choice questions will be analyzed using standard discrete-choice modeling techniques (e.g., Louviere et al., 2000 and Holmes and Adamowicz, 2003, referenced above).
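As a rough illustration of the type of analysis contemplated, the sketch below fits a conditional logit model to synthetic choice data by maximum likelihood. All data, attribute names, and coefficient values are fabricated for exposition; this is not the study's estimation code, and the actual model specification may differ.

```python
# Conditional-logit sketch on synthetic data (illustrative only; not the
# study's estimation code). Hypothetical attributes: a haze-improvement
# measure and an annual household cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sets, n_alts = 500, 3                     # choice sets x alternatives per set
true_beta = np.array([0.8, -0.05])          # [haze improvement, annual cost]

X = np.stack([rng.uniform(0, 10, (n_sets, n_alts)),   # haze attribute (hypothetical units)
              rng.uniform(0, 60, (n_sets, n_alts))],  # cost attribute ($/year)
             axis=-1)                                 # shape: (n_sets, n_alts, 2)

# Simulate choices: systematic utility plus Gumbel error yields logit choices
y = (X @ true_beta + rng.gumbel(size=(n_sets, n_alts))).argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                                      # systematic utilities
    v = v - v.max(axis=1, keepdims=True)              # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_sets), y]).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("estimated coefficients:", fit.x)               # should be close to true_beta
print("implied value per unit of haze improvement:", -fit.x[0] / fit.x[1])
```

Under this specification, the negative ratio of the haze coefficient to the cost coefficient gives the marginal willingness to pay for a unit of visibility improvement, which is the type of economic value the main survey is designed to estimate.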


The estimated schedule for these two phases of the study is as follows:



Study Phase                               Fall 2007   Win 2008   Spr 2008   Sum 2008   Fall 2008   Win 2009
                                          (Oct-Dec)   (Jan-Mar)  (Apr-Jun)  (Jul-Sep)  (Oct-Dec)   (Jan-Mar)
Atlanta Focus Group                           X
Chicago Focus Group                                       X
Sacramento Focus Group                                    X
Pilot Survey Design                                                   X          X
In-Person / Mail Survey Implementation                                                    X
Data Analysis                                                                                         X

(X indicates the quarter(s) in which each study phase will occur.)


17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.


The expiration date for OMB approval will be displayed.


18. Explain each exception to the certification statement identified in Item 19, "Certification for Paperwork Reduction Act Submissions," of OMB Form 83-I.


There are no exceptions to the certification statement.

