Willingness to Pay for Improved Water Quality in the Chesapeake Bay (Revised)

OMB: 2010-0043






Supporting Statement for Information Collection Request for

Willingness to Pay Survey for Chesapeake Bay Total Maximum Daily Loads: Instrument, Pre-test, and Implementation












TABLE OF CONTENTS


PART A OF THE SUPPORTING STATEMENT

1. Identification of the Information Collection

1(a) Title of the Information Collection

1(b) Short Characterization (Abstract)

2. Need For and Use of the Collection

2(a) Need/Authority for the Collection

2(b) Practical Utility/Users of the Data

3. Non-duplication, Consultations, and Other Collection Criteria

3(a) Non-duplication

3(b) Public Notice Required Prior to ICR Submission to OMB

3(c) Consultations

3(d) Effects of Less Frequent Collection

3(e) General Guidelines

3(f) Confidentiality

3(g) Sensitive Questions

4. The Respondents and the Information Requested

4(a) Respondents

4(b) Information Requested

5. The Information Collected - Agency Activities, Collection Methodology, and Information Management

5(a) Agency Activities

5(b) Collection Methodology and Information Management

5(c) Small Entity Flexibility

5(d) Collection Schedule

6. Estimating Respondent Burden and Cost of Collection

6(a) Estimating Respondent Burden

6(b) Estimating Respondent Costs

6(c) Estimating Agency Burden and Costs

6(d) Respondent Universe and Total Burden Costs

6(e) Bottom Line Burden Hours and Costs

6(f) Reasons for Change in Burden

6(g) Burden Statement


PART B OF THE SUPPORTING STATEMENT

1. Survey Objectives, Key Variables, and Other Preliminaries

1(a) Survey Objectives

1(b) Key Variables

1(c) Statistical Approach

1(d) Feasibility

2. Survey Design

2(a) Target Population and Coverage

2(b) Sampling Design

2(c) Precision Requirements

2(d) Questionnaire Design

3. Pretests

4. Collection Methods and Follow-up

4(a) Collection Methods

4(b) Survey Response and Follow-up

5. Analyzing and Reporting Survey Results

5(a) Data Preparation

5(b) Analysis

5(c) Reporting Results

REFERENCES



List of Attachments

Attachment 1 – Declining baseline version of survey with 2025 reference year

Attachment 2 – Constant baseline version of survey with 2025 reference year

Attachment 3 – Improving baseline version of survey with 2025 reference year

Attachment 4 – Declining baseline version of survey with 2040 reference year

Attachment 5 – Constant baseline version of survey with 2040 reference year

Attachment 6 – Improving baseline version of survey with 2040 reference year

Attachment 7 – Federal Register Notices

Attachment 8 – Preview letter to mail survey recipients

Attachment 9 – Cover letter to mail survey recipients

Attachment 10 – Postcard reminder to mail survey recipients

Attachment 11 – Cover letter to recipients of the second survey mailing

Attachment 12 – Reminder letter to recipients of the second survey mailing

Attachment 13 – Cover letter to recipients of the non-response questionnaire

Attachment 14 – Non-response bias study questionnaire

Attachment 15 – Description of statistical survey design

Attachment 16 – Responses to public comments

Attachment 17 – Description of models used to choose attribute levels

Attachment 18 – Expert consultation on transition time for environmental attributes


PART A OF THE SUPPORTING STATEMENT


1. Identification of the Information Collection

1(a) Title of the Information Collection


Willingness to Pay Survey for Chesapeake Bay Total Maximum Daily Loads: Instrument, Pre-test, and Implementation


1(b) Short Characterization (Abstract)


The Clean Water Act (CWA) directs EPA to coordinate Federal and State efforts to improve water quality in the Chesapeake Bay. In 2009, Executive Order (E.O.) 13508 reemphasized this mandate, directing EPA to define the next generation of tools and actions to restore water quality in the Bay and describe the changes to be made to regulations, programs, and policies to implement these actions. In response, EPA is undertaking an assessment of the costs and benefits of meeting established pollution budgets, called Total Maximum Daily Loads (TMDLs), of nitrogen, phosphorus, and sediment for the Chesapeake Bay. The Chesapeake Bay Watershed encompasses 64,000 square miles in parts of six states and the District of Columbia. While efforts have been underway to restore the Bay for more than 25 years, and significant progress has been made over that period, the TMDLs are necessary to continue progress toward the goal of a healthy Bay. The watershed states of New York, Pennsylvania, Delaware, West Virginia, Virginia, and Maryland, as well as the District of Columbia, have developed Watershed Implementation Plans (WIPs) detailing the steps each will take to meet its obligations under the TMDLs.

EPA has begun a new study to estimate the costs of compliance with the TMDLs and the corresponding benefits. As an input to the TMDLs benefits study, EPA’s National Center for Environmental Economics (NCEE) is seeking approval to conduct a stated preference survey to collect data on households’ use of the Chesapeake Bay and its watershed, preferences for a variety of water quality improvements likely to follow from pollution reduction programs, and demographic information. If approved, the survey would be administered by mail, in two phases, to a total of 20,280 residents living in the Chesapeake Bay states, the Chesapeake Bay Watershed, and other east coast states.


NCEE will use the survey responses to estimate willingness to pay for changes related to reductions in nitrogen, phosphorus, and sediment loadings to the Bay and lakes in the Chesapeake Bay Watershed. The analysis relies on state-of-the-art theoretical and statistical tools for non-market welfare analysis. A non-response bias study will also be administered to inform the interpretation and validation of survey responses.

The total national burden estimate for all components of the survey is 1,915 hours. The burden estimate is based on 900 responses to 3,000 pretest surveys, 5,184 responses to 17,280 main surveys, and 1,080 responses to the combined non-response surveys for the pre-test and main survey. Assuming 18 minutes to complete the mail survey and 5 minutes for the non-response survey, the total respondent cost comes to $44,674 for the pre-test and main survey combined, using an average wage rate of $23.33 (United States Department of Labor, 2012).
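To make the burden arithmetic transparent, the calculation can be reconstructed as follows. This is an illustrative sketch using only the figures stated above, not part of the ICR itself; small differences from the reported cost reflect rounding at intermediate steps.

```python
# Reconstruction of the respondent burden and cost estimate from the figures
# stated in this abstract. Illustrative only.
PRETEST_RESPONSES = 900        # of 3,000 pretest surveys
MAIN_RESPONSES = 5_184         # of 17,280 main surveys
NONRESPONSE_RESPONSES = 1_080  # combined non-response follow-up, both phases

SURVEY_MINUTES = 18            # time to complete the mail survey
NONRESPONSE_MINUTES = 5        # time to complete the non-response questionnaire
WAGE = 23.33                   # average hourly wage (U.S. Dept. of Labor, 2012)

survey_hours = (PRETEST_RESPONSES + MAIN_RESPONSES) * SURVEY_MINUTES / 60
nonresponse_hours = NONRESPONSE_RESPONSES * NONRESPONSE_MINUTES / 60
total_hours = survey_hours + nonresponse_hours  # 1,825.2 + 90 = 1,915.2 hours

total_cost = total_hours * WAGE  # ~$44,680, vs. the $44,674 reported above

print(f"Burden: {total_hours:,.0f} hours; cost: ${total_cost:,.0f}")
```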


2. Need For and Use of the Collection

2(a) Need/Authority for the Collection


Benefits from meeting the TMDLs for the Chesapeake Bay will accrue to those who live near the Bay or visit for recreation, those who live near or visit lakes in the watershed, and those who live farther away and may never visit the Bay but have a general concern for the environment. While benefits from the first two categories can be measured using hedonic property value methods, recreational demand methods, and other revealed preference approaches, only stated preference methods can capture non-use benefits (i.e., benefits for those who do not use the resource).

The findings from this study will be used by EPA to estimate the total value of benefits of the nutrient and sediment TMDLs designed to meet the requirements of Executive Order 13508. Specifically, the survey will be used to estimate the public’s willingness to pay for changes in environmental attributes of the Chesapeake Bay and lakes inside the watershed. A suite of hydrological and ecological models will be used to predict how these attributes would change over time under the TMDL and baseline conditions. Model predictions and valuation survey data will be combined to estimate the total economic benefit of the TMDL. Understanding total public values for ecosystem resources, including the more difficult to estimate non-use values, is necessary to determine the full range of benefits associated with reductions in nutrient and sediment loading. Because non-use values may be substantial, failure to estimate such values may lead to improper inferences regarding benefits and costs.

States and their congressional representatives have expressed a desire to know how practices that reduce nutrients and sediment will benefit their constituents (see, for example, page 55 of US Congress 2011). There are few stated preference studies specifically on the Chesapeake Bay, and no studies specifically addressing the environmental improvements predicted under the TMDLs. This study will provide policy makers with information on how much the public would benefit relative to the cost of the programs.

The project is being undertaken pursuant to section 104 of the Clean Water Act dealing with research. Section 104 authorizes and directs the EPA Administrator to conduct research into a number of subject areas related to water quality, water pollution, and water pollution prevention and abatement. This section also authorizes the EPA Administrator to conduct research into methods of analyzing the costs and benefits of programs carried out under the Clean Water Act.


2(b) Practical Utility/Users of the Data

EPA plans to use the results of the stated preference survey to estimate the willingness to pay for improvements that can be used to assess the net welfare impacts of the Chesapeake Bay TMDLs. Specifically, the Agency will use the survey results to estimate values for improvements in the Bay and for reduced algae in watershed lakes under measures taken to meet the TMDLs. In conjunction with associated estimates of the costs of these measures, EPA plans to use the willingness to pay estimates from the survey in a benefit-cost analysis to calculate expected net benefits of the TMDLs. Analysis of the stated preference survey results, detailed in Part B of this Supporting Statement, will follow standard practices outlined in the literature (Freeman 2003; Bennett and Blamey 2001; Louviere et al. 2000). Any subsequent benefit-cost analysis used to assess net welfare impacts will follow standard practice and guidance (e.g., U.S. EPA 2010).

The results of the study will be made available to state and local governments which they may use to better understand the preferences of households in their jurisdictions and the benefits they can expect as a result of meeting the TMDLs. Finally, stakeholders and the general public will be able to use this information to better understand the social benefits of improving water quality in the Chesapeake Bay in comparison to the cost information also being developed by EPA.


3. Non-duplication, Consultations, and Other Collection Criteria

3(a) Non-duplication


There are many studies in the environmental economics literature that quantify benefits or willingness to pay (WTP) associated with various types of water quality and aquatic ecosystem changes. The Chesapeake Bay, as an iconic resource and the subject of a long history of restoration efforts, is relatively well studied. However, no study or set of studies provides a comprehensive estimate of non-use values or values associated with specific improvements likely to result from the TMDLs. Further, existing studies do not provide a sufficient basis for using benefits transfer to estimate the value of these improvements. The proposed survey is designed to fill this gap.

The most recent review of valuation studies relevant to the Chesapeake Bay TMDLs is by Cropper and Isaac (2011), who identify studies that estimate WTP for use and non-use values associated with the Bay. Cropper and Isaac also consider how studies not associated with the Bay may be used in a benefits transfer exercise to value improvements resulting from the TMDLs. While these studies all provide insights into particular categories of benefits associated with the Bay, none provides a comprehensive estimate of non-use values or values associated with specific improvements likely to result from the TMDLs.

Three of the studies reviewed in Cropper and Isaac examine the residential amenity value of water quality (Leggett and Bockstael, 2000; Poor et al., 2007; Van Houtven, 2009). Leggett and Bockstael (2000) estimate the impact of fecal coliform counts on the prices of waterfront properties, finding significant effects. Poor et al. (2007) study the effect of dissolved inorganic nitrogen and total suspended solids on property values in St. Mary’s County, MD and also find significant effects. These studies, however, reflect the benefits to homeowners on or near the Bay and not benefits for the general population.

Recreational benefits in the Chesapeake Bay have been relatively well-studied, and Cropper and Isaac identify six original studies that estimate the recreational activity value of improved water quality (Bockstael et al., 1988, 1989; Lipton and Hicks, 1999; Hicks and Strand, 2000; Lipton and Hicks, 2003; Lipton, 2004; Massey et al., 2006). Cropper and Isaac also identify three studies that use benefit transfer to estimate recreation benefits (Krupnick, 1988; Morgan and Owens, 2001; Van Houtven, 2009). These nine studies focus on the benefits from recreational fishing, swimming or beach visits, and boating. The limitations of these studies for estimating the benefits of the TMDLs are described in Cropper and Isaac. Key limitations include the coverage of only two species (striped bass and flounder) in the recreational fishing studies and the choice of water quality measures in the swimming and boating studies. These limitations make it difficult to use the studies for benefit transfer. More importantly for this ICR, these studies, even taken together, address only a small subset of the kinds of benefits expected from the TMDLs.

Commercial fishing benefits from improved water quality are the subject of three studies identified in Cropper and Isaac (Kahn and Kemp, 1985; Anderson, 1989; Mistiaen et al., 2003). While commercial benefits from the TMDLs are likely to be important, they are distinct from the other types of benefits the proposed stated preference study will estimate.

There is a large gap in the literature on the non-use benefits of improved water quality in the Chesapeake Bay. Cropper and Isaac (2011) identified only two original research studies that estimate non-use benefits of water quality improvements in the Chesapeake Bay. The first study, Bockstael et al. (1988, 1989), estimates willingness to pay to make the Bay “swimmable” among respondents who considered it unacceptable for swimming. There is no clear way to link “swimmability” to the improvements expected from the TMDLs, making it difficult to use this study for valuation. Further, the study sample was limited to the Washington-Baltimore area. A second study, Lipton et al. (2004), estimated the willingness to pay of non-users for restoring oyster reefs in the Chesapeake Bay. While oyster reefs are likely to benefit from the TMDLs, they are only a small part of the expected environmental improvements. The sample in this study was broader, including most mid-Atlantic states, but did not include Pennsylvania and was not a random sample.

An alternative to directly estimating Chesapeake Bay non-use benefits is to use existing studies that are similar but focus on other locations. Van Houtven (2009) suggests this approach, based on a meta-analysis of stated preference studies that value changes in a water quality index. This approach can produce value estimates for improvements in the index, and it may be feasible to map changes resulting from the TMDLs into a comparable index. However, as noted by Cropper and Isaac (2011), the water bodies used in the meta-analysis differ greatly from the Chesapeake Bay, and the values reported in Van Houtven (2009) are substantially different from those in Bockstael et al. (1988), raising questions about the accuracy of a benefits transfer exercise. The limitations of existing Bay-specific studies, and the inability to effectively apply benefits transfer, suggest that an original study is needed to estimate non-use benefits from the TMDLs.

One additional group of researchers, Hicks et al. (2008), used a choice experiment to survey knowledgeable stakeholders (e.g., recreational anglers, charter boat operators) with close connection to the Chesapeake Bay and its restoration about their preferences for a hypothetical Bay restoration package in the context of a prior Chesapeake Bay restoration effort, “Chesapeake 2000.” They account for a variety of outcomes related to reduced sediment and nutrient loads: number of seafood consumption advisories, number of beach closures, oyster biomass, blue crab biomass, shad population, and acres of wetlands. While perhaps the most closely related among available studies, the choice experiment setup as well as the type and scale of the surveyed population prevent its direct use for analysis of benefits of the TMDLs. Specifically, cost was not included as an attribute in the choice experiment, preventing EPA from estimating WTP for restoration activities based on these results. Furthermore, only knowledgeable stakeholders were surveyed; preferences for water quality improvements achieved under the Chesapeake Bay TMDLs among this group may substantially differ from those of the general and/or non-user population.

The proposed stated preference study addresses one additional gap: benefits throughout the Chesapeake Bay Watershed that result from TMDLs implementation. Based purely on proximity, residents farther out in the watershed are less likely to have direct use value for in-the-Bay benefits of TMDLs implementation, yet may face substantial implementation costs, particularly in the upper portions of the watershed. Best management practices (BMPs) designed to reduce nutrient and sediment loadings to the Bay will also improve water quality in thousands of lakes and reservoirs in the watershed; since many watershed residents have relatively more direct access to these amenities, not counting these benefits is likely to substantially underestimate total benefits of the TMDLs. These effects are not well covered in the existing economics literature. For example, Cropper and Isaac (2011) identified only one appropriate study (von Haefen, 2003) related to ancillary benefits from BMP implementation.

To conclude, while specific aspects of uses of the Bay have been well studied, an original stated preference study is needed to properly estimate benefits of the Chesapeake Bay TMDLs for the following reasons: (1) valuation endpoints used in the existing studies tend to be incompatible with endpoints used to measure water quality improvement under the Chesapeake Bay TMDLs (e.g., water clarity); (2) the data necessary to transfer benefits estimates from existing studies to the Chesapeake Bay are lacking, particularly regarding use and non-use data for watershed ecosystems like lakes; (3) the overall population likely to benefit from actions taken under the Chesapeake Bay TMDLs is broader than the populations sampled in prior studies, which would likely result in incorrect value estimates; and (4) although studies separating use and non-use components of total value estimate that non-use value is substantial, available studies rely on discrete endpoints (e.g., “swimmability” in the Bay itself, oyster reef acreage) that do not reflect the full range of TMDL-related Bay and watershed improvements (e.g., ancillary improvements in watershed lakes, Bay water clarity, and Bay populations of fish and shellfish). While these concerns might be ameliorated by using results from environmental economics studies based in other estuaries of the United States, EPA believes that studies based in other estuaries are unlikely to accurately or completely capture willingness to pay for TMDL-related improvements in the Chesapeake Bay Watershed given the unique character of this water resource and the goods and services it provides. Cropper and Isaac reach a similar conclusion, writing, “we strongly suggest that a new stated preference study be conducted to elicit willingness to pay for water quality improvements more closely linked to the TMDLs” (Cropper and Isaac, 2011, p. 20).


3(b) Public Notice Required Prior to ICR Submission to OMB


First Round of Public Comment:

In accordance with the Paperwork Reduction Act (44 U.S.C. 3501 et seq.), EPA published a notice in the Federal Register on May 24, 2012, announcing EPA’s intent to submit this application for a new Information Collection Request (ICR) to the Office of Management and Budget (OMB), and soliciting comments on aspects of the information collection request. A copy of the Federal Register notice (77 FR 31006) is attached at the end of this document (See Attachment 7). Because certain supporting documents were not available in the docket for public review during the first 30 days of the comment period, EPA re-opened the comment period for an additional 30 days beginning on July 26 (77 FR 43822; Attachment 7). Also see docket # EPA-HQ-OA-2012-0033.

Three sets of substantive comments were received in response to the Federal Register Notice: one each from the Utility Water Act Group (UWAG), Food and Water Watch (FWW), and a coalition of 18 organizations representing various business, construction, manufacturing, housing, agriculture, forestry, and energy interests (C18). These organizations provided many useful comments that have since been incorporated into the survey design. Complete comments and responses are in Attachment 16. Brief summaries of responses to the major comments are provided below:


Stated preference is not a generally accepted valuation approach (UWAG, C18)


Response – Stated preference studies have been conducted by leading academic scholars at Harvard University, University of North Carolina, University of Maryland, Cornell University, University of California, Berkeley, UCLA, and other top research universities across the country and around the world. In addition, studies have been published in premier economic journals (see, for example, Cummings and Taylor 1999; Evans, Poulos and Smith 2011; Johnston 2006; Layton and Brown 2000; Parsons and Thur 2008; Viscusi, Huber, and Bell 2008). Stated preference studies are used by a wide range of federal agencies and countries, including the U.S., Canada, the UK, and Australia, to assess the benefits of regulations and federal activities (for U.S. examples see NOAA 2002; U.S. EPA 2008, 2009; U.S. Bureau of Reclamation 2012).1 The use of stated preference studies is consistent with EPA’s peer-reviewed Guidelines for Preparing Economic Analyses (U.S. EPA 2010) and OMB’s guidance on Regulatory Impact Analysis (Circular A-4 2003). While stated preference has its critics, EPA believes stated preference methods are broadly accepted both within academic circles and for public policy analysis (see a recent issue of the Journal of Economic Perspectives for three papers on the topic: Carson 2012, Hausman 2012, and Kling et al. 2012).


Stated preference surveys are subject to biases

Hypothetical bias – Refers to the fact that stated preference surveys pose hypothetical situations, and people are not actually asked to make the payments indicated in the survey. (UWAG, FWW, C18)


Response – EPA recognizes that hypothetical bias is a potential concern in stated preference surveys and takes this concern seriously. In general, stated preference methods have “been tested and validated through years of research and are widely accepted by federal, state, and local government agencies and the U.S. courts as reliable techniques for estimating nonmarket values” (Bergstrom and Ready 2009, p. 26). A recent meta-analysis of the stated preference literature also concludes that hypothetical bias may not always be a significant concern (Murphy et al. 2005).

To reduce the potential for hypothetical bias in this survey, EPA has consulted with experts and drawn from the peer-reviewed literature to address it in the survey design. For example, the survey explicitly incorporates elements that mitigate hypothetical bias, such as reminders about budget constraints (akin to the cheap talk language in Cummings and Taylor 1999; List 2001). These design features have been shown to minimize hypothetical bias in experimental settings. The text used in this survey has undergone thorough testing with participants in focus groups and cognitive interviews. EPA believes that the steps taken during survey development and testing have largely mitigated the potential for hypothetical bias. See Section 2d of Part B of this ICR for more information on how we address hypothetical bias.

Non-response bias – Refers to situations in which people who choose to complete and return the survey are systematically different than those who do not. This could lead to an unrepresentative sample of respondents and bias the willingness to pay estimates upward. (UWAG, C18)


Response – EPA recognizes the potential for non-response bias and the impacts it could have on the data analysis. First, EPA is taking steps to obtain the highest possible response rate, thereby mitigating non-response bias. Specifically, EPA is using focus group-tested design choices to encourage participation. EPA is also following the Dillman tailored design method (Dillman 2008) for mail surveys, which includes an introduction letter preceding the survey, a reminder postcard, a second mailing of the survey, and a reminder letter following the second mailing.


EPA will also administer a non-response bias study (Attachment 14) in both the pre-test and full survey in order to examine whether or not respondents are systematically different from non-respondents (see OMB 2006). In the non-response bias study, households that do not return the survey will be randomly sampled to receive a short questionnaire by first class mail imprinted with a stamp requesting the recipient to “Please Respond Within 2 weeks”. The questionnaire will elicit basic demographic information as well as a few short questions regarding awareness and the reasons they did not complete the survey. Responses to these questions will be used to examine whether respondents are systematically different from non-respondents. See Section 2c of Part B of this Information Collection Request for a description of the non-response bias study.
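As an illustration of how such comparisons might be carried out, a standard contingency-table test can flag systematic differences between respondents and non-respondents. The test choice, demographic categories, and counts below are our assumptions for the sketch, not the Agency's analysis plan.

```python
# Hypothetical sketch of a respondent vs. non-respondent comparison on one
# demographic characteristic; categories and counts are placeholders only.
from scipy.stats import chi2_contingency

income_brackets = ["<25k", "25-50k", "50-100k", ">100k"]
respondent_counts = [120, 310, 405, 180]    # illustrative only
nonrespondent_counts = [95, 150, 130, 60]   # illustrative only

# 2 x 4 contingency table: rows are groups, columns are income brackets
chi2, p, dof, expected = chi2_contingency([respondent_counts, nonrespondent_counts])

# A small p-value suggests respondents differ systematically from
# non-respondents on this characteristic, signaling potential bias.
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```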


Yea-saying – Yea-saying refers to the tendency of respondents to agree with questions regardless of content. (UWAG)


Response – Survey and study design choices can mitigate yea-saying. The use of mail surveys rather than face-to-face interviews has been shown to decrease the social pressure that may influence a respondent to provide a response deemed desirable (Blamey and Bennett 2001, Dillman 2008). This survey also employs a conjoint choice framework, in which respondents must consider the trade-offs between a status quo and two policy options. Respondents are asked to make a discrete choice among three unranked options rather than stating a simple “yes” or “no.” These options vary in terms of the levels of five environmental attributes (plus cost). Yea-saying and strategic responses have been shown to be less prevalent in this choice experiment framework (Blamey and Bennett 2001, Collins and Vossler 2009).


In addition, EPA includes debriefing questions at the end of the survey to identify respondents who might believe that protecting the environment is important no matter the cost. The addition of debriefing questions is a suggested practice and regarded as an excellent opportunity to bring clarity to the respondents' choices (Krupnick and Adamowicz 2007). Sensitivity analysis of these debriefing questions will be used to determine if and how yea-saying may have influenced responses. See Section 5b of Part B of this ICR for more information on how we address yea-saying.


It is too difficult to quantify and measure complex environmental commodities (UWAG, FWW, C18)

Response – EPA agrees that it is challenging to measure complex environmental commodities. Standard survey design and cognitive interview practices were followed in developing the survey. EPA conducted 10 focus groups and 72 cognitive interviews with individuals within and outside the Chesapeake Bay Watershed in order to test their level of understanding of the materials included in the survey (OMB Control Number 2090-0028). We used standard cognitive interview techniques to identify the most salient environmental commodities that will be affected by the TMDLs. Limiting the survey to those policy outcomes (i.e., water clarity, striped bass, oysters, blue crabs, and lake water quality) is conservative, but it allows confidence in the benefits the survey does capture.


Baseline scenarios are inaccurate (UWAG, C18)


Response: The estimates of the environmental conditions under the baseline and program scenarios are based on scientific models developed by EPA, NOAA, and others, as cited in the survey. We recognize, however, the points that are raised by the reviewers and have made several modifications to the survey design. First, we label the programs as Option A, Option B, and Option C, where Option A represents the predicted conditions in 2025 without new programs, while Options B and C represent new programs with additional costs. Second, in the experimental design respondents are randomly assigned to one of three different projections for Option A (i.e., a status quo scenario): a declining baseline, an improving baseline, or a constant baseline. As described in Section 5b of Part B of this Information Collection Request, these different baseline scenarios capture the potential range of possible future conditions, absent new programs.


EPA wishes to emphasize that the responses to the conjoint choice experiments, and hence the baseline and policy scenarios themselves, do not directly estimate the benefits of the TMDLs. Rather, the responses to the hypothetical choice options and baselines will be used to estimate a “benefits function” or curve. This benefits function reflects the benefits from generic improvements in the environmental and water quality attributes specified. The benefits function will then be used to estimate the incremental benefits of the TMDLs relative to the most accurate baseline as predicted by the EPA and NOAA models. An advantage of this benefits function approach is that the total benefits can be updated if the management practices chosen by the states to meet the TMDLs change in the future.
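To make the benefits function approach concrete, the following is a minimal sketch in standard random utility notation; the notation is ours and does not appear in the survey documents. Respondent i's utility from choice option j can be written

U_{ij} = \beta' x_{ij} - \lambda c_{ij} + \varepsilon_{ij},

where x_{ij} is the vector of environmental attribute levels shown for option j, c_{ij} is the option's cost, and \varepsilon_{ij} is a random error. The conjoint responses identify \beta and \lambda, and willingness to pay for a change from baseline attribute levels x^0 to policy levels x^1 is

WTP = \beta' (x^1 - x^0) / \lambda.

Because x^0 and x^1 can be set to any levels spanned by the experimental design, the same parameter estimates can be used to value the model-predicted TMDL scenario or any updated scenario, which is the updating property described above.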


Impact of program on water quality is unknown (UWAG, C18)


Response: The choice experiment design in the stated preference survey allows EPA to estimate benefits from a range of policy outcomes. This will allow EPA to update the benefit estimates as the order of implementation of the management practices becomes known.


Surveys include improvement to lakes that are not in the Chesapeake Bay Watershed (C18)


Response: Thank you for raising this point. EPA never intended to include lakes outside the Chesapeake Bay Watershed in the benefit estimate. We have made several changes to the survey instrument to make clear that only lakes in the watershed should be considered. First, we have enhanced the map at the beginning of the survey to identify major cities within and outside the watershed and added the Finger Lakes to the map (which are now clearly marked as being outside the watershed). This helps orient respondents who are considering whether or not they “use” (e.g., engage in recreation activities on) lakes in the watershed. Second, we clearly describe the watershed as including lakes, and state that water bodies outside of the watershed will not be affected by the programs. One follow-up question is specifically aimed at testing respondents’ understanding of this issue.


Respondents may not know if they live in the Chesapeake Bay Watershed (C18)


Response: In addition to providing an enhanced map of the watershed, EPA will identify respondents who reside within or outside of the watershed and explicitly inform them of this in the introductory cover letters. See Attachments 8 and 9 for examples of the cover letters.


Adding benefits from the Stated Preference study to those from Revealed Preference studies will double-count some benefits (C18)


Response: EPA agrees and does not intend to add the total monetized benefit results from this study with results from other studies, such as those that use revealed preference methods. The results from this study can be used to isolate values for non-users or used alone as a measure of total monetized benefits.


Link between survey attributes and TMDLs is not established (UWAG, C18)


Response: In response to peer review comments from academic experts in stated preference methods, EPA is now only modeling willingness to pay for improvements in Bay water clarity, striped bass, blue crab, and oyster populations, and the quality of lakes in the watershed. This was previously referred to as the “endpoint” version of the survey. These attributes were chosen, based on extensive focus groups and interviews, as the environmental features that are most salient to the general public. Furthermore, EPA and NOAA models predict that these features will be affected by the TMDLs. The stated preference survey outlined in this ICR does not estimate the benefits of the TMDLs directly; rather, this survey is designed to value generic status quo and policy options that result in changes in the environmental attributes. As part of the experimental design, respondents are presented with hypothetical changes in these attributes and cost. In other words, the hypothetical levels associated with each of the attributes and costs in the survey vary across respondents. This allows us to identify the parameters and estimate a range of values associated with different scenarios. The parameters estimated from respondents’ choices in these hypothetical scenarios will then be used to estimate the benefits of the TMDLs incremental to the baseline.


The costs of the options on the survey do not reflect the true costs of the TMDL (UWAG)


Response: The variation in costs across programs is not intended to reflect the costs of the TMDL, but rather the likely range of values respondents hold for the options, as found in extensive focus groups and interviews. The parameters estimated from respondents’ choices to these hypothetical scenarios will then be used to estimate the benefits of the TMDL incremental to the baseline.


A complementary study of the costs of the TMDL is being conducted by EPA’s Chesapeake Bay Program Office and will be issued by EPA after a peer-review is complete.



Second Round of Public Comment:

Upon submitting the revised Information Collection Request to OMB for review and approval, EPA initiated a second round of public review with the publication of Federal Register Notice 78 FR 9045 (see Attachment 7). Public comments were invited for a 30-day period, commencing on February 7, 2013 and closing on March 11, 2013. Two sets of comments were received, one from UWAG and another from the coalition of organizations, augmented to include an additional five groups (C23). Complete comments and responses are in Attachment 16. Comments that resulted in changes or additions to the ICR supporting documentation are summarized below:


EPA must include in the docket for this ICR all models that it relies upon to support statements in the surveys.


Response: We have added Attachment 17 to the docket that describes how attributes in the choice questions were modeled and includes documentation for all models used to predict attribute levels under baseline and policy conditions.


EPA must include the peer review reports and documentation of the focus groups that are referenced in the supporting statement.


Response: The peer review reports as well as focus group and interview reports have been posted to docket number EPA-HQ-OA-2012-0033-0018 on Regulations.gov.

Full water quality benefits from the TMDL will not be achieved by 2025 as indicated in the survey and may take decades to be fully realized. (C23)


Response: EPA is aware that some management practices specified in the Watershed Implementation Plans will not reach their full effectiveness for many years after implementation, and EPA will be explicit about those time lags in the benefit analysis. How to address such time lags is an important and often-encountered challenge in stated preference study design and an active area of research.


It is generally accepted practice in the stated preference literature to provide stylized information on the timing of the benefits, estimate WTP for a certain outcome, and then perform ex-post discounting and sensitivity analysis to account for longer time lags and uncertainty in the environmental outcomes (e.g., Alberini et al. 2004, Banzhaf et al. 2006, Cameron and DeShazo 2013).  In part, this reflects a choice to reduce outcome uncertainty that will be implicit, but not separately observable, in survey responses.  Uncertainty in outcomes and differences in timing can then be reflected explicitly in the application of the results. 


Still, there are reasons to favor describing a longer time frame for the realization of benefits associated with policy actions in the survey instrument for this case. First, using a shorter time frame requires strong assumptions regarding respondents’ discount rates and their perception of the transition of the survey attributes to long-term levels. In addition, using a shorter time frame for environmental improvements would change aspects of the policy that may be welfare relevant and could therefore affect willingness to pay.


In light of these factors, and to ensure the most rigorous analysis possible, EPA will employ a split sample design. Consistent with TMDL requirements, all surveys will make clear that practices are put in place by 2025, but the year for which improvements are characterized, the “reference year,” will vary. Half of the sample will receive the original version of the survey, in which 2025 is the reference year for the attribute levels. The other half of the sample will receive a survey that uses 2040 as the reference year. The 2025 and 2040 reference years for the attribute changes were chosen based on communication with several experts who have been consulting on the larger Chesapeake Bay TMDL benefit-cost analysis. The question posed to the experts regarding the time to transition to long-term levels and their responses are included in Attachment 18. EPA will discount WTP estimates from the 2025 version of the survey to make them comparable to 2040 estimates and provide a range generated by two valid but different approaches to stated preference study design.
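For illustration, one common form of the ex-post adjustment described above is

WTP_{2040-equivalent} = WTP_{2025} / (1 + r)^{15},

where r is the annual discount rate and 15 is the number of years separating the two reference years. The functional form and any particular value of r are our assumptions for this sketch, not the Agency's chosen method; evaluating the expression at, for example, the 3 and 7 percent rates used for sensitivity analysis under OMB Circular A-4 would bound the adjustment and the resulting range of estimates.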



Third Round of Public Comment

EPA opted for a third round of public comment upon making available for public review additional documents underlying the development of this information collection request, including the Peer Review Report, the Focus Group and Cognitive Interview Report, and the Description of Hydrological, Biochemical, and Ecosystem Models (Attachment 17 of the revised Supporting Statement). EPA recognized that these documents may provide useful information to interested parties regarding the development and design of the survey instruments proposed for this project. Public comments were invited for a 30-day period beginning on June 27, 2013. The comment period closed on July 29, 2013. Three sets of public comments were received, one each from UWAG, the coalition of industry groups (now C20), and the Natural Resources Defense Council. One comment, described below, resulted in a minor change to the survey instrument and significant revisions to supporting documentation, namely Attachment 17 describing the supporting models. Complete comments and responses are detailed in Attachment 16.


Predictions for the year 2025 [or 2040] are not based on “monitoring data from the Chesapeake Bay Watershed and Estuary Models” as indicated in a footnote on the survey instrument. Neither of these models predicts the changes in striped bass, blue crab and oyster populations necessary for the surveys.


Response: The referenced Chesapeake Bay Fisheries Ecosystem Model uses output from the Chesapeake Bay Watershed Models to project a range of attribute levels for striped bass, blue crabs, and oysters. The footnote on the survey has been revised accordingly. In addition, Attachment 17 has been revised to clarify how the various models inform the range of attribute levels that will appear in the choice experiment questions.

3(c) Consultations


Consultations with Scholars: Prior to commencing the survey design phase of this project, EPA co-sponsored a workshop on October 31 and November 1, 2011 at Resources for the Future on the costs and benefits of protecting and restoring the Chesapeake Bay. The main purpose of the workshop was to gather scholars who are working on estimating the costs and benefits of water quality improvements in the Bay to exchange ideas. The agenda included a wide range of presentations on the costs and benefits of improving water quality in the Bay and its watershed, the possibility of ecosystem service payments and trading to ameliorate some of the costs, ancillary benefits that may arise, water quality and ecological modeling for the Bay, and policy options for approaching protection and restoration. Although the workshop topics were clearly broader than stated preference techniques, the agenda did include presentations by EPA on the estimation of benefits using stated preference methods. Participants included academics from major universities within the Chesapeake Bay Watershed (MD, VA, DE, WV, PA); representatives from several NGOs including the Chesapeake Bay Foundation and the Chesapeake Bay Trust; as well as participants from several government agencies (NOAA, DOI, and USDA).

On September 11, 2012, EPA participated in a quarterly meeting of the Chesapeake Bay Program’s (CBP) Scientific and Technical Advisory Committee (STAC). As described on its website,2 STAC provides scientific and technical guidance to the Chesapeake Bay Program on measures to restore and protect the Chesapeake Bay and serves as a liaison between the region's scientific community and the CBP. Through professional and academic contacts and organizational networks of its members, STAC ensures close cooperation among and between the various research institutions and management agencies represented in the Chesapeake Bay Watershed. EPA’s presentation of the progress made to date on the development of the stated preference survey and the goals of the project were well received.

On January 22, 2013 EPA sponsored a meeting of seven experts to discuss the Chesapeake Bay Fisheries Ecosystem Model (CBFEM).  The experts on the panel were Walter Boynton, Professor, Chesapeake Biological Laboratory, University of Maryland Center for Environmental Science; Denise Breitburg, Adjunct Professor, Smithsonian Environmental Research Center, Department of Biology, University of Maryland; Kim de Mutsert, Assistant Professor, Department of Environmental Science and Policy, George Mason University; Robert Diaz, Faculty Emeritus, Virginia Institute of Marine Sciences, The College of William & Mary; Edward Houde, Professor, Chesapeake Biological Laboratory, University of Maryland Center for Environmental Science; Michael Kemp, Professor, Horn Point Laboratory, University of Maryland Center for Environmental Science; Elizabeth North, Associate Professor, Horn Point Laboratory, University of Maryland Center for Environmental Science.

On April 30, 2013 EPA solicited input from a panel of experts on the length of time it would take the attribute levels referenced in the survey to reach long-term levels as a result of the nutrient and sediment reductions required to meet the TMDL. The email correspondence is documented in Attachment 18.


Consultations with Respondents: As part of the planning and design process for this collection, EPA conducted a series of 10 focus groups and 72 cognitive interviews. Eight of the ten focus groups were held at venues inside the Chesapeake Bay Watershed, in locations close to the Bay itself and farther upstream. Two other focus groups were conducted in North Carolina, a state that is entirely outside of the Chesapeake Bay Watershed. Of the cognitive interviews, 57 were conducted at locations in the watershed, and 15 were conducted in parts of Pennsylvania outside the Chesapeake Bay Watershed. While early focus group sessions were used to narrow the list of attributes to be highlighted in the survey and the kinds of information respondents would need to answer the questions, later sessions and cognitive interviews were employed to test the draft survey materials. These consultations with potential respondents were critical in identifying sections of the questionnaire that were redundant or unclear and in producing a survey instrument meaningful to respondents. The later focus group sessions and the cognitive interviews were also helpful in estimating the expected amount of time respondents would need to complete the survey instrument. While completion times varied, most participants required approximately 15 to 20 minutes to complete the final draft surveys. The focus group sessions and cognitive interviews were conducted under OMB Control # 2090-0028. Focus group and interview reports are posted to docket number EPA-HQ-OA-2012-0033-0018 on Regulations.gov.


Consultations with other Government Agencies: EPA has been working closely with ecosystem modelers in NOAA’s Chesapeake Bay Office and National Marine Fisheries Service’s Office of Habitat Conservation. Specifically, NOAA’s modelers have provided assistance with the ecosystem based fishery models "Ecopath with Ecosim" and "Atlantis." These consultations have been instrumental in examining the ecological impacts of reducing nutrient and sediment loads to the Bay. The ecosystem-based fishery models have provided useful background for the survey instrument itself and will allow EPA to more accurately estimate the values people place on the various attributes of the Chesapeake Bay highlighted in the survey.


Consultations with Peer Reviewers: The survey instrument has undergone peer review by three leading scholars specializing in stated preference surveys for estimating benefits associated with environmental improvements: Dr. Kevin Boyle, Professor, Department of Agricultural and Applied Economics, Virginia Tech University; Dr. John Whitehead, Professor and Chair, Department of Economics, Walker College of Business, Appalachian State University; and Dr. Robert Johnston, Director, George Perkins Marsh Institute, Professor, Department of Economics, Clark University. The correspondence with the peer reviewers and their reports are documented on Regulations.gov under docket number EPA-HQ-OA-2012-0033-0018.

EPA also intends to subject the stated preference study in its entirety to additional technical and quality reviews once the study is completed.  The peer review of the stated preference effort will include a public review period and will be conducted in coordination with a peer review of the larger benefit cost analysis of the Chesapeake Bay TMDL.


Survey Design Team: Dr. Christopher Moore of the U.S. Environmental Protection Agency serves as the project manager for this study. Dr. Moore is assisted by Dr. Chris Dockins, Dr. Dennis Guignet, Dr. Kelly Maguire, Dr. Nathalie Simon, and Ms. Sarah Marrinan, all with the U.S. EPA’s National Center for Environmental Economics. Dr. Alan Krupnick, Senior Fellow and Director, Center for Energy Economics and Policy at Resources for the Future, Dr. Maureen Cropper, Senior Fellow at Resources for the Future and Professor of Economics at the University of Maryland, and Mr. William Isaacs, Research Assistant at Resources for the Future, provide review of the survey development, focus group materials, and cognitive interview materials. Dr. Elena Besedin, Senior Economist at Abt Associates Inc., and Ryan Stapler, M.S., Senior Analyst at Abt Associates Inc., provide contractor support.

Dr. Alan Krupnick is Director of Resources for the Future’s Center for Energy Economics and Policy, as well as Director of Research and a Senior Fellow at RFF, and specializes in analyzing environmental and energy issues, in particular the benefits, costs, and design of pollution and energy policies. He was the lead author of the study Toward a New National Energy Policy: Assessing the Options, which examined the costs and cost-effectiveness of a range of federal energy policy choices in both the transportation and electricity sectors. His primary research methodology is the development and analysis of stated preference surveys. Dr. Krupnick has been a consultant to state governments, federal agencies, private corporations, the Canadian government, the European Union, the World Health Organization, and the World Bank. He is a regular member of expert committees of the National Academy of Sciences and the U.S. EPA.

Dr. Maureen Cropper, Professor of Economics at the University of Maryland and Senior Fellow at Resources for the Future, has made major contributions to environmental policy through her research, teaching, and public service. Her research has focused on valuing environmental amenities, estimating consumer preferences for health and longevity improvements, and the tradeoffs implicit in environmental regulations. Previously at the World Bank, her work focused on improving policy choices in developing countries through studies of deforestation, road safety, urban slums, and health valuation. From 1994 through 2006, she served on the U.S. Environmental Protection Agency's Science Advisory Board, where she chaired the Advisory Council for Clean Air Act Compliance Analysis and the Environmental Economics Advisory Committee. She is a research associate of the National Bureau of Economic Research and a member of the National Academy of Sciences.

Dr. Elena Y. Besedin, a senior economist at Abt Associates Inc., specializes in the economic analysis of environmental policy and regulatory programs. Her work to support EPA has concentrated on analyzing economic benefits from reducing risks to the environment and human health and assessing environmental impacts of regulatory programs for many EPA program offices. She has worked extensively on valuation of non-market benefits associated with environmental improvements of aquatic resources. Dr. Besedin’s empirical work on non-market valuation includes design and implementation of stated and revealed preference studies and benefit transfer methodologies.

3(d) Effects of Less Frequent Collection


The survey is a one-time activity. Therefore, this section does not apply.


3(e) General Guidelines


The survey will not violate any of the general guidelines described in 5 CFR 1320.5 or in EPA’s ICR Handbook.


3(f) Confidentiality


All responses to the survey will be kept confidential to the extent provided by law. To ensure that the final survey sample includes a representative and diverse population of individuals, the survey questionnaire will elicit basic demographic information, such as age, number of children under 18, type of employment, and income. However, the detailed survey questionnaire will not ask respondents for personal identifying information, such as names or phone numbers. Instead, each survey response will receive a unique identification number. Prior to taking the survey, respondents will be informed that their responses will be kept confidential to the extent provided by law. The name and address of the respondent will not appear in the resulting database, preserving the respondents’ identity. The survey data will be made public only after it has been thoroughly vetted to ensure that all other potentially identifying information has been removed.


3(g) Sensitive Questions


The survey questionnaire will not include any sensitive questions pertaining to private or personal information, such as sexual behavior or religious beliefs.

4. The Respondents and the Information Requested

4(a) Respondents

Eligible respondents for this stated preference survey are individuals 18 years of age or older who reside in one of 17 east coast U.S. states and the District of Columbia (hereafter states). Before selection, the population of households in the 18 states will be stratified by three mutually-exclusive study regions: states immediately bordering Chesapeake Bay (Maryland, Virginia, and the District of Columbia); states in the Chesapeake Bay Watershed but not immediately bordering the Bay itself (Delaware, New York, Pennsylvania, and West Virginia); and eastern states outside of the watershed (Vermont, New Hampshire, New Jersey, Massachusetts, Connecticut, Rhode Island, Maine, North Carolina, South Carolina, Georgia, and Florida) but within 100 miles of the Atlantic Ocean. The last stratum is restricted to households in coastal states as residents in these areas are more likely to be familiar with coastal estuarine issues and may be aware of the relationship between the Bay and coastal fisheries.

Households will be selected randomly from the U.S. Postal Service Delivery Sequence File (DSF), which covers over 97% of residences in the U.S. The addresses that will be sampled from the DSF include city-style addresses and PO boxes, and cover single-unit, multi-unit, and other types of housing structures. As described in Part B of this Supporting Statement, we assume that 92% of sampled addresses will be eligible and will receive the survey. EPA will mail the survey to a random stratified sample of addresses in two phases. The first phase, a pretest, will result in 3,000 households receiving the survey; the second phase, the full survey administration, will result in an additional 17,280 households receiving the survey. In each phase, we anticipate a response rate of 30 percent, yielding 900 and 5,184 completed surveys, respectively.
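The arithmetic linking addresses sampled, surveys delivered, and expected completed surveys can be sketched as follows. The address counts are derived from the stated eligibility and response rates and are illustrative only; the actual sampling plan is detailed in Part B and Attachment 15.

```python
# Illustrative sample-size arithmetic using the assumptions stated above
# (92% of sampled addresses eligible; 30% response rate). A sketch only.
ELIGIBILITY = 0.92
RESPONSE_RATE = 0.30

def addresses_needed(delivered: int) -> int:
    """Addresses to sample so the target number of surveys is delivered."""
    return round(delivered / ELIGIBILITY)

def expected_completes(delivered: int) -> int:
    """Expected completed surveys among households receiving one."""
    return round(delivered * RESPONSE_RATE)

for phase, delivered in [("pretest", 3_000), ("full survey", 17_280)]:
    print(f"{phase}: ~{addresses_needed(delivered):,} addresses -> "
          f"{delivered:,} delivered -> {expected_completes(delivered):,} completes")
# pretest: ~3,261 addresses -> 3,000 delivered -> 900 completes
# full survey: ~18,783 addresses -> 17,280 delivered -> 5,184 completes
```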

Table A1 shows the stratification design for the geographic regions included in this study. More detail on planned sampling methods and the statistical design of the survey can be found in Part B of this supporting statement.


Table A1: Geographic Stratification Design

| Region | States Included | Phase 1: Pretest Sample Size (a) | Phase 1: Percentage of Sample | Phase 2: Full Survey Sample Size (a) | Phase 2: Percentage of Sample |
| --- | --- | --- | --- | --- | --- |
| Chesapeake Bay | MD, VA, DC | 300 | 33% | 1,728 | 33% |
| Chesapeake Bay Watershed | DE, NY, PA, WV | 300 | 33% | 1,728 | 33% |
| Additional East Coast states | CT, FL, GA, MA, ME, NC, NH, NJ, RI, SC, VT | 300 | 33% | 1,728 | 33% |
| Total for Sample Regions | | 900 | 100% | 5,184 | 100% |

(a) Sample sizes presented in this table include total expected completed surveys.


A total of 900 responses is expected in the pretest; each pretest household will receive one of the three baseline scenarios and one of the two reference years described above. The pretest will be divided across regional samples as described in Table A1. An additional 5,184 responses are expected upon administration of the full survey, for a total of 6,084 responses. The 5,184 responses to the full survey are required to estimate the main effects under the experimental design model. Part B of this document provides detail on sampling methodology.

4(b) Information Requested

(i) Data items, including recordkeeping requirements

EPA developed the survey based on the findings of a series of ten focus groups and 72 cognitive interviews conducted as part of survey instrument development (OMB control # 2090-0028). Focus groups provided valuable feedback that allowed EPA to iteratively edit and refine the questionnaire and to eliminate or improve imprecise, confusing, and redundant questions. In addition, later focus groups and cognitive interviews provided useful information on the approximate amount of time needed to complete the survey instrument. This information informed our burden estimates. Focus groups and cognitive interviews were conducted following standard approaches in the literature, as outlined by Desvousges et al. (1984), Desvousges and Smith (1988), Johnston et al. (1995), Schkade and Payne (1994), Kaplowitz et al. (2004), and Opaluch et al. (1993).

EPA has determined that all questions in the survey are necessary to achieve the goal of this information collection, i.e., to collect data that can be used to support an analysis of the total benefits of Chesapeake Bay TMDLs implementation.

Households randomly selected from the U.S. Postal Service DSF database will be mailed a copy of the survey. The total number of households recruited from each stratum will be divided equally among six versions of the survey. Each version presents a different baseline representation of ecological and water quality conditions in the Chesapeake Bay Watershed, which is then used in the choice experiment attributes, and uses either 2025 or 2040 as the reference year for future conditions. Evaluating a policy that is expected to be fully effective more than 10 years from now raises some challenges in specifying baseline conditions. There is uncertainty regarding how population growth and changes in land use patterns will affect water quality. There is also the question of which practices that are not in place now would be in place in 2025 or 2040, depending on the reference year, if the TMDL were not implemented. Therefore, EPA is stratifying its sample design to reflect three possible scenarios representing future conditions without TMDLs implementation: (1) water quality in the Bay remains the same in the future as it is today (constant baseline); (2) water quality in the Bay declines in the future, relative to today (declining baseline); and (3) water quality in the Bay improves in the future, relative to today (improving baseline). Given that there will be some uncertainty regarding the specifics of the “actual” baselines and improvements, the resulting valuation estimates will allow flexibility in estimating WTP for a wide range of different circumstances. This also increases the potential that the results of the study can be used in other studies as part of a benefit transfer exercise, consistent with EPA’s Guidelines for Preparing Economic Analyses (US EPA 2010). The full texts of the declining, constant, and improving baseline versions of the mail survey with the 2025 reference year are provided in Attachments 1, 2, and 3, respectively; the corresponding versions with the 2040 reference year are provided in Attachments 4, 5, and 6.

There are no standard protocols for defining the effects of ecological changes in stated preference surveys. However, recent analyses distinguish between “inputs” (environmental features or conditions that, via natural processes, are converted into different features) and “endpoints,” or biophysical outputs that are more “directly and economically meaningful” to firms and households (Boyd and Krupnick, 2009). This survey defines effects in terms of such endpoints. Changes in the Chesapeake Bay that result from actions taken under the TMDLs are characterized by expected changes in the populations of striped bass, blue crabs, and oysters, as well as changes in the clarity of Chesapeake Bay waters. Changes in lakes and reservoirs in the Chesapeake Bay Watershed are characterized by average algae levels, with lakes classified as either “low” or “high” algae lakes. These attributes were found to be salient and meaningful to respondents in focus groups and interviews, and can be estimated from available models.

The following is an outline of the major sections of the survey. The survey versions are identical except for the baseline information and reference year provided.

Familiarity with the Chesapeake Bay Watershed and Watershed Issues. This section provides a map and narrative description of the Chesapeake Bay and watershed and distinguishes between the Chesapeake Bay itself and lakes in the watershed. Question 1 asks respondents whether they have heard of the Chesapeake Bay prior to the survey. Question 2 asks how often respondents see the Chesapeake Bay and lakes in the watershed, and question 3 asks respondents whether they have visited these water bodies to participate in recreational activities. This section also provides respondents information about nutrient and sediment pollution in the Chesapeake Bay, thereby promoting understanding of the context for subsequent stated preference questions. Question 4 checks the respondent’s prior knowledge about the effect of these pollutants on water quality. Information and questions in this section are designed to elicit the respondent’s familiarity with the Chesapeake Bay and watershed lakes and to identify respondents as recreational users, other users, or non-users of the Chesapeake Bay and watershed lakes. This section begins to prepare respondents to answer the stated preference questions by motivating respondents to consider how they use or do not use the Chesapeake Bay watershed. Responses to questions 1 through 4 can also be used to test if certain respondent characteristics influence responses to the referendum questions.

Current and Future Conditions in the Chesapeake Bay and Watershed Lakes. The survey next provides information about how nutrients and sediments affect water clarity and populations of striped bass, blue crab, and oysters in the Chesapeake Bay. This information includes a chart with current levels for these attributes and baseline levels predicted for the year 2025 or 2040, depending on the reference year, under current programs to manage nutrients and sediment (i.e., if no further action is taken). This section also describes how different amounts of nutrients entering lakes can affect algae growth, and therefore the appearance of these lakes and the types of fish those conditions favor. The information is based on Carlson’s Trophic State Index (Carlson 1977; Carlson and Simpson 1996). Lakes are characterized as having algae levels that are either “low” or “high,” and a reduction in nutrients is associated with a larger percentage of lakes being in the low category. Information is provided about the number of lakes in the “low” category today and in the year 2025 or 2040, depending on the reference year, under current programs. Respondents are asked in question 5 how predicted conditions for the Bay and watershed lakes compare to their own expectations. This information and the associated questions are designed to promote understanding of the water quality attributes used in the subsequent choice experiments. This section encourages respondents to consider the meaning of each attribute and their general preferences for these qualities, prior to considering specific policy options.

Additional Pollution Reduction Programs for the Chesapeake Bay Watershed. This section informs respondents that federal and state agencies are considering additional programs to further reduce nutrients and sediment in the Chesapeake Bay Watershed and that conditions would improve over time, reaching long term levels in 2025 or 2040, depending on the reference year. This information sets up the following choice experiment, putting respondents in the frame of mind that different programs will have different effects on the Bay. This section also describes how any additional programs would affect the respondent’s cost of living. The payment vehicle was chosen based on peer-reviewed literature (e.g., Nielsen 2011) and focus group testing. Neither the means by which the cost of living would increase, as described in the survey, nor the amounts of the hypothetical cost of living increases presented with each option are intended to reflect the true household costs of implementing the TMDL. Respondents are reminded that there are other items on which their household could spend its income, rather than spending these funds to pay for changes required under pollution reduction programs. This approach is commonly used in stated preference surveys (see Mitchell and Carson 1989) to remind respondents that, given scarce household budgets, paying for pollution reduction programs reduces their ability to purchase substitute goods. Question 6 asks respondents whether they currently pay any environmentally-related taxes or fees as part of their utility bill. This was found to be relevant in focus groups, and the response can be used as a control variable in the statistical analysis.

Deciding Future Actions. This section provides instructions and an example of how one responds to the choice questions. This section also provides text encouraging respondents to only consider the environmental outcomes presented in the choice questions, thus minimizing the potential consideration of omitted variables, and reducing omitted variable bias. This page prepares the respondents for the choice questions that follow. It includes “cheap talk” text (e.g., Cummings and Taylor, 1999) emphasizing that even though this exercise is hypothetical, respondents should consider their choices as if they were real. Additional text is included to further emphasize the consequentiality of one’s choices. No questions are asked in this section.

Voting for Programs to Improve the Condition of the Chesapeake Bay and Lakes in the Watershed. The questions in this section are the key part of the survey. Questions 7, 8, and 9 are “choice experiment” or “choice modeling” questions (Adamowicz et al. 1998; Bennett and Blamey 2001) that ask respondents how they would vote if presented with three hypothetical regulatory options. One of these options is always a “status quo” choice (labeled “Option A” in the survey). Each of the multi-attribute options is characterized by (a) feet of average visibility in the Chesapeake Bay, (b) striped bass population, (c) blue crab population, (d) oyster population, (e) number of watershed lakes with relatively low algae, and (f) a permanent cost of living increase for the respondent’s household. Environmental attribute levels for all three scenarios are described both in numerical terms and in percentage increase or decrease from present day conditions. The cost attribute is defined in terms of both annual cost and monthly cost. In the “declining baseline” versions of the survey, the “status quo” option references attribute levels that are lower than the present day. In the “constant baseline” versions of the survey, the “status quo” option references attribute levels that are the same as present day (i.e., 0% change). In the improving baseline version of the survey, the “status quo” option references attribute levels that are higher than the present day.

Following standard choice experiment methods, respondents choose whichever of the presented options they prefer. The status quo option is always available, which is necessary for appropriate welfare estimation (Adamowicz et al. 1998). Advantages of choice experiments, and the many examples of the use of such approaches in the literature, are discussed in later sections of this ICR. Following standard approaches (Opaluch et al. 1993, 1999; Johnston et al. 2002a; 2002b, 2003b), respondents are instructed to answer each of the three choice questions independently, and not to add up or compare programs across different pages. This instruction is included to avoid biases associated with sequence aggregation effects (Mitchell and Carson 1989).

Debriefing Questions. Questions 10 and 11 ask respondents to rate how their votes were affected by various factors, and why they voted for or against the regulatory programs. Question 10 asks respondents the extent to which they agree or disagree with a series of statements that may describe how they considered the information provided on the programs, confidence in their answers, and attitudes toward regulation and the environment. Some questions in this section are also included in the non-response bias study, which allows for an examination of whether there are any statistical differences between respondents’ and non-respondents’ attitudes towards the environment and government regulations, as well as other factors that may affect the likelihood that they respond to the survey (as reflected by their degree of agreement with the statement “It is often difficult for me to find time to take surveys”). Question 11 asks respondents about potential reasons they might have voted for or against regulatory programs. Consistent with established practice, the responses to these questions will be used to examine and control for known survey issues, including hypothetical bias, yea-saying, scenario rejection, and protest responses. They also provide insight into motivations for voting for or against programs.

Recreational Experience and Time Preferences. Questions 12, 13, and 14 ask about recreational experiences on the Chesapeake Bay and at other water bodies within the watershed. Question 12 asks respondents how many times they have visited an outdoor recreation site on the Chesapeake Bay. Question 13 asks them the name of the site they have visited most often in the last 12 months, its location, how long it took to get there, and what kind of recreation they engaged in at the site. Question 14 asks about the number of visits to lakes, streams, or rivers within the Chesapeake Bay Watershed. These questions will be used to more clearly identify users of the Chesapeake Bay and other water bodies and to inform estimation of revealed preferences for recreation. In order to obtain information on discount rates, question 15 asks whether respondents would make a one-time payment for a device that would save them a fixed amount each month over an extended time period. The one-time payment will vary across surveys at values of $20, $50, $100, and $200.
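The survey text does not specify how responses to question 15 will be converted into discount rates; one standard approach, sketched below, is to find the break-even rate implied by a respondent’s acceptance of a given payment. The function name, the 10-year horizon, and the $5 monthly saving in the example are hypothetical, not part of the survey design.

    def implied_monthly_rate(payment, monthly_saving, months=120):
        """Illustrative: find the monthly discount rate r at which a one-time
        payment equals the present value of the monthly savings,
        P = s * (1 - (1 + r)**-T) / r, via bisection. A respondent who
        accepts the payment reveals a discount rate no higher than this."""
        def pv(r):
            return monthly_saving * (1 - (1 + r) ** -months) / r
        lo, hi = 1e-9, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if pv(mid) > payment else (lo, mid)
        return (lo + hi) / 2

    # e.g., accepting a $100 payment for $5/month over 10 years implies a
    # monthly discount rate of at most ~0.05 (about 5% per month)
    print(implied_monthly_rate(100, 5))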

Demographics. Questions 16-23 ask respondents to provide basic demographic information, including sex, age, household composition (number of children under 18), income, race and ethnicity, highest level of education, and employment industry. This information will be used in the analysis of survey results, as well as in the non-response analysis. Responses to these questions will be used to estimate the influence of demographic variables on respondents’ voting choices and, ultimately, their WTP for water quality improvements.


The non-response bias study survey will be administered to households that received the main survey but did not return it. The following is an outline of the major sections of the non-response bias study survey, each of which is very similar to the corresponding section of the main survey described above. Questions in this survey are also asked in the main survey in order to facilitate comparison of respondents and non-respondents to the main survey.


Familiarity with the Chesapeake Bay Watershed and Watershed Issues. This section provides a map and narrative description of the Chesapeake Bay and watershed and distinguishes between the Chesapeake Bay itself and lakes in the watershed. The maps and information are identical to the main survey, but the non-response bias study survey does not provide information on nutrient and sediment pollution. Questions 1 through 4 are identical to the main survey: Question 1 asks respondents whether they had heard of the Chesapeake Bay prior to the survey. Question 2 asks how often respondents see the Chesapeake Bay and lakes in the watershed. Question 3 asks respondents whether they have visited these water bodies to participate in recreational activities. Question 4 checks the respondent’s prior knowledge about the effect of nutrient and sediment pollution on water quality.


Attitudinal and Demographic Questions. This section contains a single four-part question, question 5, which asks the extent to which respondents agree or disagree with statements regarding the importance of improving waters in the Chesapeake Bay Watershed, the general role of government regulations and spending, whether they feel they should pay for programs to improve water quality, and the time they generally have available for surveys. All of these items are also asked in the main survey, which will allow EPA to statistically compare responses across respondents and non-respondents. Comparison of responses to the statement “It is often difficult for me to find time to take surveys” will allow for an examination of whether other factors, potentially unrelated to attitudes toward and valuation of the environment, affect the likelihood that an individual completes and returns the main survey.

Demographics. Questions 6-13 ask respondents to provide basic demographic information, including sex, age, household composition (number of children under 18), income, race and ethnicity, highest level of education, and employment industry. These questions are identical to the demographic questions asked in the main survey.


(ii) Respondent activities

EPA expects individuals to engage in the following activities during their participation in the valuation survey:

  • Review the background information provided in the beginning of the survey document.

  • Complete the survey questionnaire and return it by mail.

A typical subject participating in the mail survey is expected to take 18 minutes to complete the survey. This estimate is derived from focus groups and cognitive interviews in which participants were asked to complete a survey of similar length and detail to the current survey.


Respondents engaged in the non-response survey will:

  • Review the short, imprinted questionnaire provided via first class mail and return the completed survey by mail.

Response time for the non-response survey is expected to be approximately 5 minutes. This estimate is based on agency experience with a similar survey.



5. The Information Collected - Agency Activities, Collection Methodology, and Information Management

5(a) Agency Activities


The survey is being developed, conducted, and analyzed by EPA’s National Center for Environmental Economics with contract support provided by Abt Associates Inc. (EPA contract No. EP-W-11-003).

Agency activities associated with the survey consist of the following:

  • Developing the survey questionnaire and related materials as well as sampling design.

  • Randomly selecting survey participants from the U.S. Postal Service DSF database.

  • Printing of survey.

  • Mailing of preview letter to notify the household that it has been selected.

  • Mailing of surveys.

  • Mailing of postcard reminders.

  • Resending the survey to households not responding to the first survey mailing.

  • Mailing the follow-up letter reminding households to complete the second survey mailing.

  • Conducting a non-response bias study to reach non-respondents.

  • Data entry and cleaning.

  • Analyzing survey results.

  • Analyzing the non-response bias study results.

  • If necessary, EPA will use results of the non-response bias study to adjust weights of respondents to account for non-response and minimize the bias.

EPA will primarily use the survey results to estimate the social value of changes in ecosystem quality, as part of the Agency’s analysis of the benefits of the TMDLs.


5(b) Collection Methodology and Information Management

EPA plans to implement the proposed survey as a mailed choice experiment questionnaire. First, EPA will use the U.S. Postal Service DSF database to identify households which will receive the mail questionnaire. Prior to mailing the survey, EPA will send the selected households a preview letter notifying them that they have been selected to participate in the survey and briefly describing the purpose of the study. The survey will be mailed one to two weeks after the preview letter accompanied by a cover letter explaining the purpose of the survey. The preview and cover letters are included as Attachments 8 and 9, respectively.

EPA will take multiple steps to promote response. All households will receive a reminder postcard approximately one week after the initial questionnaire mailing. The postcard reminder is included as Attachment 10. Approximately three weeks after the first round of survey mailing, all households that have not responded will receive a second copy of the questionnaire with a revised cover letter (see Attachment 11). A week after the second survey is mailed, a letter will be sent to remind households to complete the survey. The letter reminder is included as Attachment 12. Based on this approach to mail data collection, it is anticipated that approximately 30 percent of the selected households will return the completed mail survey (Dillman 2008). Since the desired number of completed surveys is 5,184, it will be necessary to mail surveys to 18,783 households, assuming that only 92 percent (17,280) of the addresses will be valid.

Data quality will be monitored by checking submitted surveys for completeness and consistency, and by asking respondents to assess their own responses to the survey. Question 10 asks whether respondents were confident in their answers. Questions 10 and 11 are designed to assess the presence or absence of potential response biases by asking respondents to indicate their reasoning and to rate the influence of various factors on their responses to the choice experiment questions. Responses to the survey will be stored in an electronic database. This database will be used to generate a data set for a regression model of total values for ecosystem improvements achieved under the Chesapeake Bay TMDLs.

To protect the confidentiality of survey respondents, the survey data will be released only after it has been thoroughly vetted to ensure that all potentially identifying information has been removed.


5(c) Small Entity Flexibility


This survey will be administered to individuals, not businesses. Thus, no small entities will be affected by this information collection.


5(d) Collection Schedule

The schedule for implementation of the survey is shown in Table A2.




Table A2: Schedule for Survey Implementation

| Pretest Activities | Duration of Each Activity |
|---|---|
| Printing of questionnaires | Weeks 1 to 3 |
| Mailing of preview letters | Week 4 |
| Mailing of survey | Week 5 |
| Postcard reminder (one week after initial survey mailing) | Week 6 |
| Initial data entry | Week 7 |
| Mailing of 2nd survey to non-respondents | Week 9 |
| Letter reminder (one week after 2nd survey mailing) | Week 10 |
| Telephone non-response interviews (if used) | Weeks 12 to 13 |
| Mailing of non-response bias study survey | Week 12 |
| Data entry | Weeks 5 to 15 |
| Cleaning of data file | Week 16 |
| Delivery of data | Week 17 |

| Full Survey Implementation | Duration of Each Activity |
|---|---|
| Printing of questionnaires | Weeks 22 to 24 |
| Mailing of preview letters | Week 25 |
| Mailing of survey | Week 26 |
| Postcard reminder (one week after initial survey mailing) | Week 27 |
| Initial data entry | Week 28 |
| Mailing of 2nd survey to non-respondents | Week 29 |
| Letter reminder (one week after 2nd survey mailing) | Week 30 |
| Telephone non-response interviews (if used) | Weeks 32 to 33 |
| Mailing of non-response bias study survey | Week 32 |
| Data entry | Weeks 26 to 36 |
| Cleaning of data file | Week 37 |
| Delivery of data | Week 38 |
Week 38





6. Estimating Respondent Burden and Cost of Collection

6(a) Estimating Respondent Burden


Subjects who participate in the survey and follow-up interviews during the pre-test and main surveys will expend time on several activities. EPA will use similar materials in both the pre-test and main stages; it is reasonable to assume the average burden per respondent activity will therefore be the same for subjects participating during either pre-test or main survey stages. Both the pre-test and main survey stages will include the survey questionnaire and a non-response follow-up.

Based on focus groups and cognitive interviews, EPA estimates that on average each respondent mailed the survey will spend 18 minutes (0.3 hours) reviewing the introductory materials and completing the survey questionnaire. EPA will administer the pre-test survey to 3,000 households; assuming that 900 respondents will complete and return the survey, the national burden estimate for respondents to the pre-test survey is 270 hours. During the main survey stage, EPA will administer the mail survey to 17,280 households; assuming that 5,184 respondents will complete and return the survey, the national burden estimate for these survey respondents is 1,555 hours.

EPA plans to conduct a non-response bias study that uses a short questionnaire sent by first class mail. The survey will be imprinted with a stamp requesting the recipient to “Please return within 2 weeks.” This questionnaire is included as Attachment 14. The sample for the non-response bias study will consist of households that did not respond to the survey questionnaire. It is anticipated that approximately 20 percent of households in the non-response bias study sample will return the completed non-response bias study survey. EPA plans to complete 180 total non-response bias study surveys during pre-testing, and 900 total non-response bias study surveys during main implementation. Based on expected response rates, it will be necessary to invite approximately 900 households to participate in the pre-test non-response bias study, and 4,500 households to participate in the main non-response bias study.

EPA estimates that the short questionnaire will take 5 minutes to complete (0.083 hours). Thus the national burden estimate for all non-response bias study pre-test respondents (180 participants) is 15 hours. Using the same average burden per respondent of 5 minutes, the national burden estimate for non-response bias study respondents in the main survey implementation stage is 75 hours for the anticipated 900 total non-response bias study participants.

These burden estimates reflect a one-time expenditure in a single year.


6(b) Estimating Respondent Costs

(i) Estimating Labor Costs

According to the Bureau of Labor Statistics, the average hourly wage for private sector workers in the United States is $23.33 (2012$) (U.S. Department of Labor, 2012). Assuming an average per-respondent burden of 0.3 hours (18 minutes) for individuals mailed the survey and an average hourly wage of $23.33, the average cost per respondent is $7.00. Of the 20,280 individuals receiving the mail survey during either pre-test or main implementation, 6,084 are expected to return their completed survey. The total cost for all individuals that return surveys would be $42,582.

Assuming an average per-respondent burden of 0.083 hours (5 minutes) for each non-response study participant and an average hourly wage of $23.33, the average cost per non-response study participant is $1.94. Of the 5,400 individuals receiving the non-response follow up mailing during either pre-test or main implementation, 1,080 are expected to return their completed survey. The total cost for all participants in the two non-response study phases would be $2,091.
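These cost figures are the product of expected completed surveys, per-response burden hours, and the average hourly wage. A minimal sketch of the arithmetic (all inputs as stated above; the function name is illustrative):

    WAGE = 23.33  # average private-sector hourly wage, 2012$ (BLS)

    def respondent_cost(completed_surveys, hours_per_response):
        """Total opportunity cost of respondents' time."""
        return completed_surveys * hours_per_response * WAGE

    print(round(respondent_cost(6084, 0.3)))    # main + pretest surveys -> 42582
    print(round(respondent_cost(1080, 0.083)))  # non-response studies   -> 2091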

EPA does not anticipate any capital or operation and maintenance costs for respondents.


6(c) Estimating Agency Burden and Costs


Agency costs arise from staff costs, contractor costs, and printing costs. EPA staff have expended 2,200 hours developing and testing the survey instrument to date and are expected to spend an additional 2,000 hours printing the survey instrument, analyzing data, reviewing intermediate products, writing reports, and managing the project more generally. The total EPA staff costs are shown in Table A3 below and total $368,379 over 4,200 hours.


Table A3: Agency Burden Hours and Costs

| GS Level | Hours | Hourly Rate | Hourly Rate with Benefits | Total |
|---|---|---|---|---|
| 12 | 700 | $40.66 | $65.06 | $45,539 |
| 13 | 1,400 | $48.35 | $77.36 | $108,304 |
| 14 | 700 | $57.13 | $91.41 | $63,986 |
| 15 | 1,400 | $67.21 | $107.54 | $150,550 |
| Total | 4,200 | -- | -- | $368,379 |
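The “Hourly Rate with Benefits” column is consistent with a benefits multiplier of 1.6 applied to the base hourly rate; this multiplier is inferred from the ratio of the two rate columns rather than stated in the text. The row totals can be checked as follows:

    BENEFITS_MULTIPLIER = 1.6  # inferred: each loaded rate is ~1.6x the base rate

    rows = [(12, 700, 40.66), (13, 1400, 48.35), (14, 700, 57.13), (15, 1400, 67.21)]
    for gs, hours, rate in rows:
        loaded = rate * BENEFITS_MULTIPLIER
        print(f"GS-{gs}: ${loaded:.2f}/hr loaded, total ${hours * loaded:,.0f}")
    # Reproduces Table A3's row totals ($45,539; $108,304; $63,986; $150,550)
    # when the unrounded loaded rate is used.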


Abt Associates Inc. will provide contractor support for this project with funding of $490,955 under EPA contract EP-W-11-003, which provides funds for the purpose of analyzing the economic benefits of the Chesapeake Bay TMDLs. Abt Associates staff and consultants are expected to spend 2,434 hours pre-testing the survey questionnaire and sampling methodology, conducting the mail survey, conducting the non-response survey, and tabulating and analyzing the survey results.

Agency and contractor burden is 6,634 hours, with a total cost of $859,334, excluding the costs of survey printing.

Printing and mailing of the survey is expected to take 133 hours and cost $84,724. Thus, the total Agency and contractor burden would be 6,767 hours at a cost of $944,058.


6(d) Respondent Universe and Total Burden Costs


EPA expects the total cost for survey respondents to be $44,673 (2012$), based on a total burden estimate of 1,915 hours (across both pre-test and main stages, each with primary survey administration and non-response follow-up surveys) at an hourly wage of $23.33.


6(e) Bottom Line Burden Hours and Costs


The following tables present EPA’s estimate of the total burden and costs of this information collection for the respondents and for the Agency. The bottom line cost for these two together is $988,731.


Table A4: Total Estimated Bottom Line Burden and Cost Summary for Respondents

| Affected Individuals | Burden (hours) | Cost (2012$) |
|---|---|---|
| Pre-test Survey Respondents | 270 | $6,299 |
| Pre-test Non-response Bias Study Respondents | 15 | $348 |
| Main Survey Respondents | 1,555 | $36,283 |
| Main Non-response Bias Study Respondents | 75 | $1,743 |
| Total for All Survey Respondents | 1,915 | $44,673 |





Table A5: Total Estimated Burden and Cost Summary for Agency

| Affected Individuals | Burden (hours) | Cost (2012$) |
|---|---|---|
| EPA Staff | 4,200 | $368,379 |
| Survey Printing | 133 | $84,724 |
| EPA's Contractors for the Mail Survey | 2,434 | $490,955 |
| Total Agency Burden and Cost | 6,767 | $944,058 |



6(f) Reasons for Change in Burden


This is a new collection. The survey is a one-time data collection activity.


6(g) Burden Statement


EPA estimates that the public reporting and record keeping burden associated with the mail survey will average 0.3 hours per respondent (i.e., a total of 1,825 hours of burden divided among 900 pre-test respondents and 5,184 main survey respondents). Households included in the non-response study are expected to average 0.083 hours per mail survey participant (i.e., a total of 90 hours of burden divided among 180 pre-test non-response bias study participants and 900 main non-response bias study participants). This results in a total burden estimate of 1,915 hours across the pre-test and main mail surveys and non-response studies. Burden means the total time, effort, or financial resources expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal agency. This includes the time needed to review instructions; develop, acquire, install, and utilize technology and systems for the purposes of collecting, validating, and verifying information, processing and maintaining information, and disclosing and providing information; adjust the existing ways to comply with any previously applicable instructions and requirements; train personnel to be able to respond to a collection of information; search data sources; complete and review the collection of information; and transmit or otherwise disclose the information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB control numbers for EPA's regulations are listed in 40 CFR part 9 and 48 CFR chapter 15.

To comment on the Agency's need for this information, the accuracy of the provided burden estimates, and any suggested methods for minimizing respondent burden, including the use of automated collection techniques, EPA has established a public docket for this ICR under Docket ID No. EPA-HQ-OA-2012-0033, which is available for online viewing at www.regulations.gov, or in person viewing at the Office of Water Docket in the EPA Docket Center (EPA/DC), EPA West, Room 3334, 1301 Constitution Ave., NW, Washington, DC. The EPA/DC Public Reading Room is open from 8:30 a.m. to 4:30 p.m., Monday through Friday, excluding legal holidays. The telephone number for the Reading Room is 202-566-1744, and the telephone number for the Office of the Administrator Docket is 202-566-1752.

Use www.regulations.gov to obtain a copy of the draft collection of information, submit or view public comments, access the index listing of the contents of the docket, and access those documents in the public docket that are available electronically. Once in the system, select “search,” then key in the docket ID number, EPA-HQ-OA-2012-0033.


PART B OF THE SUPPORTING STATEMENT


1. Survey Objectives, Key Variables, and Other Preliminaries

1(a) Survey Objectives

The overall goal of this survey is to examine the total value of benefits (including non-use values) for improvements in water quality in the Chesapeake Bay and its Watershed. Water quality improvements are expected to follow the nitrogen, phosphorus, and sediment load reductions set forth in recent Chesapeake Bay Total Maximum Daily Load (TMDL) requirements. EPA has designed the survey to provide data to support the following specific objectives:

  • To estimate the total values, including non-use values, that individuals place on improving water quality in the Chesapeake Bay and lakes in the Watershed.

  • To understand how individuals value improvements in the Chesapeake Bay and lakes in the Watershed, including: water clarity; adult populations of striped bass, blue crab, and oysters; and lake conditions.

  • To understand how the above values depend on the future baseline level of water quality in the Chesapeake Bay and its Watershed.

  • To understand how values depend on the reference year for conditions in the Bay.

  • To understand how values vary with respect to individuals’ attitudes, awareness, and demographic characteristics.

Understanding total public values for water quality improvements is necessary to determine the full range of benefits associated with reductions in nutrient (nitrogen and phosphorus) and sediment loads to the Chesapeake Bay. While direct use values can be estimated using a variety of methods, non-use values can only be assessed via stated preference survey methods. Because non-use values may be substantial, failure to recognize such values may lead to improper inferences regarding policy benefits (Freeman 2003).



1(b) Key Variables


The key questions in the survey ask respondents whether or not they would vote for policies that would result in improvements in water quality in the Chesapeake Bay and lakes in the Watershed in exchange for an increase in their cost of living. The choice experiment framework presents respondents with sets of multi-attribute policy options associated with total maximum daily loads to the Chesapeake Bay. Respondents are asked to choose one of three options. Two of these options correspond to “additional programs” that yield improvements in some or all of the environmental attributes specified, and the third option is the status quo (i.e., maintain current programs with no additional household costs). The survey design follows well-established choice experiment methodology and format (Adamowicz et al. 1998; Louviere et al. 2000; Bennett and Blamey 2001; Bateman et al. 2002).

The survey focuses on environmental and ecological “endpoints.” In other words, it asks respondents about changes in attributes that directly enter into their household production and utility functions. Specifically, the survey presents changes in the following attributes: (a) water clarity, (b) adult striped bass population, (c) adult blue crab population, (d) oyster abundance, (e) conditions of freshwater lakes in the Watershed, and (f) the cost of living. As discussed by Boyd and Krupnick (2009), these endpoints are aspects of the environment that people experience, make choices about, and find tangibly meaningful.

The study design includes three treatment levels in which environmental conditions without additional action are either declining (“declining baseline”), unchanged (“constant baseline”), or improving (“improving baseline”), relative to conditions today. As discussed in Part A of this ICR, section 4(b)(i), the three survey versions are included because there is uncertainty regarding how population growth and changes in land use patterns will affect water quality in the future. In addition, some practices that are not currently in place may be implemented by 2025 or 2040, depending on the reference year, even in the absence of the TMDL. The attribute levels chosen for the three baseline scenarios and the policy options are the result of a multi-agency modeling effort that is described in Attachment 17. Given that there will be some uncertainty regarding the specifics of the “actual” baselines and improvements, the resulting valuation estimates will allow flexibility in estimating WTP for a range of different circumstances. They also increase the potential that the results of the study can be used in other studies as part of a benefit transfer exercise, consistent with EPA’s Guidelines for Preparing Economic Analyses (US EPA 2010).

While EPA intends to administer all three baseline versions of the survey in all three geographic strata, budget considerations may cause that plan to be scaled back so that not all versions are administered in all strata.

The survey describes the attribute levels in the choice questions as “long term levels” after a transition that would begin as soon as the management practices are in place. There is some uncertainty surrounding the amount of time it would take each of the attributes to reach long term levels as a result of the management practices, and that time is likely to vary across attributes. To bound this uncertainty and capture the different transition times for different attributes, EPA will use a split-sample design in which half the households contacted will receive surveys that describe the changes as reaching long term levels in 2025, while the other half will receive surveys that describe changes reaching long term levels in 2040. The two survey versions will be used to estimate a range of willingness to pay that reflects the two reference years. The 2025 and 2040 reference years for the attribute changes were chosen based on communication with several experts who have been consulting on the larger Chesapeake Bay TMDL benefit-cost analysis. The question posed to the experts regarding the time to transition to long term levels, and their responses, are included in Attachment 18.

The analysis of choice questions will use data on how the respondent votes, the amount of the cost of living increase, the degree of improvement in the environmental attributes, as well as the reference year for those improvements, to estimate values for changes in those attributes. Variables for socio-economic characteristics and attitudes will also be included in the analysis.



1(c) Statistical Approach


A statistical survey approach in which a randomly drawn sample of households is asked to complete the survey is appropriate for estimating the values associated with improvements in the Chesapeake Bay and its Watershed. A census approach is impractical because of the extraordinary cost of contacting all households. The relevant population includes households residing not only near the Chesapeake Bay, but also in more distant states on the east coast. Specifically, the sample population includes households in the Bay states (MD, VA, DC), Watershed states (DE, NY, PA, WV), and other East Coast states (CT, FL, GA, MA, ME, NC, NH, NJ, RI, SC, VT). An alternative approach, in which individuals self-select into the sample, is not sufficiently rigorous to provide a useful estimate of the total value of water quality and habitat improvements. Therefore, the statistical survey is the most reasonable approach for estimating the total value of the Chesapeake Bay TMDLs.

Much of the work in developing the survey instrument was conducted by the EPA, and EPA will also directly conduct much of the analysis of the survey results. EPA has retained Abt Associates Inc. (55 Wheeler Street, Cambridge, MA 02138) under EPA contract EP-W-11-003 to assist in the questionnaire design, sampling design, administration of the survey, and analysis of the survey results.


1(d) Feasibility


Following standard practice in the stated preference literature (Adamowicz et al. 1998; Bateman et al. 2002; Bennett and Blamey 2001; Johnston et al. 1995; Louviere et al. 2000), EPA conducted a series of 10 focus groups and an initial set of 26 cognitive interviews (OMB control # 2090-0028). Based on findings from these activities, EPA made various improvements to the survey instrument to reduce the potential for respondent bias, reduce respondent cognitive burden, and increase respondent comprehension of the survey materials. In addition, EPA solicited peer review of the survey instruments by three specialists in academia, as well as input from other experts (see section 3(c) in Part A). Recommendations and comments received as part of that process have been incorporated into the design of the survey instrument, and the revised survey was subsequently tested in an additional 46 cognitive interviews.

Because of the steps taken during the survey development process, EPA does not anticipate that respondents will have difficulty interpreting or responding to any of the survey questions. Furthermore, since the survey will be administered as a mail survey, it will be easily accessible to all respondents. EPA therefore believes that respondents will not face any obstacles in completing the survey, and that the survey will produce useful results. EPA has dedicated sufficient staff time and resources to the design and implementation of this survey, including funding for contractor assistance under EPA contract No. EP-W-11-003. Given the timetable outlined in Section A 5(d) of this document, the survey results should be available for timely use in the final benefits analysis for the Chesapeake Bay TMDLs.



2. Survey Design

2(a) Target Population and Coverage


To assess values for improvements in Chesapeake Bay water quality for both users and non-users of the resource, the target population is individuals who are 18 years of age or older and reside in the District of Columbia or one of 17 east coast U.S. states: Maryland, Virginia, Delaware, New Jersey, New York, Pennsylvania, West Virginia, Vermont, New Hampshire, Massachusetts, Connecticut, Rhode Island, Maine, North Carolina, South Carolina, Georgia, or Florida. These were chosen based on their immediate proximity to the Bay (Maryland, Virginia, District of Columbia) and/or lakes, streams and rivers in its Watershed (Delaware, New York, Pennsylvania, West Virginia). Households in these areas are more likely to hold “use” values for improvements to the Chesapeake Bay and its Watershed than those farther away. The remaining states (i.e., Vermont, New Hampshire, New Jersey, Massachusetts, Connecticut, Rhode Island, Maine, North Carolina, South Carolina, Georgia, and Florida) lie within 100 miles of the Atlantic Ocean. Residents of these states are more likely to be familiar with estuarine issues. At the same time, the greater distance between these states and the Chesapeake Bay will improve the survey’s ability to isolate values for non-users and to test how quickly the values decrease or “decay” with distance from the Bay region.


2(b) Sampling Design

(i) Sampling Frame

The sampling frame for this survey is the United States Postal Service Computerized Delivery Sequence File (DSF), the standard frame for address-based sampling (Iannacchione, 2011; Link et al, 2008). The DSF is a non-duplicative list of residential addresses where U.S. postal workers deliver mail; it includes city-style addresses and P.O. boxes, and covers single-unit, multi-unit, and other types of housing structures, with known businesses excluded. In total, the DSF is estimated to cover 97% of residences in the U.S., with coverage gradually increasing over the last few years as rural addresses are converted to city-style, 911-compatible addresses. The universe of sample units is defined as this set of residential addresses, and hence is capable of reaching all individuals 18 years of age or older living at a residential address in the 17 target states and the District of Columbia. Samples from the DSF are obtained indirectly, as USPS cannot sell mailing addresses or otherwise provide access to the DSF. Instead, a number of sample vendors maintain their own copies of the DSF and, by verifying them with USPS, update the list quarterly. The sample vendors can also augment the mailing addresses with additional information (household demographics, landline phone numbers, etc.) from external sources.

For discussion of techniques that EPA will use to minimize non-response and other non-sampling errors in the survey sample, refer to Section 2(b)(II), below.


(ii) Sample Sizes


The target responding sample size for the main survey is 5,184 completed household surveys across two Bay condition reference years. This sample size was chosen to provide statistically robust regression modeling while minimizing the cost and burden of the survey. Given this sample size, the level of precision (see section 2(c)) achieved by the analysis will be more than adequate to meet the analytic needs of the benefits analysis for the Chesapeake Bay TMDLs. For further discussion of the level of precision required by this analysis, see Section 2(c)(i) below.

The sample design features three geographic strata based upon proximity to the Chesapeake Bay and its Watershed: Bay States, Watershed States, and East Coast States. Relative to the geographic distribution of households across the East Coast, EPA plans to over-sample households in states adjacent to the Chesapeake Bay and within the Chesapeake Bay Watershed. EPA believes this approach is appropriate because households in these areas will incur the costs of Chesapeake Bay water quality improvements and are the most likely to receive use-value benefits from them. Within each survey region, the household sample will be allocated in proportion to the geographic distribution of households across the states in the region. The target number of responding households in each region and state is given below (Table B1). For discussion of the required sample size by state, please refer to Attachment 15.


Table B1. Total Households and Expected Number of Completed Surveys for Each Study Region

| Sampling Stratum and State | Total Number of Households | Expected Number of Completed Surveys |
|---|---|---|
| Bay States Stratum, total | 5,479,176 | 1,728 |
| District of Columbia | 266,707 | 84 |
| Maryland | 2,156,411 | 680 |
| Virginia | 3,056,058 | 964 |
| Watershed States Stratum, total | 13,442,787 | 1,728 |
| Delaware | 342,297 | 44 |
| New York | 7,317,755 | 940 |
| Pennsylvania | 5,018,904 | 646 |
| West Virginia | 763,831 | 98 |
| Other East Coast States Stratum, total | 25,431,478 | 1,728 |
| Connecticut | 1,371,087 | 94 |
| Florida | 7,420,802 | 504 |
| Georgia | 3,585,584 | 244 |
| Maine | 557,219 | 38 |
| Massachusetts | 2,547,075 | 174 |
| New Hampshire | 518,973 | 34 |
| New Jersey | 3,214,360 | 218 |
| North Carolina | 3,745,155 | 254 |
| Rhode Island | 413,600 | 28 |
| South Carolina | 1,801,181 | 122 |
| Vermont | 256,442 | 18 |
| Total | 44,353,441 | 5,184 |

Source: U.S. Census Bureau (2012). 2010 Census Summary File 1. Retrieved May 31, 2012 from http://factfinder2.census.gov/.
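The state-level targets in Table B1 can be reproduced by allocating each stratum’s 1,728 expected completes in proportion to state household counts. A minimal sketch (the function name is illustrative; household counts are from Table B1):

    def allocate(stratum_total, households):
        """Proportional allocation of a stratum's completed-survey target
        across states, rounded to the nearest whole survey."""
        total = sum(households.values())
        return {st: round(stratum_total * n / total) for st, n in households.items()}

    bay = {"DC": 266_707, "MD": 2_156_411, "VA": 3_056_058}
    print(allocate(1728, bay))  # -> {'DC': 84, 'MD': 680, 'VA': 964}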




(iii) Stratification Variables


The population of households in the eastern United States is stratified by the geographic boundaries of three study regions in Table B1: states adjacent to Chesapeake Bay (“Bay States”), states which contain the Chesapeake Bay Watershed (“Watershed States”), and additional East Coast states (“East Coast States”). Bay States include MD, VA, and DC; Watershed States include DE, NY, PA, and WV; and East Coast States include VT, NH, NJ, MA, CT, RI, ME, NC, SC, GA, and FL.

The sample will be allocated in equal proportions of 33% per stratum, leading to the highest sampling rate in the Bay States stratum and the lowest sampling rate in the Other East Coast States stratum. The expected number of completed surveys in each stratum is presented in Table B2. This allocation is designed to minimize the variance of the main effects under an experimental design model, as introduced in Section 4(a) of Part A. As a result of stratification, the analysis will produce estimates of the geographic distribution of values for the Chesapeake Bay water quality improvements with greater precision.


(iv) Sampling Method

Using the stratification design discussed above, sample households will be randomly selected from the U.S. Postal Service DSF database. Assuming 92% of the sampled addresses are eligible and 30% of eligible households will return a completed mail survey, 18,783 households will be sampled from the DSF.

For obtaining population-based estimates of various parameters, each responding household will be assigned a sampling weight. The weights will be used to produce estimates that:

  • are generalizable to the population from which the sample was selected;

  • account for differential probabilities of selection across the sampling strata;

  • match the population distributions of selected demographic variables within strata; and

  • allow for adjustments to reduce potential non-response bias.

These weights combine:

  • a base sampling weight which is the inverse of the probability of selection of the household;

  • a within-stratum adjustment for differential non-response across strata; and

  • a non-response weight.

Post-stratification adjustments may be made to match the sample to known population values (e.g., from Census data).

There are various models that can be used for non-response weighting. For example, non-response weights can be constructed from estimated response propensities or from weighting class adjustments. Response-propensity methods treat non-response as a stochastic process in which the likelihood of non-response and the value of the survey variable have shared causes. The weighting class approach assumes that within a weighting class (typically demographically defined), non-respondents and respondents have the same or very similar distributions on the survey variables. If this model assumption holds, then applying weights to the respondents reduces the non-response bias in the estimator. Several factors, including the difference between the sample and population distributions of demographic characteristics and the plan for how to use weights in the regression models, will determine which approach is most efficient both for estimating population parameters and for the stated-preference modeling.
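As a concrete illustration of the weighting steps described above, the sketch below constructs base weights and a weighting-class non-response adjustment. The data layout and class definitions are hypothetical, and EPA’s actual estimation procedures may differ.

    def nonresponse_adjusted_weights(units):
        """units: list of dicts with keys 'p_select' (selection probability),
        'wclass' (weighting class label), and 'responded' (bool).
        Returns, for each responding unit, the base weight 1/p_select
        inflated by the inverse of the response rate in its class."""
        sampled, responded = {}, {}
        for u in units:
            c = u["wclass"]
            sampled[c] = sampled.get(c, 0) + 1
            if u["responded"]:
                responded[c] = responded.get(c, 0) + 1
        return [
            (1.0 / u["p_select"]) * (sampled[u["wclass"]] / responded[u["wclass"]])
            for u in units if u["responded"]
        ]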

To estimate total value for the quantified environmental benefits of the Chesapeake Bay TMDLs, data will be analyzed statistically using a standard random utility model framework.
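In outline, and using a standard conditional logit form of the random utility framework named above (the specific functional form here is illustrative, not a specification unique to this study), utility for respondent $i$ and option $j$ is

$$U_{ij} = \beta' x_{ij} - \lambda\, c_{ij} + \varepsilon_{ij}, \qquad \Pr(i \text{ chooses } j) = \frac{\exp(\beta' x_{ij} - \lambda c_{ij})}{\sum_{k}\exp(\beta' x_{ik} - \lambda c_{ik})},$$

where $x_{ij}$ collects the environmental attributes, $c_{ij}$ is the cost-of-living increase, and $\varepsilon_{ij}$ is i.i.d. type I extreme value; marginal willingness to pay for attribute $m$ is then $\beta_m/\lambda$.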


(v) Multi-Stage Sampling


Multi-stage sampling will not be necessary for this survey.


2(c) Precision Requirements

(i) Precision Targets

Table B2 presents expected sample sizes for each geographic stratum. The maximum acceptable sampling error for predicting response probabilities (i.e., the likelihood of choosing a given alternative) in the present case is ±10%, assuming a true response probability of 50% associated with a utility indifference point. Given the survey population size, this level of precision requires a minimum sample size of approximately 96 observations. The number of observations (i.e., completed surveys) required to obtain large-sample properties for the choice experiment design provides more than sufficient observations to obtain this required precision for population parameters. For each Bay condition reference year across all regions, a sample of 2,592 households (completed surveys) will provide estimates of population percentages with a level of precision ranging from 1.1% at the 50% incidence level to 0.7% at the 10% incidence level (Table B2).
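The minimum sample size cited above follows from the standard normal-approximation formula for a proportion, n = z^2 p(1 - p) / E^2. A quick check with the stated values (the function name is illustrative):

    def min_sample_size(p=0.50, margin=0.10, z=1.96):
        """Sample size needed to estimate a proportion p to within +/- margin
        at 95% confidence (normal approximation)."""
        return z**2 * p * (1 - p) / margin**2

    print(round(min_sample_size()))  # -> 96, the minimum cited above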


Table B2. Sample size and accuracy projections

| Geographic division | Population size | Expected sample size (completed surveys) | Expected weights | Standard error, 50% incidence | Standard error, 10% incidence |
|---|---|---|---|---|---|
| Bay States | 5,479,176 | 1,728 | 3,171 | 0.017 | 0.010 |
| Watershed | 13,442,787 | 1,728 | 7,779 | 0.017 | 0.010 |
| Other East Coast | 25,431,478 | 1,728 | 14,717 | 0.017 | 0.010 |
| Overall | 44,353,441 | 5,184 | 8,556 | 0.011 | 0.007 |

Source for household population size: U.S. Census Bureau (2012). 2010 Census Summary File 1. Retrieved May 31, 2012 from http://factfinder2.census.gov/. The margin of error is 1.96 times the standard error.


(ii) Power analysis

Power analysis in this section is performed for a one-sample t-test of proportions for the study as a whole. The accuracy of the WTP estimates, and hence the power to detect differences in WTP, depends on the true values of the parameters of the logistic model used in WTP estimation, and therefore can only be assessed post hoc, after the parameter estimates are obtained. Given the stratified nature of the survey, the variance of the estimated proportion underlying a z-test statistic comparing the null incidence $p_0$ with the alternative incidence $p_1$ is

$$\mathrm{Var}(\hat{p}) = (1 + CV^2)\sum_{h=1}^{3} W_h^2\,\frac{p(1-p)}{n_h},$$

where $CV$ is the coefficient of variation of post-stratified weights within a stratum (so that $1 + CV^2$ is the design effect due to variable weights, or “DEFF”), $n_h = 864$ is the target number of completed surveys for a given stratum for each of the two Bay condition reference years, and $W_h$ is the proportion of the population in stratum $h = 1$ (Bay States), $2$ (Watershed), $3$ (Other East Coast); from the population counts in Table B2,

$$W_1 = 0.124, \qquad W_2 = 0.303, \qquad W_3 = 0.573.$$

For a given power level (e.g., 80%), the detectable effect size can be determined by solving

$$z_{1-\alpha/2}\,\sqrt{\mathrm{Var}_{p_0}(\hat{p})} + z_{\mathrm{power}}\,\sqrt{\mathrm{Var}_{p_1}(\hat{p})} = \lvert p_1 - p_0\rvert$$

for $p_1$. This is an extension of the standard power analysis to stratified samples.


Table B3 lists detectable effect sizes using the most typical values for significance level (α = 5%) and power (80%), and for various scenarios concerning the variability of weights within strata (which will be caused by differential non-response). Across these scenarios, the margin of error on an estimate of population proportions ranges from 1.72 to 2.84 percentage points at the 95% confidence level.


Table B3. Power analysis

| Effect size detectable with power 80% by a test of size 5% | p0 = 50% | p0 = 10% |
|---|---|---|
| CV = 0.32, within-strata DEFF due to weights = 1.1 | 2.61% | 1.58% |
| CV = 0.55, within-strata DEFF due to weights = 1.3 | 2.84% | 1.72% |




(iii) Non-Sampling Errors


Several non-sampling errors may be encountered in stated preference surveys. First, protest responses can occur when individuals reject the survey format or question design, even though they may value the resources being considered (Mitchell and Carson 1989). To help identify protest responses, EPA has included several survey debriefing questions, e.g., whether the respondent would oppose any government program that imposes more regulation and spending (see section 4(b)(i) in Part A of this ICR for details). The use of such methods to identify protest responses is well-established in the literature (Bateman et al. 2002). Moreover, researchers (e.g., Bateman et al. 2002) suggest that a choice experiment format, such as that proposed here, may ameliorate such responses (as opposed to, say, a contingent valuation format).

Non-response bias is another type of non-sampling error that can potentially occur in stated preference surveys. Non-response bias can occur when households choose not to participate in a survey (i.e., not return the mail survey, in this case) or do not answer all relevant questions on the survey instrument. EPA has designed the survey instrument to maximize the response rate. EPA will also follow Dillman’s (2008) mail survey approach (see subsection 4(b) for details). If necessary, EPA will use appropriate weighting or other statistical adjustments to correct for any bias due to non-response.

To determine whether there is any evidence of significant non-response bias in the completed sample, EPA will conduct a non-response bias study. This will enable EPA to identify potential differences between respondents to the mail survey and those who received a questionnaire but did not return it.


Non-response Bias Study

In order to ascertain if and how respondents and non-respondents differ, EPA will conduct a non-response bias study in which a short survey will be administered to a random sample of households that receive the main survey but do not complete and return it. The short questionnaire will ask a few awareness, attitudinal, and demographic questions that can be used to statistically examine differences, if any, between respondents and non-respondents. It will take respondents about 5 minutes to complete the non-response bias study survey. The short questionnaire will be implemented using a first-class mailing. The samples for the non-response follow-up will be allocated proportionately to the number of original mailings in each geographic division (state).

Previous survey research (Dillman 2009) suggests that prepaid financial tokens are one of the most effective contributors to an increased response rate. It has been demonstrated that a financial token may pull in respondents who might otherwise not be interested in participating in the survey (Groves et al. 2006), an issue that is of particular relevance to non-response bias. Moreover, Dillman (2009, p. 241) explains that a financial token paid upfront, rather than after survey completion, turns the action from a financial exchange into a social exchange, overcoming the problem of establishing a price point for completion of a survey. Two dollars has been shown to be one of the most effective incentive levels and is widely used in survey implementation (Lesser et al. 2001). Therefore, the mailing will include $2 in cash as an unconditional incentive for completion of the short questionnaire to encourage response from this population.

With this additional incentive, EPA expects a 20% response rate. Based on the response rate observed in the non-response survey pretest study, EPA will send the non-response survey via first-class mail to enough households to obtain the expected target sample of 900 completed non-response questionnaires for both Bay condition reference years combined. The return envelope will be imprinted with a stamp requesting the recipient to “Please return within 2 weeks.” Table B4 illustrates the target sample size of the non-response survey across survey regions.


Table B4: Expected Number of Completed Non-response Bias Study Surveys

  Region                Number of Households Expected to Return
                        Non-Response Bias Study Survey
  Bay                   360
  Watershed             360
  Other East Coast      360
  Total                 1,080
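
The mail-out arithmetic implied by the 20% expected response rate can be checked with a short sketch. The 900-complete target and the response rate come from the text above; the equal regional shares are a simplifying assumption for illustration.

    import math

    # Back-of-envelope sizing of the non-response bias study mail-out.
    # The 900-complete target and 20% response rate come from the text;
    # the equal regional shares are an illustrative assumption.

    def mailings_needed(target_completes, response_rate):
        return math.ceil(target_completes / response_rate)

    total = mailings_needed(900, 0.20)  # 4,500 first-class mailings
    shares = {"Bay": 1 / 3, "Watershed": 1 / 3, "Other East Coast": 1 / 3}
    allocation = {region: round(total * s) for region, s in shares.items()}
    print(total, allocation)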


For each of the two Bay condition reference years, EPA will statistically compare the mean responses of those who complete the main survey to individuals who complete the non-response survey. The target sample size of 900 completed non-response surveys across both reference years (or 450 per reference year) will enable EPA to reject the hypothesis of no difference in population percentages between respondents and non-respondents with 80% power when there is a difference of 10.1% according to a two-sided statistical test at the base incidence of 50%, or a difference of 6.9% at the base incidence of 10%.
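
The detectable-difference calculation behind these figures follows the standard two-sample formula. The sketch below is illustrative only: the non-respondent group size (450) comes from the text, while the respondent group size is a hypothetical placeholder, so the output approximates rather than reproduces the 10.1% and 6.9% figures.

    import math

    Z_ALPHA = 1.96  # 5% two-sided test
    Z_BETA = 0.84   # 80% power

    def detectable_difference(p0, n1, n2):
        """Approximate smallest difference in proportions detectable with
        80% power by a 5% two-sided test at base incidence p0."""
        se = math.sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))
        return (Z_ALPHA + Z_BETA) * se

    # 450 non-response completes per reference year (from the text);
    # 2,592 main-survey completes per reference year is an assumed figure.
    for p0 in (0.50, 0.10):
        print(f"p0 = {p0:.0%}: {detectable_difference(p0, 2592, 450):.1%}")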

A subset of the questions from the main questionnaire (see section 2(d)) was selected for the non-response bias study survey, as discussed below.

  • Familiarity with the Chesapeake Bay Watershed and Watershed Issues (pg. 2). After a brief introduction, four questions are presented that inquire about individuals’ awareness and use of the Chesapeake Bay and Watershed lakes. Question 1 asks whether an individual has previously heard of the Chesapeake Bay. Questions 2 and 3 inquire about participants’ use of and familiarity with the Chesapeake Bay and Watershed lakes. Question 4 is meant to capture awareness of water quality issues caused by nutrients and sediments. It is likely that awareness and use of an environmental commodity are correlated with individuals’ willingness to pay (WTP) for improvements (e.g., Johnston et al. 2005), and so it is important to assess whether there are systematic differences in these responses across respondents to the main survey and those to the non-response follow-up questionnaire.

  • Attitudes towards Environment and Regulations (pg. 3). The items in question 5 inquire about attitudes toward water quality improvements in the Chesapeake Bay Watershed, costs to one’s household, and government regulations. Comparing responses to these questions across the main survey study and the non-response bias study will allow EPA to assess whether non-respondents did not complete the main survey for reasons related to the survey topic. In contrast, the last item in question 5 inquires about respondents’ ability or propensity to take surveys in general; comparing responses to this item will help EPA assess whether non-response was related to factors that are likely uncorrelated with WTP.

  • Demographics (pg. 4). By including demographic questions in both the survey and non-response follow-up survey, statistical comparisons of household characteristics can be made across the samples of responding and non-responding households. These data can also be compared to household characteristics from the sample frame population, which are available from the 2010 Census.

EPA will use two-sided statistical tests to compare responses across the sample of respondents to the main survey and those who completed the non-response questionnaire. There are two types of biases that can arise from non-respondents (Whitehead 2006). The statistical comparisons above will allow EPA to assess the presence of non-response bias, and if found, a weighting procedure, as discussed in section (iv) Sampling Method, can be applied. An inherent assumption in this weighting approach is that, within weight classes, respondents and non-respondents are similar. This assumption may not hold in the presence of selection bias, in which respondents and non-respondents differ due to unobserved influences associated with the survey topic itself, and perhaps correlated with WTP. EPA will assess the potential for selection bias by comparing responses to the familiarity and attitude questions discussed above. If the results of such comparisons suggest a potential selection bias, EPA will examine and discuss the implications for WTP estimates.
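
As an illustration of the two-sided comparisons described above, the sketch below tests a single proportion across the two groups using the two-sample z-test in statsmodels; all counts are hypothetical.

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts: respondents vs. non-respondents answering "yes"
    # to a familiarity item (e.g., having heard of the Chesapeake Bay).
    yes_counts = np.array([2100, 310])   # main survey, non-response survey
    group_sizes = np.array([2592, 450])  # completed questionnaires per group

    z_stat, p_value = proportions_ztest(yes_counts, group_sizes, alternative="two-sided")
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
    # A small p-value would flag a systematic difference on this item and
    # motivate the weighting adjustments discussed above.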



2(d) Questionnaire Design


The information requested by the survey is discussed in Section 4(b)(i) of Part A of the supporting statement. The full texts of the six draft questionnaire versions are provided in Attachments 1 through 6.

Several categories of questions are included in the survey. The reasons for including each of these categories are discussed below:

  • Familiarity with the Chesapeake Bay Watershed and Watershed Issues (pgs. 1 to 2). Responses to these questions provide information on whether respondents visited or viewed the Chesapeake Bay and lakes in the Watershed. These questions will allow EPA to identify respondents as users of the resource or non-users. Additionally, respondents who have increased contact with the aquatic resources are expected to have higher values for improvements in environmental conditions (Johnston et al. 2005). The questions in this section will identify whether respondents’ recreational experiences have included the Chesapeake Bay or lakes in the Watershed or both. This section also provides respondents with some background information on nutrient and sediment pollution issues in the Chesapeake Bay Watershed. Responses to question 4 will be used to assess respondents’ a priori knowledge about these issues, and potentially test for knowledge effects on respondent answers to the program choice questions. Respondents who are more familiar with these pollutants may have different values, although the direction of such effects is unknown a priori.

  • Current and Future Conditions in the Chesapeake Bay and Watershed Lakes (pgs. 3 to 4). In this section respondents are presented with the environmental attributes that are applied in the later choice questions: bay water clarity; striped bass, blue crab, and oyster populations; and the number of Watershed lakes with relatively low algae levels. Respondents are given the current levels of these attributes, along with the levels predicted for either 2025 or 2040 under one of three baseline assumptions: declining, constant, or improving. Responses to question 5 gauge respondents’ expectations regarding future environmental conditions in the Chesapeake Bay and Watershed lakes (absent additional pollution control actions).

  • Additional Pollution Programs for the Chesapeake Bay Watershed (pgs. 5 to 6). This section introduces the concept that additional pollution reduction programs are being considered. Question 6 inquires about respondents’ knowledge of whether they currently pay any environmentally related taxes or fees. Responses to this question can be used to examine whether respondents already paying for environmental improvements are less willing to pay additional amounts for further improvements. This section also provides text introducing the payment vehicle for the later choice questions, along with additional text emphasizing the consequentiality of one’s choices. To reduce the potential for omitted variable bias, this section also provides text encouraging respondents to consider only the environmental outcomes presented in the choice questions.

  • Deciding Future Actions (pgs. 7 to 8). This section provides instructions and an example of how to respond to the choice questions. This section also includes “cheap talk” text emphasizing that even though this exercise is hypothetical, respondents should consider their choices as if they were real.

  • Voting for Programs to Improve the Condition of Chesapeake Bay and lakes in the Watershed (pgs. 9 to 11). The questions in this section are the key component of the survey. Respondents’ choices among alternatives with specific environmental quality improvements and household cost increases are the main data that allow estimation of willingness to pay. The questions are presented in a choice experiment, where respondents choose their preferred option among option A (status quo), option B, and option C. A split sample design will be employed in which one of two reference years for environmental conditions will be used (2025 or 2040). Results will be compared across reference years to determine their effect on values for each of the environmental outcomes. This elicitation format has been successfully used by a number of previous valuation studies (e.g., Adamowicz et al. 1998; Bateman et al. 2002; Bennett and Blamey 2001; Louviere et al. 2000; Johnston et al. 2002a, 2005, 2012; Opaluch et al. 1993).

  • Debriefing Questions (pg. 12). These questions ask respondents why they chose certain options over others, and whether they accepted the hypothetical scenario when making their choices. These questions will help to identify respondents who incorrectly interpreted the choice questions or did not believe the choice scenarios to be credible. In other words, the responses to these questions will be used to identify potentially invalid responses, such as: protest responses (e.g., protesting any government program), scenario rejection, omitted variable considerations (e.g., availability and/or price of seafood, economy and employment, recreation in lakes outside the Watershed, etc.), and symbolic (warm glow) responses. Some questions in this section are also included in the non-response bias survey, which allows for an examination of whether there are any statistical differences between respondents and non-respondents. The responses to some of the questions will also be used to identify motivations behind respondents’ choices, including altruism, option value, and bequest motives.

  • Recreational Experience and Time Preferences (pg. 13). These questions elicit recreational experience data to test whether certain respondent characteristics influence responses to the choice questions. These questions will also allow EPA to identify resource non-users for comparison to users of the Bay and Watershed lakes. In so doing, we will be able to gauge the relative importance of user status on willingness to pay estimates and ultimately on overall benefits. The questions will also identify frequency of use and whether the respondent’s recreational experiences have included lakes in the Watershed, the Chesapeake Bay itself, or both. Question 15 is included to elicit information on respondents’ internal discount rate.

  • Demographics (pg. 14). Responses to these questions will be used to estimate the influence of demographic variables on respondents’ voting choices, and ultimately, their values to improve environmental quality in the Chesapeake Bay and lakes in the Watershed. By including this page in both the survey and the non-response follow-up survey, statistical comparisons of household characteristics can be made across the samples of responding and non-responding households. These data can also be compared to household characteristics from the sample frame population, which are available from the 2010 Census.


3. Pretests


EPA conducted extensive testing of the survey instrument during a set of 10 focus groups and 72 cognitive interviews (OMB Control Number 2090-0028). Individuals in these focus groups participated in discussions about the Chesapeake Bay and its Watershed. They also completed draft survey questionnaires and provided comments and feedback about the survey format and content, their interpretations of the questions, and other issues relevant to stated preference estimation. Individual cognitive interviews with survey respondents were conducted using think-aloud or verbal protocol analyses (see Schkade and Payne 1994 for a discussion). These discussions were used to develop a survey that provides respondents with the necessary information to complete the questionnaire, develop choice scenarios that are incentive compatible, and minimize the burden placed on respondents while collecting the necessary information. Particular emphasis in these survey discussions was placed on testing for the presence of potential biases associated with poorly designed stated preference surveys, including hypothetical bias, strategic bias, symbolic (warm glow) bias, framing effects, embedding biases, methodological misspecification, and protest responses (Mitchell and Carson 1989). Based on focus group and cognitive interview responses, EPA made various improvements to the questionnaire, including changes to ameliorate and minimize these biases in the final survey instrument. Participants in the focus group discussions and protocol interviews were offered an incentive of $75 for their participation.

EPA intends to implement this survey in two stages: a pretest and a main study. First, EPA will administer the pretest to a sample of 3,261 households using a mail survey and the Dillman Tailored Design Method (Dillman 2008). Assuming 92% of the sampled addresses are eligible and 30% of eligible households return the survey (see Helm 2012; Mansfield, et al. 2012; Johnston, et al. 2012), EPA estimates that this will result in 900 returned and completed pretest surveys. Households in the pretest will be selected from each of the three geographic strata. Each selected household will be sent one version of the survey to complete (improving, constant, or declining baseline; 2025 or 2040 reference year for conditions) and will be assigned that version on a random basis. Responses and preliminary findings from this pilot study will be used to inform EPA regarding response rates and the quality of survey data. EPA will evaluate pilot responses and determine whether any changes to the survey instruments or implementation approach are needed before proceeding with the administration of the main survey.

EPA will use results from the pretest to validate the survey design. Specifically, the pretest results will be used to:

  • Compare the actual and expected response rates. Based on typical mail survey response rates for surveys of this type, the expected response rate is approximately 30% (Helm 2012; Mansfield, et al. 2012; Johnston, et al. 2012).

  • Assess whether demographic characteristics of the respondents are significantly different from the average demographic characteristics in the study region.

  • Examine the proportion of respondents choosing the status quo. If no one chooses the status quo, it often indicates that the cost levels are too low. Pure random selection would result in 33% of survey respondents choosing the status quo. If fewer than 15–20% of respondents choose the status quo in the pilot study, EPA would consider increasing the cost levels.

  • Identify unusual patterns, such as the vast majority of respondents always choosing Option B (e.g., if two-thirds of respondents choose Option B, it might indicate a systematic bias).

  • Determine whether responses suggest that appropriate tradeoffs are being made, and that protest and other invalid responses are minimal.

  • Examine response rates for individual survey questions and evaluate whether adjustments to survey questions are required to promote a higher response rate.

If required, EPA will make the appropriate adjustments to the questionnaire, sampling frame, or attribute levels (e.g., increase or reduce the number of surveys mailed to households, or increase costs to households in the choice questions).


4. Collection Methods and Follow-up

4(a) Collection Methods


The survey will be administered as a mail survey. Respondents will be asked to mail the completed survey back to EPA. Several considerations justify the use of a mail survey over alternative modes (i.e., telephone and internet surveys, and in-person interviews). Foremost, EPA believes respondents will face fewer obstacles in completing the mail survey compared to other collection methods. In particular, the format of key survey questions (i.e., the multi-attribute choice experiments) necessitates a visually oriented survey, and possible substitutes to a mail survey (i.e., in-person interviews) are cost-prohibitive given the target responding sample size. Additionally, best practices for mail survey administration are well established and widely accepted (Dillman 2008); EPA has designed its collection methods using these best practices and believes the mail survey will be more accessible to respondents than other, less well-tested methods (i.e., internet surveys).



4(b) Survey Response and Follow-up


The estimated response rate for the mail survey is 30% (Helm 2012; Mansfield, et al. 2012; Johnston, et al. 2012). That is, 30% of the eligible households that are sent the mail survey are expected to return a completed survey. To obtain the highest response rate possible, EPA will follow Dillman’s (2008) mail survey approach. A preview letter will be sent prior to a household receiving the survey (Attachment 8). The survey will then be sent, accompanied by a cover letter (Attachment 9), explaining the purpose of the survey, emphasizing the importance of their responses, and reminding the respondent whether they live inside or outside of the Chesapeake Bay Watershed. To improve the response rate, all of these households will receive a reminder postcard approximately one week after the initial questionnaire mailing (Attachment 10). Then, approximately three weeks after the reminder postcard, all those who have not responded will receive a second copy of the questionnaire with a revised cover letter (Attachment 11). The following week, a letter reminding them to complete the survey will be sent (Attachment 12).




5. Analyzing and Reporting Survey Results

5(a) Data Preparation


Since the survey will be administered as a mail survey, survey responses will be entered into an electronic database after they are returned. EPA will also clean the data to ensure that they are entered in a consistent manner and that any inconsistencies are addressed. Specifically, we will use the Double Entry data entry method for closed-ended responses. The Double Entry method consists of data being keyed twice and compared. Discrepancies are reconciled upon completion of the second entry. After all responses have been entered, the database contents will be converted into a format suitable for use with a statistical analysis software package.
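
A minimal sketch of the double-entry comparison using pandas is shown below; the two small data frames stand in for the two keying passes, and the identifiers and field names are hypothetical.

    import pandas as pd

    # The same returned questionnaires are keyed twice; cell-level
    # disagreements are flagged for manual reconciliation.
    pass1 = pd.DataFrame({"q1": [1, 2, 3], "q2": [4, 5, 6]},
                         index=pd.Index([101, 102, 103], name="survey_id"))
    pass2 = pd.DataFrame({"q1": [1, 2, 3], "q2": [4, 7, 6]},
                         index=pd.Index([101, 102, 103], name="survey_id"))

    # DataFrame.compare returns only the cells where the passes disagree,
    # labeled 'self' (pass 1) and 'other' (pass 2).
    discrepancies = pass1.compare(pass2)
    print(f"{len(discrepancies)} record(s) need reconciliation")
    print(discrepancies)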


5(b) Analysis


Once the survey data has been converted into a data file, it will be analyzed using statistical analysis techniques. The following section discusses the model that will be used to analyze the stated preference data from the survey.


Analysis of Stated Preference Data

The model for analysis of stated preference data is grounded in the standard random utility model of Hanemann (1984) and McConnell (1990). This model is applied extensively within stated preference research, and allows well-defined welfare measures (i.e., willingness to pay) to be derived from choice experiment models (Bennett and Blamey 2001; Louviere et al. 2000). Within the standard random utility model applied to choice experiments, hypothetical program alternatives are described in terms of attributes that focus groups reveal as relevant to respondents’ utility, or well-being (Johnston et al. 1995; Adamowicz et al. 1998; Opaluch et al. 1993). One of these attributes would include a monetary cost to the respondent’s household.

Applying this standard model to choices among programs to improve environmental quality in the Chesapeake Bay, a standard utility function Ui(.) includes environmental attributes of pollution reduction programs and the net cost of the program to the respondent. Following standard random utility theory, utility is assumed known to the respondent, but stochastic from the perspective of the researcher, such that:


(1) Ui(.) = U(Xi, D, Y-Fi) = v(Xi, D, Y-Fi) + εi


where:

Xi = a vector of variables describing attributes of pollution reduction program i and the baseline conditions if no further action is taken;

D = a vector characterizing demographic and other attributes of the respondent;

Y = disposable income of the respondent;

Fi = mandatory additional cost faced by the household under program i;

v(.) = a function representing the empirically estimable component of utility;

εi = stochastic or unobservable component of utility, modeled as an econometric error.


Econometrically, a model of such a preference function is obtained by methods designed for limited dependent variables, because researchers only observe the respondent’s choice among alternative programs, rather than observing values of Ui(.) directly (Maddala 1983; Hanemann 1984). Standard random utility models are based on the probability that a respondent’s utility from program i, Ui(.), exceeds the utility from alternative programs j, Uj(.), for all potential programs j ≠ i considered by the respondent. In this case, the respondent’s choice set of potential programs also includes maintaining the status quo. The random utility model presumes that the respondent assesses the utility that would result from each pollution reduction program i (including the “No Action” or status quo option), and chooses the program that provides the highest utility.

When faced with k distinct programs defined by the program attributes, the respondent will choose program i if the anticipated utility from program i exceeds that of all other k-1 programs. Drawing from (1), the respondent will choose program i if:


(2) v(Xi, D, Y-Fi) + εi ≥ v(Xj, D, Y-Fj) + εj  ∀ j ≠ i.


If the εi are assumed independently and identically drawn from a type I extreme value (Gumbel) distribution, the model may be estimated as a conditional logit model, as detailed by Maddala (1983) and Greene (2003). This model is most commonly used when the respondent considers more than two options in each choice set (e.g., Program A, Program B, No Further Action), and results in an econometric (empirical) estimate of the systematic component of utility v(.), based on observed choices among different programs. Based on this estimate, one may calculate welfare measures (willingness to pay) following the well-known methods of Hanemann (1984), as described by Freeman (2003). Following standard choice experiment methods (Adamowicz et al. 1998; Bennett and Blamey 2001), each respondent will consider questions including three potential choice options (i.e., Option A [status quo], Option B, Option C)—choosing the program that provides the highest utility as noted above. Following clear guidance from the literature, a “no further action” or status quo option is always included in the visible choice set, to ensure that WTP measures are well-defined (Louviere et al. 2000).
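
The estimation step can be sketched compactly. The code below is a simplified stand-in, not EPA's estimation code: it simulates three-alternative choice data consistent with the conditional logit model, maximizes the log-likelihood with scipy, and recovers marginal WTP as the negative ratio of each attribute coefficient to the cost coefficient.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal conditional logit sketch for three-alternative choice sets.
    # X has shape (choices, 3 alternatives, attributes); the last column is
    # household cost. All data here are simulated placeholders.

    def neg_log_likelihood(beta, X, chosen):
        v = X @ beta                               # systematic utility v(.)
        v = v - v.max(axis=1, keepdims=True)       # numerical stability
        log_probs = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(chosen)), chosen].sum()

    rng = np.random.default_rng(0)
    n, k = 500, 6                                  # 5 attributes + cost
    X = rng.normal(size=(n, 3, k))
    true_beta = np.array([0.5, 0.4, 0.3, 0.3, 0.2, -0.8])
    chosen = (X @ true_beta + rng.gumbel(size=(n, 3))).argmax(axis=1)

    fit = minimize(neg_log_likelihood, np.zeros(k), args=(X, chosen), method="BFGS")
    wtp = -fit.x[:-1] / fit.x[-1]                  # marginal WTP per attribute
    print(np.round(wtp, 2))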

Three choice questions are included within the same survey to increase information obtained from each respondent. This is standard practice within choice experiment and dichotomous choice contingent valuation surveys (Poe et al. 1997; Layton 2000). While respondents will be instructed to consider each choice question as independent of other choice questions, it is nonetheless standard practice within the literature to allow for the potential of correlation among questions answered within a single survey by a single respondent. That is, responses provided by individual respondents may be correlated even though responses across different respondents are considered independent and identically distributed (Poe et al. 1997; Layton 2000; Train 1998).

There are a variety of approaches to accommodate such potential correlation. Models to be assessed include random effects and random parameters (mixed) discrete choice models, common in the stated preference literature (Greene 2003; McFadden and Train 2000; Poe et al. 1997; Layton 2000). Within such models, selected elements of the coefficient vector are assumed normally distributed across respondents, often with free correlation allowed among parameters (Greene 2002). If only the model intercept is assumed to include a random component, then a random effects model is estimated. If both slope and intercept parameters may vary across respondents, then a random parameters model is estimated. Such models will be estimated using standard maximum likelihood methods for mixed conditional logit, as described by Train (1998), Greene (2002), and others. The performance of alternative mixed logit specifications will be assessed using standard statistical measures of model fit and convergence, as detailed by Greene (2002, 2003) and Train (1998).
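
A random-parameters extension replaces one or more fixed coefficients with a distribution and maximizes a simulated likelihood. The sketch below, again on simulated placeholder data, lets the cost coefficient vary normally across choices; it is cross-sectional for brevity, whereas a panel version would hold each respondent's draw fixed across their three questions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n, k = 500, 6
    X = rng.normal(size=(n, 3, k))
    chosen = (X @ np.array([0.5, 0.4, 0.3, 0.3, 0.2, -0.8])
              + rng.gumbel(size=(n, 3))).argmax(axis=1)
    draws = rng.standard_normal(100)   # simulation draws for the random coefficient

    def neg_sim_loglik(theta):
        # theta: fixed coefficients, plus mean and log-std of the random cost coefficient
        beta_fixed, mu, log_sigma = theta[:-2], theta[-2], theta[-1]
        probs = np.zeros(n)
        for d in draws:                # average choice probabilities over draws
            beta = np.append(beta_fixed, mu + np.exp(log_sigma) * d)
            v = X @ beta
            v = v - v.max(axis=1, keepdims=True)
            p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
            probs += p[np.arange(n), chosen]
        return -np.log(probs / len(draws)).sum()

    fit = minimize(neg_sim_loglik, np.zeros(k + 1), method="BFGS")
    print(np.round(fit.x, 2))          # fixed betas, mu, log-sigma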


Advantages of Choice Experiments

Choice experiments (also called choice modeling) following the random utility model outlined above are favored by many researchers over other variants of stated preference methodology (Adamowicz et al. 1998; Bennett and Blamey 2001), and may be viewed as a “natural generalization of a binary discrete choice CV [contingent valuation]” (Bateman et al. 2002, p. 271). Advantages of choice experiments include a capacity to address choices over a wide array of potential policies, grounded in well-developed random utility theory, and the similarity of the discrete choice context to familiar referendum or voting formats (Bennett and Blamey 2001). Compared to other types of stated preference valuation, choice experiments are better able to measure the marginal value of changes in the characteristics or attributes of environmental goods, and avoid response difficulties and biases (Bateman et al. 2002). For example, choice experiments may reduce the potential for ‘yea-saying’ and symbolic biases (Blamey et al. 1999; Mitchell and Carson 1989), as many pairs of multi-attribute policy choices (e.g., Option A [status quo], Option B, Option C) will offer no clearly superior choice for a respondent wishing to express solely symbolic environmental motivations. For similar reasons, choice experiments may ameliorate protest responses (Bateman et al. 2002).

Choice experiments are well-established in the stated preference literature (Adamowicz et al. 1998; Bennett and Blamey 2001; Louviere et al. 2000) and are commonly applied to assess values for ecological resource improvements of a type quite similar to those at issue in Chesapeake Bay TMDLs. Examples of the application of choice experiments to estimate values associated with changes in aquatic environmental quality and habitat include Hoehn et al. (2004), Johnston et al. (2002b), and Opaluch et al. (1999), among others. EPA has drawn upon these and other examples of successful choice experiment design to provide a basis for survey design in the present case.

Additionally, choice experiments permit a straightforward assessment of the impact of resource scope and scale on respondents’ choices. This will enable EPA to easily conduct scope tests and other assessments of the validity of survey responses (Bateman et al. 2002, pp. 296-342).

A final key advantage of choice experiments in the present application is the ability to estimate respondents’ values for a wide range of different potential outcomes of Chesapeake Bay pollution reduction programs, differentiated by their attributes. The proposed choice experiment survey versions will allow respondents to choose among a wide variety of hypothetical program options, some with larger and others with smaller changes in the presented attributes (including household cost). That is, because the survey is to be implemented as a choice experiment survey, levels of attributes in choice scenarios will vary across respondents (Louviere et al. 2000).

The ability to estimate values for a wide range of different policy outcomes is a fundamental property of the choice experiment method (Bateman et al. 2002; Louviere et al. 2000; Adamowicz et al. 1998). The experimental design (see below) will allow for survey versions showing a range of different baseline and resource improvement levels, where these levels are chosen to (almost certainly) bound the levels expected under actual pollution reduction programs. Given that there will almost certainly be some uncertainty regarding the specifics of the actual baselines and improvements, the resulting valuation estimates will allow flexibility in estimating values for a wide range of circumstances.


Comment on Survey Preparation

Following standard practice in the stated preference literature (Johnston et al. 1995; Desvousges and Smith 1988; Desvousges et al. 1984; Mitchell and Carson 1989), all survey elements and methods were subjected to extensive development and pretesting in focus groups to ameliorate the potential for survey biases (Mitchell and Carson 1989), and to ensure that respondents have a clear understanding of the policies and goods under consideration, such that informed choices may be made that reflect respondents’ underlying preferences. Following the guidance of Arrow et al. (1993), Johnston et al. (1995), and Mitchell and Carson (1989), focus groups were used to ensure that respondents are aware of their budget constraints and the scope of the environmental quality improvements under consideration.

Survey development also included individual cognitive interviews conducted using think-aloud or verbal protocol analyses (Schkade and Payne 1994). Individuals in these interviews completed draft survey questionnaires and provided comments and feedback about the survey format and content, their interpretations of the questions, and other issues relevant to stated preference estimation. Based on their responses, EPA made various improvements to the survey questionnaire, including how the attributes are described and labeled, the inclusion of an example choice question before asking respondents to complete their own, and the appearance of the choice questions. Results from focus groups and cognitive interviews provided evidence that respondents answer the stated preference survey in ways appropriate for stated preference WTP estimation, and that respondents were evaluating trade-offs between program attributes and the household cost. The number of focus groups and cognitive interviews used in survey design, 10 focus groups and 72 cognitive interviews, exceeds the numbers used in typical applications of stated preference valuation. Moreover, EPA incorporated cognitive interviews as detailed by Kaplowitz et al. (2004).

EPA will pretest the mail survey. Making the same assumptions about eligibility and response rates as for the main survey (i.e., 92% and 30%, respectively), EPA will mail out 3,261 pre-test surveys (181 or 182 to each of the 18 cells of the main survey, with a target of 50 completes per cell). The main goal of the pretest is to assess whether the questionnaire is likely to produce the quality data necessary to estimate willingness to pay for water quality improvements in the Chesapeake Bay and lakes in the Watershed. More specifically, EPA will use pretest results to (1) assess respondents’ ability to understand background information and respond to the choice questions; (2) evaluate the potential for protest responses, warm glow effects, and hypothetical bias; (3) verify the expected response rate; and (4) evaluate the range and levels chosen for the cost attribute.

To test the feasibility of the non-response follow-up study in the main survey, as proposed in Section 2(c)(iii), a small-scale non-response bias study will be conducted after the pre-test. EPA anticipates mailing a copy of the pilot non-response bias study survey to 900 households. These households will be randomly selected from the set of non-responding households from the pre-test, defined as those which did not return a completed survey response, and for which EPA did not receive a USPS delivery failure notification. EPA will send to each of the selected households a non-response survey via first-class mail (including the $2 incentive). The package will be imprinted with a stamp requesting the recipient to “Please return within 2 weeks.” Assuming a 20% response rate, this implies approximately 180 completed questionnaires for the pilot non-response bias study. The results of the pre-test non-response bias study will be used to refine the non-response questionnaire and to assess the expected response rate among non-respondents.


Econometric Specification

Based on prior focus groups, expert review, and attributes of the policies under consideration, EPA anticipates that five attributes will be incorporated in the vector of variables describing attributes of the pollution reduction programs (vector Xi), in addition to the attribute characterizing unavoidable household cost Fi. These attributes will characterize improvements in water clarity (x1), adult blue crab abundance (x2), adult striped bass abundance (x3), oyster abundance (x4), and lake condition (x5). These variables will allow respondents’ choices to reveal the potential impact of Chesapeake Bay environmental quality improvements on utility.

Although the literature offers no firm guidance regarding the choice of specific functional forms for v(.) within choice experiment estimation, in practice, linear forms are often used (Johnston et al. 2003), with some researchers applying more flexible (e.g., quadratic) forms (Cummings et al. 1994). Standard linear forms are anticipated as the simplest form to be estimated by EPA, from which more flexible functional forms (e.g., quadratic) can be derived and compared. EPA anticipates estimating all models within the mixed logit framework outlined above. Model fit will be assessed following standard practice in the literature (e.g., Greene 2003; Maddala 1983). Since they are common in practice and theory, the functional forms considered for this analysis are presented and discussed in many existing sources (e.g., Hoehn 1991; Cummings et al. 1994; Johnston et al. 1999; Johnston et al. 2003).

For example, for each choice occasion, the respondent may choose Option A (status quo), Option B, or Option C. Assuming that the model is estimated using a standard approximation for the observable component of utility, an econometric specification of the desired model (within the overall multinomial logit model) might appear as:


v(·) = β0 + β1(change in water clarity) + β2(change in blue crab abundance) + β3(change in striped bass abundance) + β4(change in oyster abundance) + β5(change in lake condition) + β6(Cost)


This sample specification allows one to estimate the relative “main effects” of program attributes on utility. Specifications such as this allow WTP to be estimated for a wide range of potential program outcomes.
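
Under this linear specification, household WTP for a program offering attribute changes Δxk solves Σk βkΔxk + β6(−WTP) = 0, i.e., WTP = −(Σk βkΔxk)/β6. A worked example with purely hypothetical coefficient values:

    # WTP for a multi-attribute program under the linear specification above.
    # All coefficient values and attribute changes are hypothetical.
    betas = {"clarity": 0.50, "crab": 0.40, "bass": 0.30, "oyster": 0.30, "lake": 0.20}
    beta_cost = -0.80

    program = {"clarity": 1.0, "crab": 0.5, "bass": 0.0, "oyster": 1.0, "lake": 2.0}
    wtp = -sum(betas[a] * dx for a, dx in program.items()) / beta_cost
    print(f"Annual household WTP for this program: ${wtp:.2f}")  # $1.75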


Experimental Design

Experimental design for the choice experiment surveys will follow established practices. Fractional factorial design will be used to construct choice questions with an orthogonal array of attribute levels, with questions randomly divided among distinct survey versions (Louviere et al. 2000). Based on standard choice experiment experimental design procedures (Louviere et al. 2000), the number of questions and survey versions were determined by, among other factors: a) the number of attributes in the final experimental design and complexity of questions, b) pretests revealing the number of choice experiment questions that respondents are willing/able to answer in a single survey session, and c) the number of attributes that may be varied within each question while maintaining respondents’ ability to make appropriate neoclassical tradeoffs.

Based on the models proposed above and recommendations in the literature, EPA anticipates an experimental design that allows for an ability to estimate main effects of program attributes (Louviere et al. 2000). Choice sets (Bennett and Blamey 2001), including variable level selection, were designed by EPA based on the goal of illustrating realistic policy scenarios that “span the range over which we expect respondents to have preferences, and/or are practically achievable” (Bateman et al. 2002, p. 259), following guidance in the literature. This includes guidance with regard to the statistical implications of choice set design (Hanemann and Kanninen 1999) and the role of focus groups in developing appropriate choice sets (Bennett and Blamey 2001).

Based on these guiding principles, the following experimental design framework is proposed by EPA. A description of the statistical design is presented in Attachment 15. The experimental design will allow for estimation of main effects based on a choice experiment framework. Each treatment (survey question) includes Option A (status quo) and two choice options (Option B and Option C), each characterized by five attributes and a cost variable. Hence, there is a total of eighteen attributes for each treatment (including six implicit attributes for the status quo option). Based on focus groups and pretests, and guided by realistic ranges of attribute outcomes, EPA allows for three different potential levels for environmental attributes and six different levels of annual household cost for Options B and C. It also allows for three different potential levels for environmental attributes under the status quo or “No Action” option (Option A), the first set reflecting a declining baseline, the second reflecting a constant baseline, and the third reflecting an improving baseline. The “No Action” option included for each question will be characterized by a household cost of $0. Additionally, the study design consists of two different versions of the survey, where the reference year for when the environmental conditions are realized is either 2025 or 2040.


The attribute combinations can be summarized as follows:

  • Water Clarity A (3 levels)

  • Water Clarity B, Water Clarity C (3 levels)

  • Blue Crab Abundance A (3 levels)

  • Blue Crab Abundance B, Blue Crab Abundance C (3 levels)

  • Striped Bass Abundance A (3 levels)

  • Striped Bass Abundance B, Striped Bass Abundance C (3 levels)

  • Oyster Abundance A (3 levels)

  • Oyster Abundance B, Oyster Abundance C (3 levels)

  • Lake Condition A (3 levels)

  • Lake Condition B, Lake Condition C (3 levels)

  • Cost B, Cost C (6 levels)


For a more detailed discussion of attribute levels assigned across survey versions, refer to Attachment 15.

Following standard practice, EPA will constrain the design to remove dominant/dominated pairs, where one option dominates the other in all attributes. Respondents have been found to react negatively, and often to protest, when offered such choices. Moreover, because such choices provide negligible statistical information compared to choices involving non-dominated pairs, they are typically avoided in choice experiment statistical designs; Hensher and Barnard (1990), for example, recommend eliminating dominating or dominated profiles because they generally provide no useful information.
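
The dominance screen can be illustrated as follows. This is a simplified sketch rather than the actual design algorithm: true orthogonal fractions come from specialized design catalogs or software, and the level codings below (higher environmental level better, lower cost better) are illustrative.

    from itertools import product
    import random

    # Enumerate candidate Option B / Option C profiles (3 levels on each of the
    # 5 environmental attributes, 6 cost levels) and drop dominated pairings.
    env_levels = list(product(range(3), repeat=5))  # 3^5 = 243 attribute bundles
    profiles = [(env, cost) for env in env_levels for cost in range(6)]

    def dominates(a, b):
        """a weakly beats b on every attribute (cost reversed), strictly somewhere."""
        return (all(x >= y for x, y in zip(a[0], b[0]))
                and a[1] <= b[1] and a != b)

    random.seed(0)
    candidates = [tuple(random.sample(profiles, 2)) for _ in range(10_000)]
    screened = [(b, c) for b, c in candidates
                if not dominates(b, c) and not dominates(c, b)]
    print(f"{len(screened)} of {len(candidates)} sampled pairs survive the screen")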




5(c) Reporting Results


The results of the survey will be made public as part of the benefits analysis for the Chesapeake Bay TMDLs. Provided information will include summary statistics for the survey data, extensive documentation for the statistical analysis, and a detailed description of the final results. The survey data will be released only after it has been thoroughly vetted to ensure that all potentially identifying information has been removed.




REFERENCES


Adamowicz, W., Boxall, P., Williams, M., and Louviere, J. (1998). Stated preference approaches for measuring passive use values: Choice experiments and contingent valuation. American Journal of Agricultural Economics, 80(1), 64-75.


Anderson, E. A. (1989). Economic benefits of habitat restoration: seagrass and the Virginia hard-shell blue crab fishery. North American Journal of Fisheries Management, 9(2), 140-149.


Bateman, I.J., Carson, R.T., Day, B., Hanemann, M., Hanley, N., Hett, T., Jones-Lee, M., Loomes, G., Mourato, S., Ozdemiroglu, E., Pearce, D.W., Sugden, R., and Swanson, J. (2002). Economic valuation with stated preference techniques: A manual. Northampton, MA: Edward Elgar.


Bennett, J., and Blamey, R. (2001). The choice modelling approach to environmental valuation. Northampton, MA: Edward Elgar.


Bergstrom, J.C. and Ready, R.C. (2009). What Have We Learned from Over 20 Years of Farmland Amenity Valuation Research in North America? Review of Agricultural Economics 31(1), 21–49.


Blamey, R., and Bennett, J. (2001). Yea-saying and validation of a choice model of green product choice. In J. Bennett and R. Blamey (Eds.), The Choice Modelling Approach to Environmental Valuation. Northampton, MA: Edward Elgar. pp. 178-181.


Blamey, R. K., Bennett, J. W., and Morrison, M. D. (1999). Yea-Saying in Contingent Valuation Surveys. Land Economics 75(1), 126-141.


Bockstael, N. E., McConnell, K. E., and Strand, I. E. (1989). Measuring the benefits of improvements in water quality: The Chesapeake Bay. Marine Resource Economics, 6, 1-18.


Bockstael, N.E., McConnell, K.E., and Strand, I.E. (1988). Benefits from Improvements in Chesapeake Bay Water Quality, Volume III. Washington, DC: U.S. Environmental Protection Agency.


Boyd, R., and Krupnick, A. (2009, Sept). The definition and choice of environmental commodities for nonmarket valuation. RFF DP 09-35, Resources for the Future discussion paper.


Carlson, R. E. (1977). A trophic state index for lakes. Limnology and Oceanography, 22(2), 361–369.


Carlson R.E., and Simpson J. (1996). A coordinator’s guide to volunteer lake monitoring methods. Madison, WI: North American Lake Management Society.


Carson, R.T. (2012). Contingent Valuation: A Practical Alternative When Prices Aren’t Available. Journal of Economic Perspectives, 26(4), 27-42.


Collins, J. P., and Vossler, C. A. (2009). Incentive compatibility tests of choice experiment value elicitation. Journal of Environmental Economics and Management, 58(2), 226-235.


Cropper, M., and Isaac, W. (2011). The benefits of achieving the Chesapeake Bay TMDLs (Total Maximum Daily Loads): A scoping study. Washington, D.C.: Resources for the Future.


Cummings, R.G., and Taylor, L.O. (1999). Unbiased value estimates for environmental goods: A cheap talk design for the contingent valuation method. The American Economic Review, 89(3), 649-665.


Desvousges, W.H., and Smith, V.K. (1988). Focus groups and risk communication: The science of listening to data. Risk Analysis, 8, 479-484.


Desvousges, W.H., Smith, V.K., Brown, D.H., and Pate, D.K. (1984). The role of focus groups in designing a contingent valuation survey to measure the benefits of hazardous waste management regulations. Research Triangle Park, NC: Research Triangle Institute.


Dillman, D.A. (2008). Mail and internet surveys: The tailored design method. New York: John Wiley and Sons.


Dillman, D.A., Smyth, J.D., and Christian, L.M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.


Evans, M. F., Poulos, C., and Smith, V.K. (2011). Who Counts in Evaluating the Effects of Air Pollution Policies on Households: Non-market Valuation in the Presence of Dependencies. Journal of Environmental Economics and Management, 62(10), 65-79.


Freeman, A.M., III. (2003). The measurement of environmental and resource values: Theory and methods. Washington, DC: Resources for the Future.


Greene, W.H. (2002). NLOGIT version 3.0 reference guide. Plainview, NY: Econometric Software, Inc.


Greene, W.H. (2003). Econometric analysis. 5th ed. Upper Saddle River, NJ: Prentice Hall.


Groves, R.M., Couper, M.P., Presser, S., Singer, E., Tourangeau, R., Acosta, G.P., et al. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, 70(5), 720-736.


Hanemann, W.M. (1984). Welfare evaluations in contingent valuation experiments with discrete responses. American Journal of Agricultural Economics, 66(3), 332-341.


Hausman, Jerry. (2012). Contingent Valuation: From Dubious to Hopeless. Journal of Economic Perspectives, 26(4), 43-56.


Helm, E. (2012, June 05). Stated preference (sp) survey – survey methods and model results, memorandum to the section 316(b) existing facilities rule record. Retrieved from http://water.epa.gov/lawsregs/lawsguidance/cwa/316b/upload/316bmemo.pdf


Hicks, R., Kirkley, J. E., McConnell, K. E., Ryan, W., Scott, T. L., and Strand, I. (2008). Assessing stakeholder preferences for Chesapeake Bay restoration options: A stated preference discrete choice-based assessment (pp. 1-56). Annapolis, MD: NOAA Chesapeake Bay Office, National Marine Fisheries Service and Virginia Institute of Marine Science.


Hoehn, J.P., Lupi, F., and Kaplowitz, M.D. (2004). Internet-Based Stated Choice Experiments in Ecosystem Mitigation: Methods to Control Decision Heuristics and Biases. In Proceedings of Valuation of Ecological Benefits: Improving the Science Behind Policy Decisions, a workshop sponsored by the US EPA National Center for Environmental Economics and the National Center for Environmental Research.


Iannacchione, V. G. (2011). The changing role of address-based sampling in survey research. Public Opinion Quarterly, 75(3), 556-575.


Johnston, R. J. (2006). Is Hypothetical Bias Universal? Validating Contingent Valuation Responses Using a Binding Public Referendum. Journal of Environmental Economics and Management, 52, 469-481.


Johnston, R.J., Opaluch, J.J., Mazzotta, M.J., and Magnusson G. (2005). Who are resource non-users and what can they tell us about non-use values? Decomposing user and non-user willingness to pay for coastal wetland restoration. Water Resources Research, 41(7), doi:10.1029/2004WR003766.


Johnston, R.J., Schultz, E.T., Segerson, K., Besedin, E.Y., and Ramachandran, M. (2012). Enhancing the content validity of stated preference valuation: The structure and function of ecological indicators. Land Economics, 88(1), 102-120.


Johnston, R.J., Magnusson, G., Mazzotta, M., and Opaluch, J.J. (2002a). Combining Economic and Ecological Indicators to Prioritize Salt Marsh Restoration Actions. American Journal of Agricultural Economics 84(5), 1362-1370.


Johnston, R.J., Swallow, S.K., Allen, C.W., and Smith, L.A. (2002b). Designing multidimensional environmental programs: Assessing tradeoffs and substitution in watershed management plans. Water Resources Research, 38(7), IV1-13.


Johnston, R.J., Swallow, S.K., Tyrrell, T.J., and Bauer, D.M. (2003). Rural amenity values and length of residency. American Journal of Agricultural Economics, 85(4), 1000-1015.


Johnston, R.J., Weaver, T.F., Smith, L.A., and Swallow, S.K. (1995). Contingent valuation focus groups: insights from ethnographic interview techniques. Agricultural and Resource Economics Review, 24(1), 56-69.


Just, R.E., Hueth, D.L., and Schmitz, A. (2004). The welfare economics of public policy: A practical approach to project and policy evaluation. Northampton, MA: Edward Elgar Publishing.


Kaplowitz, M., Lupi, F., and Hoehn, J. (2004). Chapter 24: Multiple methods for developing and evaluating a stated-choice questionnaire to value wetlands. In Presser, Rothgeb, Couper, Lessler, Martin, Martin, and Singer (Eds.), Methods for Testing and Evaluating Survey Questionnaires. New York: John Wiley and Sons.


Kemp, W.M., Boynton, W.R., Adolf, J.E., Boesch, D.F., Boicourt, W.C., Brush, G., Cornwell, J.C., Fisher, T.R., Glibert, P.M., Hagy, J.D., Harding, L.W., Houde, E.D., Kimmel, D.G., Miller, W.D., Newell, R.I.E., Roman, M.R., Smith, E.M., and Stevenson, J.C. (2005). Eutrophication of Chesapeake Bay: historical trends and ecological interactions. Marine Ecology Progress Series, 303, 1-20.


Kahn, J.R., and Kemp, W.M. (1985). Economic losses associated with the degradation of an ecosystem: The case of submerged aquatic vegetation in Chesapeake Bay. Journal of Environmental Economics and Management, 12, 246–263.


Kling, C.L., Phaneuf, D.J., and Zhao, J. (2012). From Exxon to BP: Has Some Number Become Better than No Number? Journal of Economic Perspectives, 26(4), 3-26.


Krupnick, A. (1988). Reducing bay nutrients: An economic perspective. Maryland Law Review, 47, 453–480.


Krupnick, A., Parry, I., Walls, M., Knowles, T., and Hayes, K. (2010). Toward a New National Energy Policy: Assessing the Options. Washington, DC: Resources for the Future. (November).


Krupnick, A., and Adamowicz, W.L. (2001). Supporting questions in stated choice studies. In B.J. Kanninen (Ed.), Valuing Environmental Amenities Using Stated Choice Studies: A Common Sense Approach to Theory and Practice. Dordrecht, The Netherlands: Springer. pp. 53-57.


Layton, D.F. (2000). Random coefficient models for stated preference surveys. Journal of Environmental Economics and Management, 40(1), 21-36.


Layton, D.F. and Brown, G. (2000). Heterogeneous Preferences Regarding Global Climate Change. Review of Economics and Statistics, 82(4), 616-24.


Leggett, C. and Bockstael, N. E. (2000). Evidence of the effects of water quality on residential land prices. Journal of Environmental Economics and Management, 39, 121-144.


Lesser, V.M., Dillman, D.A., Carlson, J., Lorenz, F., Mason, R., and Willits, F. (2001). Quantifying the influence of incentives on mail survey response rates and nonresponse bias. Paper presented at the annual meeting of the American Statistical Association, Atlanta, GA. Retrieved from http://www.sesrc.wsu.edu/dillman/.


Link, M.W., Battaglia, M.P., Frankel, M.R., Osborn, L., and Mokdad, A.H. (2008). A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72(1), 6-27.


Lipton, D. and Hicks, R. (2003). The cost of stress: low dissolved oxygen and economic benefits of recreational striped bass (Morone saxatilis) fishing in the Patuxent River. Estuaries, 26(2A), 310-315.


Lipton, D.W., and Hicks, R. (1999). Boat Location Choice: The Role of Boating Quality and Excise Taxes. Coastal Management, 27(1), 81-90.


Lipton, D. (2004). The value of improved water quality to Chesapeake Bay boaters. Marine Resource Economics, 19, 265-270.


List, J.A. (2001). Do Explicit Warnings Eliminate the Hypothetical Bias in Elicitation Procedures? Evidence from Field Auctions for Sportscards. American Economic Review, 91(5), 1498-1507.


Louviere, J.J., Hensher, D.A., and Swait, J.D. (2000). Stated preference methods: Analysis and application. Cambridge, UK: Cambridge University Press.


Maddala, G.S. (1983). Limited-dependent and qualitative variables in econometrics. Econometric Society Monographs, No.3. Cambridge: Cambridge University Press.


Mansfield, C., Van Houtven, G., Hendershott, A., Chen, P., Porter, J., Nourani, V., and Kilambi, V. (2012). Klamath River Basin restoration nonuse value survey, Final report. Sacramento, CA: Prepared for the US Bureau of Reclamation. Retrieved from: http://klamathrestoration.gov/sites/klamathrestoration.gov/files/DDDDD.Printable.Klamath%20Nonuse%20Survey%20Final%20Report%202012%5B1%5D.pdf


Massey, D.M., Newbold, S.C., and Gentner, B. (2006). Valuing water quality changes using a bioeconomic model of a coastal recreational fishery. Journal of Environmental Economics and Management, 52, 482–500.


McConnell, K.E. (1990). Models for referendum data: The structure of discrete choice models for contingent valuation. Journal of Environmental Economics and Management, 18(1), 19-34.


McFadden, D., and Train, K. (2000). Mixed multinomial logit models for discrete responses. Journal of Applied Econometrics, 15(5), 447-470.


Miller, W., Robinson, L.A., and Lawrence, R. (eds.). (2006). Valuing Health for Regulatory Cost-Effectiveness Analysis, Washington, DC: National Academies Press.


Mistiaen, J.A., Strand, I.E., and Lipton, D. (2003). Effects of environmental stress on blue crab (Callinectes sapidus) harvests in Chesapeake Bay tributaries. Estuaries, 26, 316–322.


Mitchell, R.C., and Carson, R.T. (1989). Using surveys to value public goods: The contingent valuation method. Washington, D.C.: Resources for the Future.


Morgan, C., and Owens, N. (2001). Benefits of Water Quality Policies: The Chesapeake Bay. Ecological Economics, 39(2), 271-284.


Murphy, J.C., Allen, P.G., Stevens, T.H., and Weatherhead, D. (2005). A Meta-Analysis of Hypothetical Bias in Stated Preference Valuation. Environmental and Resource Economics, 30, 313-325.


Nielsen, J.S. (2011). Use of the Internet for willingness-to-pay surveys: A comparison of face-to-face and web-based interviews. Resource and Energy Economics, 33(1), 119-129.


NOAA. (2002). Stated Preference Methods for Environmental Management: Recreational Summer Flounder Angling in the Northeastern United States. https://www.st.nmfs.noaa.gov/st5/RecEcon/Publications/NE_2000_Final_Report.pdf. (Accessed November 7, 2012.)


Office of Management and Budget (OMB). (2003). Circular A-4. http://www.whitehouse.gov/omb/circulars_a004_a-4 (Accessed November 7, 2012).


Office of Management and Budget (OMB). (2006). Guidance on Agency Surveys and Statistical Information Collections: Questions and Answers When Designing Surveys for Information Collections. http://www.whitehouse.gov/omb/inforeg/pmc_survey_guidance_2006.pdf


Opaluch, J.J., Grigalunas, T.A., Mazzotta, M., Johnston, R.J., and Diamantedes, J. (1999). Recreational and resource economic values for the Peconic Estuary. Prepared for the Peconic Estuary Program. Peace Dale, RI: Economic Analysis Inc.


Opaluch, J.J., Swallow, S.K., Weaver, T., Wessells, C., and Wichelns, D. (1993). Evaluating impacts from noxious facilities: Including public preferences in current siting mechanisms. Journal of Environmental Economics and Management, 24(1), 41-59.


Pagano, M., and Gauvreau, K. (2000). Principles of biostatistics. 2nd ed. Belmont, CA: Duxbury.


Parsons, G.R., and Thur, S.M. (2008). Valuing Changes in the Quality of Coral Reef Ecosystems: A Stated Preference Study of SCUBA Diving in the Bonaire National Marine Park. Environmental and Resource Economics, 40(4), 593-608.


Poe, G.L., Welsh, M.P., and Champ, P.A. (1997). Measuring the difference in mean willingness to pay when dichotomous choice contingent valuation responses are not independent. Land Economics, 73(2), 255-267.


Poor, P., Pessagno, K., and Paul, R. (2007). Exploring the hedonic value of ambient water quality: A local watershed-based study. Ecological Economics, 60, 797-806.


Schkade, D.A., and Payne, J.W. (1994). How people respond to contingent valuation questions: A verbal protocol analysis of willingness to pay for an environmental regulation. Journal of Environmental Economics and Management, 26, 88-109.


Train, K. (1998). Recreation Demand Models with Taste Differences Over People. Land Economics, 74(2), 230-239.


U.S. Bureau of Reclamation. (2012). Klamath River Basin Restoration Nonuse Value Survey. Final Report. Prepared by RTI International. RTI Project Number 0212485.001.010.


U.S. Census Bureau. (2012). 2010 Census Summary File 1. Retrieved May 31, 2012 from http://factfinder2.census.gov/.


U.S. Congress. House. (2011). Conservation, Energy, and Forestry Subcommittee of the Committee on Agriculture. Hearing to review the Chesapeake Bay TMDL, agricultural conservation practices, and their implications on national watersheds. 112th Cong., 1st sess. Washington: GPO, 2011. Print.


U.S. Department of Labor, Bureau of Labor Statistics. (2012). Table 1: Civilian workers, by major occupational and industry group. Retrieved May 2012 from: http://www.bls.gov/news.release/pdf/realer.pdf.


U.S. Environmental Protection Agency (U.S. EPA). (2008). Final Ozone NAAQS Regulatory Impact Analysis. EPA-452/R-08-003. (Accessed November 7, 2012.)


U.S. EPA. (2009a). Environmental Impact and Benefits Assessment for the Final Effluent Guidelines and Standards for the Construction and Development Category. EPA-821-R-09-012. (Accessed November 7, 2012.)


U.S. EPA. (2009b). ICR Handbook: EPA’s Guide to Writing Information Collection Requests under the Paperwork Reduction Act of 1995. Office of Environmental Information.


U.S. EPA. (2010). Guidelines for Preparing Economic Analyses. (EPA 240-R-10-001). U.S. EPA, Office of the Administrator, Washington, DC, December 2010.


Van Houtven, G. L. (2009). Changes in ecosystem services associated with alternative levels of ecological indicators. In: Risk and Exposure Assessment for Review of the Secondary National Ambient Air Quality Standards for Oxides of Nitrogen and Oxides of Sulfur. Research Triangle Park, NC: RTI International.


Viscusi, W.K., Huber J., and Bell, J. (2008). The Economic Value of Water Quality. Environmental and Resource Economics, 41(2), 169-187.


Von Haefen, R.H. (2003). Incorporating observed choice into the construction of welfare measures from random utility models. Journal of Environmental Economics and Management, 45, 145–165.


Whitehead, J.C. (2006). A practitioner’s primer on the contingent valuation method. In A. Alberini and J.R. Kahn (Eds.), Handbook on Contingent Valuation. Northampton, MA: Edward Elgar. pp. 66-91.



1 Miller et al. (2006) is a useful reference on the application of stated preference methods to valuing health risk reductions across federal agencies.

2 http://www.chesapeake.org/stac/index.php

3 The survey will be imprinted with a stamp requesting the recipient to “Please return within 2 weeks”

4 For example, in rural areas, Rural Route box addresses have been converted to physical street addresses.

5 A 92% eligibility rate is based on Link, et al. (2008). A 30% response rate is based on Helm (2012), Mansfield, et al. (2012), and Johnston, et al. (2012).

6 i.e., whether it is true that the target population contains 50% women (p0 = 50%), or 10% women (p0 = 10%).

