OMB Control Number: 2080-0084


Supporting Statement for Information Collection Request for

Willingness to Pay Survey to Evaluate Recreational Benefits of Nutrient Reductions in Coastal New England Waters


January 26, 2017

Revision in response to OMB Comments: March 28, 2018





TABLE OF CONTENTS


List of Attachments

PART A OF THE SUPPORTING STATEMENT
1. Identification of the Information Collection
   1(a) Title of the Information Collection
   1(b) Short Characterization (Abstract)
2. Need for and Use of the Collection
   2(a) Need/Authority for the Collection
   2(b) Practical Utility/Users of the Data
3. Non-duplication, Consultations, and Other Collection Criteria
   3(a) Non-duplication
   3(b) Public Notice Required Prior to ICR Submission to OMB
   3(c) Consultations
   3(d) Effects of Less Frequent Collection
   3(e) General Guidelines
   3(f) Confidentiality
   3(g) Sensitive Questions
4. The Respondents and the Information Requested
   4(a) Respondents
   4(b) Information Requested
5. The Information Collected - Agency Activities, Collection Methodology, and Information Management
   5(a) Agency Activities
   5(b) Collection Methodology and Information Management
   5(c) Small Entity Flexibility
   5(d) Collection Schedule
6. Estimating Respondent Burden and Cost of Collection
   6(a) Estimating Respondent Burden
   6(b) Estimating Respondent Costs
   6(c) Estimating Agency Burden and Costs
   6(d) Respondent Universe and Total Burden Costs
   6(e) Bottom Line Burden Hours and Costs
   6(f) Reasons for Change in Burden
   6(g) Burden Statement

PART B OF THE SUPPORTING STATEMENT
1. Survey Objectives, Key Variables, and Other Preliminaries
   1(a) Survey Objectives
   1(b) Key Variables
   1(c) Statistical Approach
   1(d) Feasibility
2. Survey Design
   2(a) Target Population and Coverage
   2(b) Sampling Design
   2(c) Precision Requirements
   2(d) Questionnaire and Mail Materials Design
3. Pretests
4. Collection Methods and Follow-up
   4(a) Collection Methods
   4(b) Survey Response and Follow-up
5. Analyzing and Reporting Survey Results
   5(a) Data Preparation
   5(b) Analysis
   5(c) Reporting Results

REFERENCES



List of Attachments

Attachment 1 – Draft survey instrument: general recreation

Attachment 2 – Draft invitation letter

Attachment 3 – Responses to public comments on Federal Register Notices


PART A OF THE SUPPORTING STATEMENT


1. Identification of the Information Collection

1(a) Title of the Information Collection


Willingness to Pay Survey to Evaluate Recreational Benefits of Nutrient Reductions in Coastal New England Waters


1(b) Short Characterization (Abstract)


New England’s coastal social-ecological systems are subject to chronic environmental problems, including water quality degradation that results in important social and ecological impacts. Researchers at the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development (ORD), Atlantic Ecology Division (AED) are piloting an effort to better understand how reduced water quality due to nutrient enrichment affects the economic prosperity, social capacity, and ecological integrity of coastal New England communities. This research is part of two major research efforts within the EPA: (1) Task 4.61 of ORD’s Sustainable and Healthy Communities Research Program (Integrated Solutions for Sustainable Communities: Social-Ecological Systems for Resilience and Adaptive Management in Communities - A Cape Cod Case Study), and (2) Task 3.04A of the Safe and Sustainable Water Resources Research Program (National Water Quality Benefits: Economic Case Studies of Water Quality Benefits), which is part of a three-office effort within EPA (Office of Research and Development, Office of Policy, and Office of Water) to quantify and monetize the benefits of water quality improvements across the nation.

As part of these two research efforts, we propose to conduct a survey that will allow us to estimate changes in recreation demand and values due to changes in nutrients in northeastern U.S. coastal waters. Our initial geographic focus will be Cape Cod, Massachusetts (“the Cape”; Barnstable County), and New England residents within 100 miles of the Cape. We focus on Cape Cod and its surrounding coastal areas both to keep the scope of the work feasible within our research budget and to coordinate this socio-economic analysis with extensive ecological research being conducted on the Cape by ORD researchers, researchers at EPA’s Region 1 office, and other external research groups. Cape Cod is also in the midst of an extensive regional planning effort related to its coastal waters, and this research can provide helpful socio-economic information to decision makers about the use of those waters. Because the 100-mile radius from Cape Cod, to which the researchers would generalize results, includes a large area of southern New England and the largest population centers in New England, the results will be relevant to understanding the coastal recreation and water quality perceptions of a large portion of southern New England residents.

One of the key water quality concerns on Cape Cod, and throughout New England, is nonpoint sources of nitrogen, which lead to ecological impairments in estuaries with resultant socio-economic impacts. The towns on the Cape are currently in the process of creating plans to address their total maximum daily load (TMDL) thresholds for nitrogen-impaired coastal embayments. There are over 40 coastal embayments and subembayments on the Cape. To date, the EPA has approved 12 TMDLs for embayments on Cape Cod with others pending review (Cape Cod Commission, 2015). The Massachusetts Estuaries Project estimates that wastewater accounts for 65% of the nitrogen sources on the Cape (Cape Cod Commission, 2015). Because Cape Cod’s wastewater is primarily handled by onsite septic systems (85% of total Cape wastewater flows), the main sources are spread across the Cape and are affected by individual household-level decisions as well as community-level decisions. Coordinated through the Cape Cod Commission and based on the Cape’s Clean Water Act Section 208 Plan, communities across the Cape have been tasked with developing a watershed-based approach for addressing water quality to improve valued socio-economic and ecological conditions. The decisions needed to meet water quality standards are highly complex and involve significant cross-disciplinary challenges in identifying, implementing, and monitoring social and ecological management needs. We will focus on understanding recreational uses as valued ecosystem services on the Cape (including beachgoing, swimming, fishing, shellfishing, and boating).

As part of these efforts, EPA’s ORD/AED is seeking approval to conduct a revealed preference survey to collect data on people’s saltwater recreational activities; how recreational values are related to water quality; how perceptions of water quality relate to objective measures; the connections between perceptions of water quality, recreational choices and values, and sense of place; and demographic information. If approved, the survey will be administered using a mixed-mode approach that includes a mailed invitation to a web survey, with an optional paper survey for people who are unable or unwilling to answer the web survey. The survey will be sent to 8,400 households (in addition to a pretest of 370 households) in counties where more than 25% of the county’s geographic area falls within 100 miles of the Cape, as measured from a beach in Bourne, Massachusetts, the first town on Cape Cod heading east. This area includes the coastal counties of New Hampshire, the eastern half of Massachusetts, all of Rhode Island, and the eastern part of Connecticut. Table A1 lists the included counties, and Figure A1 shows the sample area on a map. In addition, we will oversample residents of Cape Cod, sending 750 surveys to this group. Thus, the total sample for the main survey will be 9,150.

ORD will use the survey responses to estimate willingness to pay for changes related to reductions in nutrient and pathogen loadings to coastal New England waters. The analysis relies on state-of-the-art theoretical and statistical tools for non-market welfare analysis. A non-response bias analysis will also be conducted to inform the interpretation and validation of survey responses.

The total national burden estimate for all components of the survey is 563 hours. The burden estimate is based on 90 responses to 370 pretest surveys and 2,163 responses to 9,150 main surveys. Assuming 15 minutes are needed to complete the survey, the total respondent cost comes to $19,618 for the pre-test and main survey combined, using an average hourly wage rate for New England of $34.83 (United States Department of Labor, 2016).


2. Need for and Use of the Collection

2(a) Need/Authority for the Collection


Within EPA, this work will provide data to two of EPA’s ORD research programs: the Safe and Sustainable Water Resources Research (SSWR) Program and the Sustainable and Healthy Communities (SHC) Research Program. One of the four objectives identified in the SSWR research plan is to quantify benefits of water quality, because the values of many ecosystem services of water systems have not been estimated, or existing estimates are not up to date or comprehensive with regard to geographic and policy scope. Therefore, more effectively valuing the benefits of water quality improvements will aid in the protection or restoration of water quality (U.S. EPA, 2015a). This research also falls within the fourth objective of the SHC research plan: to develop the causal relationships between human well-being and environmental conditions (U.S. EPA, 2015b). Recreation benefits are a cultural ecosystem service obtained from the protection of natural resources, including coastal systems that contribute to human well-being (Millennium Ecosystem Assessment, 2005). This research will improve EPA’s ability to characterize recreational benefits of improved water quality in coastal communities.

Currently, very little is known about recreational uses of, values for, and attitudes towards waterbodies in New England’s coastal communities that, like Cape Cod, are facing problems of nutrient overenrichment driven primarily by non-agricultural nonpoint sources. This limits the EPA’s ability to assess the full economic and social impacts of nutrient overenrichment. Little is also known about how people’s perceptions of water quality relate to actual water quality measures. The proposed survey will focus on recreational uses of coastal waters. In particular, it will focus on calculating economic values for, and attitudes and perceptions towards, water quality and water-contact recreation. Specifically, the survey instrument will elicit revealed preferences that can be used to estimate non-market economic values associated with recreational uses of coastal and estuarine waterbodies, along with the related attitudes towards those waterbodies that could affect their elasticity of demand and future uses. Data obtained by the survey are intended to be analyzed in a random-utility model for valuing water-contact recreation, as well as in ancillary modeling of water quality perceptions and hypotheses related to people’s sense of place.

The primary purpose of this study is to conduct research on several topics: participation in coastal recreation, including recreation that involves water contact; people’s perceptions of water quality and the factors that most strongly influence those perceptions; the relationship of water quality perceptions to objective measures; and the use of sense of place metrics in economic valuation models. The survey is being proposed by the EPA Office of Research and Development and is not associated with any regulatory ruling of EPA. Thus, study design decisions were made from the perspective of making research contributions. We do not intend that results from this study will be used for any specific policy or rule, but we understand that they may provide valuable input to future decisions. While the primary purpose of the study is to examine these research questions and contribute to understanding of the connections between people and coastal water quality, we anticipate that the findings will be of interest, with respect to the potential recreational benefits of water quality policies and actions, to regional and state partners and communities that are implementing such policies and actions for coastal waters (for example, implementation of required TMDLs, green infrastructure solutions, or other actions).

Specifically, the survey will be used to estimate recreational users’ values for changes in water quality in coastal New England waters. Water quality models will be used to predict how water quality is likely to change under various policy scenarios and baseline conditions. Model predictions and valuation survey data will be combined to estimate recreational economic benefits under different hypothetical policy scenarios. In sum, the primary objective of the collection is to conduct research on important topics regarding coastal recreation and water quality. We are not conducting this research to inform decisions about a specific policy or policies, but to provide information that will further the understanding of our research questions and potentially provide value to future decisions.

We will integrate the economic model with information about differences in respondents’ perceptions of water quality and sense of place in order to better understand how people respond to and benefit from nutrient reductions. Sense of place is the imbuement of meaning into a physical setting. It is often characterized for a particular setting in terms of a) physical characteristics, b) patterns of interactions and behaviors, c) non-evaluative descriptive meaning, and d) evaluative meaning of attachment, dependence, satisfaction, and identity (Stedman et al., 2006). By combining valuation methods from environmental economics with social science approaches (specifically, sense of place), we seek to characterize the social-ecological system in a richer fashion, as well as to test sense of place elicitation methods in explaining differences in economic values associated with changes in water quality.


2(b) Practical Utility/Users of the Data

The primary reason for the proposed survey is exploratory research. A continuing problem for communities dealing with natural resource management is how to integrate natural resource valuation into a feasible decision-making process. One of the primary reasons for conducting economic valuation studies is to improve the way communities frame choices regarding the allocation of scarce resources and to clarify the trade-offs between alternative outcomes. This problem is particularly relevant for coastal recreation in New England, especially on Cape Cod. Despite the deep cultural importance of coastal recreation to New England residents, there is a remarkable lack of valid empirical economic and social studies quantifying this importance for the general public living in the region.

There are many challenges to managing water quality in New England waters, and decision-makers are often faced with trade-offs when allocating resources to accommodate these uses. The goal of this project is to obtain estimated economic values for, and attitudes and perceptions towards, water quality and water-contact recreation. These estimates of the public value and attitudes will be useful in numerous policy contexts and will support numerous government agencies and community organizations that seek to integrate the value of recreation into their strategic water quality policy and financial decisions. Analysis of the revealed preference survey results, detailed in Part B of this Supporting Statement, will follow standard practices outlined in the literature (Parsons, 2014; Phaneuf & Smith, 2005).

The results of the study will be made available to EPA regional offices and to state and local governments, which may use them to better understand the preferences of households in their jurisdictions and the benefits they can expect from actions to improve coastal water quality. Additionally, stakeholders and the general public will be able to use this information to better understand the social benefits of improving water quality in coastal waters.


3. Non-duplication, Consultations, and Other Collection Criteria

3(a) Non-duplication


To the best of our knowledge, this study is unique and does not duplicate other efforts. It is the first revealed preference recreational use study related to water quality for Cape Cod/southern New England in over 20 years, and the research will be important for addressing similar issues in other coastal communities (e.g., Long Island, NY, which is facing similar issues and decisions). We intend to design a survey for developing a random-utility model of recreation site choice in New England waterbodies that integrates differences in sense of place and water quality among coastal recreation users.

There are many studies in the environmental economics literature that quantify benefits, or willingness to pay (WTP), associated with water recreation, but few that address WTP as a function of nutrient impacts in coastal waters, and fewer still that are recent and relevant for New England. In addition to EPA’s own survey of the literature, EPA contracted a literature review covering valuation studies of recreation and water quality (WA 2-35, Contract EP-C-13-039). Of the revealed preference studies of recreation that include water quality, many are freshwater based (Murray et al., 2001; Yeh et al., 2006; Egan, 2009; Feather, 1994; Melstrom & Jayasekera, 2016). Melstrom & Jayasekera (2016) and Feather (1994) deal only with fishing, while the others address more general freshwater beach or lake visits.

There is a set of papers that address bacteria and beach closures in saltwater (Bockstael et al., 1987; Parsons et al., 2009; Hilger & Hanemann, 2008). Bockstael et al. (1987) was conducted in coastal Massachusetts and is a seminal work in recreation demand modeling, but it is dated at this point and, like the others, does not address nutrient-related water quality issues. Kaoru et al. (1995) addresses nutrients in coastal waters, again estimating changes in WTP for fishing only. That study uses nutrient loading from point sources in the National Coastal Pollutant Discharge Inventory within ten miles of a fishing site as the proxy for water quality. While it grapples with issues similar to ours, it is over 20 years old, covers a different geographic region, and addresses only one specialized coastal activity.

The two works closest to ours in the economic valuation literature are Opaluch et al. (1999) and Phaneuf (2002). The latter estimates changes in WTP for water quality improvements in watersheds as a function of watershed-scale water quality metrics, including EPA’s Index of Watershed Indicators as well as direct measures of pH, phosphorus, dissolved oxygen, and ammonia. The study includes both inland and coastal watersheds and a range of activities, although the results are reported only in terms of WTP for a trip of any activity type to a given watershed in North Carolina under improvement and access scenarios. Opaluch et al. (1999) estimated per-trip values for swimming, boating, fishing, and wildlife viewing on the East End of Long Island. They also estimated changes in WTP for swimming trips with changes in nitrogen, bacteria, brown tides, and Secchi depth. Their estimates are based not on a random utility model (RUM) approach but on a multiple-site count-data model, which relies on assumptions that make welfare measures inconsistent with economic demand theory (Phaneuf and Smith, 2005). Although they are dated, these estimates remain the most relevant values for coastal recreation trip days as a function of nutrient-related water quality changes for the region. Our planned study will update these estimates in time, methods, and geographic extent.

Costs and benefits of water quality improvements or impairments will accrue to different populations. While statistical methods exist to handle these issues in theory (mixed logit and latent variable techniques, for example), using sense of place methods to test for differences in demand across the varying populations of a community in a policy-relevant application would allow us to quantify the disparate impacts of water quality on non-market benefits in a formal way. This location-specific study will be compared to benefit transfer, functional benefit transfer, and other higher-level approximation methods in water quality benefit estimation in order to address the appropriateness and transferability of these methods.

Sense of place is an important metric for understanding the implications of recreation and attitudes towards natural resources. Sense of place is a measure of the meaning that individuals attach to a particular geographic area. That meaning is sometimes defined by natural resources, culture, or both (Stedman et al., 2006; Stedman et al., 2004). Sense of place can be a useful indicator for determining the sustainable use of different types of ecosystem services. It serves as a measurement of different landscape features that may be culturally important to people (de Groot et al., 2010). It is listed specifically in the United Nations’ Millennium Ecosystem Assessment (2005) as one of the nonmaterial benefits people obtain from ecosystem services.

There have been studies in other locations investigating the sense of place or place attachment of different communities related to natural resources, including work on permanent and seasonal residents in Utah (Matarrita-Cascante et al., 2010), second home owners on northern Wisconsin lakes (Jorgenson & Stedman, 2001; 2006), and Arctic residents in Norway (Kaltenborn, 1998). There is also an existing body of literature that connects sense of place or place attachment to recreational opportunities, including users in wilderness areas (Williams et al., 1992) and on the Appalachian Trail (Kyle et al., 2003; 2004), as well as Smith et al.’s (2016) use of place attachment in exploring shifting demand for winter recreation in Minnesota. This body of work provides important insights for survey design and for the use of sense of place in improving the understanding of attitudes towards natural resources and recreation, but only Kaltenborn (1998) focuses on coastal communities, which are of a different socio-economic and ecological typology. Kaltenborn’s (1998) work focuses specifically on the residents of an archipelago in the high Arctic, which is very geographically and culturally different from our study area. Moreover, there has been no effort to connect sense of place with recreational use or water quality in coastal New England. The only related Cape Cod-specific work is that of Cuba and Hummon (1993a; 1993b), who focused their research on understanding how Cape Cod residents who had recently moved constructed a sense of home based on their dwelling, community, and region. While this work provides important context for some of the user groups, it does not connect with the Cape’s waterbodies, recreation, water quality, or other valued commodities.

There are two other works that are relevant to New England coastal recreation but do not address water quality or non-market benefits. NOAA’s National Marine Fisheries Service (NMFS) is currently processing the results of a national coastal recreation survey concerned with market impacts of recreation spending in coastal counties (Steinback & Kosaka, 2016, OMB Control Number: 0648-0652). In designing our survey, we contacted the economists overseeing that effort, Scott Steinback and Rosemary Kosaka (personal communication, 2016), and reviewed the data collected. The NMFS survey complements ours by collecting detailed participation rates and effort estimates about which water-based activities people engage in and how often. It also collects recreational expenditure data to estimate the market economic impacts of coastal recreation. Because we will have access to these data, we can avoid re-collecting detailed effort estimates, saving space and time on our survey instrument. This complementary survey will allow ours to focus on collecting data for a single-choice-occasion RUM of recreation demand that includes water quality as a determinant. Our survey results will complement the market impacts calculated in the NOAA survey with non-market values for water recreation trips.

The other related work is based on an opt-in sample of coastal recreation conducted by Point 97, SeaPlan, and the Surfrider Foundation (Bloeser et al., 2015). This study collected locations by activity along the coast of New England from a web-mapping survey instrument. The data may help inform geographic areas to use as site choices for our RUM, but does not contain the information to estimate non-market benefits of coastal recreation that would be representative of the general population.

To conclude, while recreation and water quality has been studied, our study adds important information to the literature for the following reasons:

  1. It is relevant to nutrient pollution in coastal waters.

  2. It estimates values for multiple recreational activities beyond fishing.

  3. It collects New England regional estimates.

  4. It provides up-to-date WTP estimates for coastal activities.

  5. It incorporates sense of place concepts to explain varying preferences.


3(b) Public Notice Required Prior to ICR Submission to OMB


The first of the two Federal Register Notices opened on November 9, 2016 and closed on January 9, 2017. The second Federal Register Notice opened on November 13, 2017 and closed on December 13, 2017. See responses to public comments in Attachment 3.



3(c) Consultations


Preliminary consultations have been conducted with several stakeholder organizations related to this effort. Information collected with the survey may be of interest to other federal, state, and local agencies that regulate water quality, promote tourism, and engage with coastal communities. Further, the collection may be of interest to non-profit organizations and other researchers focused on the economy, communities, and environment of Cape Cod and the greater New England area. AED has made concerted efforts to keep interested parties informed of the progress of this project and to solicit feedback, and will continue to do so going forward.

AED has contracted with Professor George Parsons, a topic expert from the University of Delaware, to review the survey instrument and research design. We are also working closely with the recreation group of EPA’s three-office national water quality benefits effort, which includes economists from ORD, EPA’s National Center for Environmental Economics, and the EPA Office of Water. We have also reached out to and consulted with local environmental economics professors Kathleen Bell (University of Maine), Stephen Swallow (University of Connecticut), and Emi Uchida (University of Rhode Island).

In addition to consultations with local experts, two early presentations have been given on this work to solicit feedback from experts in the environmental economics and social science fields. The presentations were given at the International Symposium for Society and Resource Management (ISSRM) and the Northeast Agricultural and Resource Economics Association (NAREA) Annual Meetings. ISSRM is the annual symposium of the International Association for Society and Natural Resources (IASNR). As described on its website (IASNR, 2016), IASNR is a professional association that brings together diverse social sciences to focus on research of the environment. NAREA is an affiliate organization of the Agricultural and Applied Economics Association that promotes education and research on economic and social problems related to natural resource use and the environment (NAREA, 2016).

As part of the planning and design process for this collection, EPA conducted a series of seven focus groups located within the study area: four in Rhode Island, two in Massachusetts, and one in Connecticut. While early focus group sessions were used to learn about people’s coastal recreational activities, the attributes of locations they care about, and the kinds of information respondents would need to answer the questions, later sessions were employed to test the draft survey materials. These consultations with potential respondents were critical in identifying sections of the questionnaire that were redundant or lacked clarity and in producing a survey instrument meaningful to respondents. The later focus group sessions were also helpful in estimating the amount of time respondents would need to complete the survey instrument. The focus group sessions were conducted under EPA ICR # 2205.17, OMB # 2090-0028.

As noted in the non-duplication section, EPA reached out to NOAA/NMFS. In designing our survey, we contacted the economists, Scott Steinback and Rosemary Kosaka, who are overseeing NMFS’ coastal recreation data collection effort and reviewed the data collected. We determined that the NMFS survey complements ours by collecting detailed participation rates and effort estimates.


Survey Design Team: Dr. Marisa Mazzotta at the U.S. Environmental Protection Agency, Office of Research and Development, serves as the project manager for this study. Dr. Mazzotta is assisted by Dr. Nathaniel Merrill, Dr. Kate Mulvaney, and Ms. Sarina Lyon, all with the U.S. EPA’s Office of Research and Development. Dr. George Parsons, Professor at the School of Marine Science and Policy, University of Delaware, provided review of the draft survey. Mr. Matthew Anderson, Senior Analyst at Abt Associates, provides contractor support.

Dr. George Parsons, a professor at the School of Marine Science and Policy, University of Delaware, specializes in travel cost, hedonic price, contingent valuation, and choice experiments. His work includes two summary works on travel cost methods and a number of studies on recreation RUM methods and applications. He has worked extensively valuing coastal resources using revealed preference methods.

Mr. Matthew Anderson, a senior analyst at Abt Associates, specializes in data collection and survey implementation. Mr. Anderson is trained in quantitative and behavioral research in the social sciences, with a strong background in survey research design and analysis. He is currently directing a mixed-mode survey for the EPA dealing with the removal and repair of lead paint in commercial buildings. He has worked with the EPA to finalize survey instruments, create a project timeline and implementation schedule, refine sampling parameters, and coordinate the field effort. He was previously the deputy survey director for a large-scale mixed-mode national mental health survey for SAMHSA, which collected data from over 22,000 mental health facilities. He has extensive experience managing operations staff, project budgets, and instrument design/programming, and creating data cleaning specifications.

3(d) Effects of Less Frequent Collection


The survey is a one-time activity. Therefore, this section does not apply.


3(e) General Guidelines


The survey will not violate any of the general guidelines described in 5 CFR 1320.5 or in EPA’s ICR Handbook.


3(f) Confidentiality


All responses to the survey will be kept confidential to the extent provided by law. To ensure that the final survey sample includes a representative and diverse population of individuals, the survey questionnaire will elicit basic demographic information, such as age, race and ethnicity, number of children under 18, type of employment, and income. However, the survey questionnaire will not ask respondents for personal identifying information, such as names or phone numbers. Instead, each survey response will receive a unique identification number. Prior to taking the survey, respondents will be informed that their responses will be kept confidential to the extent provided by law. The name and address of the respondent will not appear in the resulting database, preserving the confidentiality of respondents’ identities. The survey data will be made public only after they have been thoroughly vetted to ensure that all potentially identifying information has been removed. After data entry is complete, the surveys themselves will be destroyed.

The U.S. EPA office location (AED) and the U.S. EPA electronic file system used by the principal investigator are highly secure. A keycard possessed only by U.S. EPA employees and contractors is necessary to enter the building. The principal investigators are in a separate keyed office space within the secure building. The computer system where the personal names and addresses associated with respondent numeric codes will be stored during data entry is a secure server requiring the principal investigator’s personal login username and password. At the conclusion of data entry, the file linking personal names and addresses to respondent codes will be destroyed (along with the hard copy survey responses themselves), and only respondent codes will remain.


3(g) Sensitive Questions


The survey questionnaire will not include any sensitive questions pertaining to private or personal information, such as sexual behavior or religious beliefs.


4. The Respondents and the Information Requested

4(a) Respondents


Eligible respondents for the survey are individuals 18 years of age or older who reside in counties where at least 25 percent of the county’s geographic area falls within a 100-mile radius of Cape Cod. Table A1 lists the states and counties included, and Figure A1 maps this area. The sample will be stratified by geography, with Barnstable County, MA sampled at a rate 3.06 times higher than the rest of the population in the study area. The sample will be drawn from general population addresses of the U.S. Postal Service Delivery Sequence File (DSF).

Households will be selected randomly from the DSF, which covers over 97 percent of residences in the United States. The DSF includes city-style addresses and post office boxes, and covers single-unit, multi-unit, and other types of housing structures. As described in Part B of this Supporting Statement, we assume that 90% of the addresses will be valid and will receive the survey. EPA will request participation from a random stratified sample of 9,520 households in two phases. The first phase, a pretest, will be sent to 370 addresses. In the pretest, we will test the process of administering the survey and evaluate whether respondents are able to answer all questions as intended. To evaluate the pretest, we will calculate summary statistics for important variables, including water recreation participation, activities, distance traveled, and demographics (see Part B Section 2(c)(iii) for specific questions). We will also examine item nonresponse for each survey question, response rates for online and paper surveys, and whether there are any major differences between the two modes. We will estimate a basic travel cost model using the pretest results. The second phase, encompassing full survey administration, will be administered to an additional 9,150 addresses. In each phase, we anticipate a response rate of 27 percent, resulting in 90 and 2,163 completed surveys, respectively, after correcting for expected undeliverable rates for each county.
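
To make the mailing arithmetic concrete, the following minimal sketch (illustrative only, not EPA's sampling code) applies the planning assumptions above, roughly 90% deliverable addresses and a 27% response rate, uniformly to both phases. The official targets in Table A1 apply county-specific undeliverable rates, which is why the main-survey total is 2,163 rather than the uniform-rate figure computed here.

    # Illustrative sketch: expected completed surveys under uniform planning
    # assumptions (~90% of addresses deliverable, 27% response rate).
    def expected_completes(mailed, deliverable_rate=0.90, response_rate=0.27):
        """Expected number of completed surveys from a mailing of the given size."""
        return mailed * deliverable_rate * response_rate

    print(round(expected_completes(370)))   # pretest: ~90 completes
    print(round(expected_completes(9150)))  # main survey: ~2,223 under uniform rates;
                                            # 2,163 once county-specific rates are applied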

Table A1 shows the included counties and anticipated completed survey sample sizes for the geographic regions included in this study. More detail on planned sampling methods and the statistical design of the survey can be found in Part B of this supporting statement.



Table A1: Anticipated Sample Sizes by State and County

                                                                   Phase 1: Pretest          Phase 2: Full Survey
State           Counties Included                                  Sample    Percentage      Sample    Percentage
                                                                   Size1     of Sample       Size1     of Sample
New Hampshire   Hillsborough, Rockingham                                7        8%             166       8%
Massachusetts   Barnstable2, Bristol, Dukes, Essex, Hampden,           64       71%           1,581      73%
                Middlesex, Nantucket, Norfolk, Plymouth,
                Suffolk, Worcester
Rhode Island    Bristol, Kent, Newport, Providence, Washington         11       12%             250      12%
Connecticut     New London, Tolland, Windham                            8        9%             166       8%
Total                                                                  90      100%           2,163     100%

1 Sample sizes presented in this table reflect total expected completed surveys, accounting for expected undeliverable rates by county.

2 Includes oversampling of Barnstable County.


Figure A1. Counties included in sampling area.

4(b) Information Requested

(i) Data items, including recordkeeping requirements


EPA developed the survey based on the findings of a series of seven focus groups conducted as part of survey instrument development (EPA ICR # 2205.17, OMB # 2090-0028). Focus groups provided valuable feedback which allowed EPA to iteratively edit and refine the questionnaire, and eliminate or improve imprecise, confusing, and redundant questions. In addition, later focus groups provided useful information on the approximate amount of time needed to complete the survey instrument. This information informed our burden estimates. Focus groups were conducted following standard approaches in the literature (Desvousges et al., 1984; Desvousges & Smith, 1988; Johnston et al., 1995).

EPA has determined that all questions in the survey are necessary to achieve the goal of this information collection, i.e., to collect data that can be used to support an analysis of recreation and water quality. The draft survey is included as Attachment 1 and described in more detail in Part B of this document. The survey has 5 sections: (1) Your Saltwater Recreation in New England, which gathers participation and effort data for saltwater recreation; (2) Your Most Recent Saltwater Recreation in New England, which gathers information on the last saltwater recreation trip; (3) Other Places for Saltwater Recreation, which elicits water quality perceptions for other locations where the respondent goes for saltwater recreation, asks about the furthest the respondent would travel in a single day for saltwater recreation, and asks about respondents’ responses to bacteria and beach closures; (4) Your Opinions on Coastal Water Quality in New England, which asks for the respondent’s opinions about a set of impacts of water quality issues in New England; and (5) About Your Household, which asks for demographic information, the residence zip code and the zip code where the respondent works, and whether the respondent owns a second home and, if so, its zip code.


(ii) Respondent activities


EPA expects individuals to engage in the following activities during their participation in the survey:

  • Go online to answer a web survey, or answer a paper survey that will be mailed to those who do not respond to the web survey within 14 days of mailing the second web survey invitation.

  • Review the brief background information provided in the beginning of the survey document.

  • Complete the survey questionnaire, either online or paper version and, if paper version is answered, return paper version by mail.

A typical subject participating in the survey is expected to take 15 minutes to complete the survey. These estimates are derived from focus groups in which respondents were asked to complete a survey of similar length and detail to the current survey.



5. The Information Collected - Agency Activities, Collection Methodology, and Information Management

5(a) Agency Activities


The survey is being developed, conducted, and analyzed by EPA’s Office of Research and Development with contract support provided by Abt Associates Inc. (EPA contract No. EP-C-13-039).

Agency activities associated with the survey consist of the following:

  • Developing the survey questionnaire and related materials as well as sampling design.

  • Randomly selecting survey participants from the U.S. Postal Service DSF database.

  • Programming of web survey.

  • Printing of paper survey.

  • Mailing of initial web survey invitation.

  • Mailing of second web survey invitation.

  • Sending the paper survey to households who did not respond to the web survey.

  • Data entry and cleaning.

  • Analyzing survey results.

  • Conducting the non-response bias analysis based on available data. This will compare results of questions in Section 1 to existing national studies of participation in coastal recreation, present the geographic distribution of the respondents, and compare demographics to census data. See Part B, Section 2(c)(iii) for additional details.

  • If necessary, EPA will use results of the non-response bias analysis to adjust respondent weights to account for non-response and minimize bias (a sketch of one standard adjustment follows this list).
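
As an illustration only (the actual method will depend on the results of the bias analysis), one standard weighting-class adjustment divides each respondent's design weight by the estimated response rate of that respondent's weighting class:

\[ w_i = \frac{d_i}{\hat{r}_{c(i)}} \]

where \( d_i \) is the design (base) weight of respondent \( i \) and \( \hat{r}_{c(i)} \) is the observed response rate in the weighting class \( c(i) \) (for example, a county or demographic cell) to which the respondent belongs.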

EPA will primarily use the survey results to estimate the social value of changes in ecosystem quality for recreational uses of coastal waters. EPA will also model water quality perceptions relative to objective measures, and explore the use of sense of place measures in conjunction with economic measures and relative to water quality perceptions.




5(b) Collection Methodology and Information Management


EPA plans to implement the proposed survey using a mixed-mode approach, which will invite respondents to answer the questionnaire on the internet. Offering the survey on the internet will allow respondents to select locations from interactive maps and enable them to identify their destination sites more accurately. An internet survey will also use checks and prompts to minimize missing and/or incorrectly entered information. Those who do not reply to the internet survey will be mailed a paper survey to complete. After finalizing the survey instrument, EPA will program the instrument using Confirmit web software. EPA will then use the U.S. Postal Service DSF database to identify households that will receive the survey invitation. The survey invitation letter (Attachment 2), which contains an explanation of the survey’s purpose and a URL to access the web survey, will be mailed to the selected households. The reminder letter will be similar to the initial invitation letter, with modifications to the introductory text.

Our main reason for selecting the mixed-mode approach is to avoid the potential inaccuracies associated with data entry from paper surveys, where people write in the location of their last day of recreation. Our review of the literature on relevant mixed-mode surveys indicates that overall response rates are slightly lower than for a straight mail survey (Berzelak et al., 2015; Edwards et al., 2014; Brennan, 2011; Messer & Dillman, 2011; Schmuhl et al., 2010; Hohwu et al., 2013). We expect the improvements in data accuracy, leading to more usable responses and reductions in the costs of data handling, to compensate for a small decrease in response rate. A similar number of responses from paper surveys under a mail-only approach may be unusable due to the inability to identify the recreation location precisely enough to connect it to water quality. As part of our research, we intend to report on these results by comparing the time and cost of preparing the data and the loss of usable responses from the paper surveys to the time, cost, and usable responses from the web surveys. Our literature review of relevant mixed-mode studies using a mail invitation to a web survey, followed by a paper survey, indicates that 61% to 76% of total responses to these surveys are completed on the web (Berzelak et al., 2015; Edwards et al., 2014; Brennan, 2011; Messer and Dillman, 2011). Messer and Dillman (2011) also found that respondent demographics for the mixed-mode sample were similar to those of mail only.

We anticipate that there could be differences between the paper responses and internet responses, and we intend to test for this in our statistical modeling. We hypothesize, based on existing literature, that people who respond by paper may have different demographics (e.g., an older population) and, as a result, possibly different preferences. However, we do not expect the willingness to pay estimates to be biased for these people. What we do expect is possibly lower accuracy in identifying the location of their last recreation trip. It is thus possible that we will have slightly less accurate estimates of travel costs for these respondents if, for example, we can only identify the town where they recreated rather than a specific beach. We may also lose some observations if we cannot identify the location accurately enough to connect it to water quality measures.

EPA will take multiple steps to promote response. Respondents will be sent a reminder letter approximately one week after the initial letter mailing. Approximately three weeks after the request to complete the web survey, all households that have not responded will receive a copy of the paper questionnaire with a cover letter reminding them to complete the survey. Based on this approach to mixed-mode data collection, it is anticipated that approximately 27 percent of the selected households who receive the survey invitation will either complete the web survey or return a completed paper survey (Brennan, 2011; Edwards et al., 2014; Messer and Dillman, 2011).

Since the desired number of completed surveys for the general population is 2,163, it will be necessary to mail survey invitations to 9,150 households, assuming that a portion of the addresses will not be valid (with county-level variations).

Data quality will be monitored by checking submitted surveys for completeness and consistency. Responses to the survey will be stored in an electronic database. This database will be used to generate a data set for a RUM model of recreational values for ecosystem improvements, and regression models to compare water quality perceptions to objective measures and to explore how water quality perceptions and sense of place are related. To protect the confidentiality of survey respondents, the survey data will be released only after it has been thoroughly vetted to ensure that all potentially identifying information has been removed.


5(c) Small Entity Flexibility


This survey will be administered to individuals, not businesses. Thus, no small entities will be affected by this information collection.


5(d) Collection Schedule

The schedule for implementation of the survey is shown in Table A2.


Table A2: Schedule for Survey Implementation

Pretest Activities                                                Duration of Each Activity
Printing of invitation and reminder letters and questionnaires    Weeks 1 to 3
Mailing of invitation letters                                     Week 4
Mailing of reminder letters                                       Week 5
Survey packet mailing (two weeks after reminder letters)          Week 7
Data entry                                                        Weeks 6 to 8
Cleaning of data file                                             Week 9
Delivery of data                                                  Week 10

Full Survey Implementation
Printing of invitation and reminder letters and questionnaires    Weeks 13 to 15
Mailing of invitation letters                                     Week 16
Mailing of reminder letters                                       Week 17
Survey packet mailing (one week after reminder letter mailing)    Week 18
Data entry                                                        Weeks 18 to 22
Cleaning of data file                                             Week 23
Delivery of data                                                  Week 24




6. Estimating Respondent Burden and Cost of Collection

6(a) Estimating Respondent Burden


Subjects who participate in the survey during the pre-test and main stages will expend time on several activities. EPA will use similar materials in both the pre-test and main stages, so it is reasonable to assume the average burden per respondent activity will be the same for subjects participating in either stage.

Based on focus groups, EPA estimates that on average each respondent mailed the survey will spend 15 minutes (0.25 hours) reviewing the introductory materials and completing the survey questionnaire. EPA will administer the pre-test survey to 370 households; assuming that 90 respondents will complete and return the survey, the national burden estimate for respondents to the pre-test survey is 23 hours. During the main survey stage, EPA will administer the survey to 9,150 households; assuming that 2,163 respondents will complete and return the survey, the national burden estimate for these survey respondents is 541 hours. These burden estimates reflect a one-time expenditure in a single year.


6(b) Estimating Respondent Costs

  1. Estimating Labor Costs


According to the Bureau of Labor Statistics, the average hourly wage for private sector workers in the northeast region of the United States is $34.83 (2016$) (U.S. Department of Labor, 2016). Assuming an average per-respondent burden of 0.25 hours (15 minutes) for individuals mailed the survey and an average hourly wage of $34.83, the average cost per respondent is $8.71. Of the 9,520 individuals invited to participate in the survey during either pre-test or main implementation, 2,253 are expected to complete a survey. The total cost for all individuals who complete surveys would be $19,618.
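
The arithmetic behind the hour and cost totals in Sections 6(a) and 6(b) is simply:

\[ (90 + 2{,}163)\ \text{respondents} \times 0.25\ \text{hours} = 563.25 \approx 563\ \text{hours} \]

\[ 563.25\ \text{hours} \times \$34.83/\text{hour} \approx \$19{,}618 \]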

EPA does not anticipate any capital or operation and maintenance costs for respondents.


6(c) Estimating Agency Burden and Costs


Agency costs arise from staff costs, contractor costs, and printing costs. EPA staff are expected to expend approximately 3,520 hours on survey development and implementation, data analysis, and report writing. Total labor costs for EPA staff time are estimated at $140,985.

Abt Associates is providing contractor support for this project with funding of $203,507 from EPA contract EP-C-13-039, which provides funds for coastal recreation survey development and support. Abt Associates Inc. staff and its consultants are expected to spend 1,228 hours pre-testing the survey questionnaire and sampling methodology, conducting the mixed-mode survey, and tabulating and analyzing the survey results.

Agency and contractor burden is 4,748 hours, with a total cost of $344,492 excluding the costs of survey printing.

Printing of the survey is expected to cost $21,681. Thus, the total Agency and contractor burden would be 4,748 hours and would cost $366,173.


6(d) Respondent Universe and Total Burden Costs


EPA expects the total cost for survey respondents to be $19,618 (2016$), based on a total burden estimate of 563 hours (across both pre-test and main stages) at an hourly wage of $34.83.


6(e) Bottom Line Burden Hours and Costs


The following tables present EPA’s estimates of the total burden and costs of this information collection for the respondents and for the Agency. The bottom-line cost for the two together is $385,791.


Table A4: Total Estimated Bottom Line Burden and Cost Summary for Respondents

Affected Individuals                                      Burden (hours)    Cost (2016$)
Pre-test Survey Respondents                               23                $784
Main Survey Respondents                                   541               $18,834
Total for All Survey Respondents                          563               $19,618
Annual Respondent Cost (one-time collection / 3 years)    188               $6,539




Table A5: Total Estimated Burden and Cost Summary for Agency

Affected Individuals                                  Burden (hours)    Cost (2016$)
EPA Staff                                             3,520             $140,985
Survey Printing                                       —                 $21,681
EPA's Contractors for the Survey                      1,228             $203,507
Total Agency Burden and Cost                          4,748             $366,173
Annual Agency Cost (one-time collection / 3 years)    —                 $122,058



6(f) Reasons for Change in Burden


This is a new collection. The survey is a one-time data collection activity.


6(g) Burden Statement


EPA estimates that the public reporting and record keeping burden associated with the survey will average 15 minutes per respondent (i.e., a total of 563 hours of burden divided among 90 pre-test respondents, 23 hours, and 2,163 main survey respondents, 541 hours). Burden means the total time, effort, or financial resources expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal agency. This includes the time needed to review instructions; develop, acquire, install, and utilize technology and systems for the purposes of collecting, validating, and verifying information, processing and maintaining information, and disclosing and providing information; adjust the existing ways to comply with any previously applicable instructions and requirements; train personnel to be able to respond to a collection of information; search data sources; complete and review the collection of information; and transmit or otherwise disclose the information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB control numbers for EPA's regulations are listed in 40 CFR part 9 and 48 CFR chapter 15.

To comment on the Agency's need for this information, the accuracy of the provided burden estimates, and any suggested methods for minimizing respondent burden, including the use of automated collection techniques, EPA has established a public docket for this ICR under Docket ID No. EPA-HQ-ORD-2016-0632, which is available for online viewing at www.regulations.gov, or in-person viewing at the Office of Research and Development (ORD) Docket in the EPA Docket Center (EPA/DC), EPA West, Room 3334, 1301 Constitution Ave., NW, Washington, DC. The EPA/DC Public Reading Room is open from 8:30 a.m. to 4:30 p.m., Monday through Friday, excluding legal holidays. The telephone number for the Reading Room is 202-566-1744, and the telephone number for the Office of the Administrator Docket is 202-566-1752.

Use www.regulations.gov to obtain a copy of the draft collection of information, submit or view public comments, access the index listing of the contents of the docket, and access those documents in the public docket that are available electronically. Once in the system, select “search,” then key in the docket ID number, EPA-HQ-ORD-2016-0632.


PART B OF THE SUPPORTING STATEMENT


1. Survey Objectives, Key Variables, and Other Preliminaries

1(a) Survey Objectives

The survey is being proposed by the EPA Office of Research and Development, and is not associated with any regulatory ruling of EPA. Because the primary reason for the proposed survey is research, decisions were made in the study design from a perspective of making research contributions, rather than for conducting a definitive benefits analysis for regulatory purposes. The overall goal of this survey is to understand how reduced water quality due to nutrient enrichment is affecting and may affect economic prosperity and social capacity of coastal New England communities. EPA has designed the survey to provide data to support the following specific objectives:

  • To estimate the revealed recreational use values that coastal residents of Southern New England place on improving water quality in coastal New England waters.

  • To understand and connect the social and economic value of improvement in water quality and recreational opportunities.

  • To estimate use of different types of coastal systems.

  • To understand how individuals’ perceptions of water quality relate to actual water quality measurements in coastal systems.

  • To understand how values vary with respect to individuals’ attitudes, awareness, sense of place, and demographic characteristics.

Understanding public values for water quality improvements is necessary to better determine the benefits associated with reductions in nutrients (in this case, nitrogen) to New England coastal waters. Very little data exist on the use of coastal New England waters, and the data that do exist are limited to a few larger beach areas that are rarely exposed to water quality concerns.



1(b) Key Variables


The key questions in the survey ask respondents about the recreation they do and how they perceive and value the water quality in coastal areas. The Random Utility Model (RUM) framework is a type of travel cost method in which respondents provide data about their most recent coastal recreation trip. Travel cost methods are premised on the idea that the “price” of recreation can be represented by the cost of reaching and entering the location (Parsons, 2003). RUM models are multiple-site methods that incorporate substitution among different sites while accommodating quality-change valuation (Parsons, 2003). The questions ask about the costs of a recreational trip, including lodging, distance traveled, hours spent on the activity and in traveling to the site, and entrance fees. To understand substitution and quality changes among the sites, the survey includes questions on the activities the respondent participated in at that location, sense of place, and perceived water quality. The survey design follows well-established revealed preference recreational valuation methodology and format (Parsons, 2014).

The survey focuses on saltwater recreation in coastal New England. It asks respondents for attributes describing their last saltwater recreation trip. Specifically, it includes questions on the following attributes: activity type and frequency, location, and travel costs. As discussed in Parsons (2014), these attributes are important for modeling recreation demand under changing conditions, such as changes in water quality. Variables for demographic characteristics will also be included in the analysis both to control for heterogeneity in preferences for recreation sites, as well as to estimate the cost of time spent on recreation for a respondent.

The study design includes water quality perception and sense of place questions to better understand how these attributes contribute to site choice, and to explore other research hypotheses (as discussed elsewhere in this document). The water quality perception questions are coastal equivalents of a set of questions developed for use in freshwater, nutrient impacted water systems (see Genskow & Prokopy, 2011). As no single study has covered sense of place related to recreation in coastal waters, the sense of place questions are a combined set of questions taken from freshwater studies and recreation studies (Stedman et al., 2006; Mullendore et al., 2015; Smith et al., 2016).



1(c) Statistical Approach


A statistical survey approach in which a randomly drawn sample of households is asked to complete the survey is appropriate for estimating the values associated with improvements in coastal water quality. A census approach is impractical because of the extraordinary cost of contacting all households. Therefore, the statistical survey is the most reasonable approach. Specifically, the target population includes residents living in counties where more than 25 percent of the county falls within 100 miles of Cape Cod, Massachusetts.

EPA developed the survey instrument, and will also analyze the survey results. EPA has retained Abt Associates Inc. (55 Wheeler Street, Cambridge, MA 02138) under EPA contract EP-C-13-039 to assist in the questionnaire design, sampling design, administration of the survey, and data entry and cleaning prior to analysis of the survey results.


1(d) Feasibility


Following standard practice in the non-market valuation literature (Champ et al., 2003), EPA conducted a series of 7 focus groups with 63 people (EPA ICR # 2205.17, OMB # 2090-0028). Based on findings from these activities, EPA made various improvements to the survey instrument to reduce the potential for respondent bias, reduce respondent cognitive burden, and increase respondent comprehension of the survey materials. In addition, EPA solicited input from other experts (see section 3c in Part A), and tested the survey with 10 federal employees at AED. Recommendations and comments received as part of that process have been incorporated into the design of the survey instrument.

Because of the steps taken during the survey development process, EPA does not anticipate that respondents will have difficulty interpreting or responding to any of the survey questions. Furthermore, since the survey will be administered as both a web and a mail survey, it will be easily accessible to all respondents. EPA therefore believes that respondents will not face any obstacles in completing the survey, and that the survey will produce useful results. EPA has dedicated sufficient staff time and resources to the design and implementation of this survey, including funding for contractor assistance under EPA contract No. EP-C-13-039. Given the timetable outlined in Section A 5(d) of this document, the survey results should be available for timely reporting within ORD’s current research cycle (FY16-FY19), with final products due in FY19.



2. Survey Design

2(a) Target Population and Coverage


To assess recreational use values of coastal New England residents for improvements in New England coastal water quality, with a focus on Cape Cod and its surrounding area, the target population is individuals who are 18 years of age or older and includes residents living in counties where more than 25 percent of the county falls within 100 miles of Cape Cod, Massachusetts. Individuals in these areas are more likely to hold use values for improvements to the waters of Cape Cod and surrounding areas than those farther away. The choice of 100 miles is based on typical driving distance to recreational sites (i.e., two hours or 100 miles) for single day or weekend trips. This was supported by our focus groups. In addition, Parsons & Hauber (1998) show that the welfare relevant coefficients in a RUM model of water recreation are stable after the choice set is extended to around a two-hour travel distance.


While we will miss a portion of trips that originate from beyond 100 miles of the coast, which are likely to include more overnight trips, our purposes for the survey do not include a precise estimate of the value of overnight trips as compared to single day trips. Instead, the focus is primarily on the value of water quality changes to New England recreationists. It is likely that those within 100 miles of the coast who take single day trips, have a weekend home, or regularly visit the coast for overnight trips will be most sensitive to water quality variations across locations within the region. Those traveling from greater distances are less likely to be aware of water quality variations from place to place within the region and are also more likely and able to substitute locations, and thus are less affected by the range of water quality variations present within the region.

We found that many trips within this 100-mile buffer are associated with overnight trips, based on our focus groups and knowledge of the local tourism economy. It is important to collect the costs associated with trips originating within 100 miles of the Cape differently depending on whether they were part of a single day or overnight trip (see section 2(d), questions 2.4-2.14b, for the reasoning behind those survey questions). This distinction is standard in the travel cost modeling literature, since the travel costs associated with a recreation trip may be over- or under-estimated if the trips are treated in a uniform manner.

2(b) Sampling Design

(i) Sampling Frame


The sampling frame for this survey is the United States Postal Service Computerized Delivery Sequence File (DSF), the standard frame for address-based sampling (Iannacchione, 2011; Link et al., 2008). The DSF is a non-duplicative list of residential addresses where U.S. postal workers deliver mail; it includes city-style addresses and post office boxes, and covers single-unit, multi-unit, and other types of housing structures with known businesses excluded. In total the DSF is estimated to cover 97% of residences in the U.S., with coverage gradually increasing over the last few years as rural addresses are being converted to city-style, 911-compatible addresses1. The universe of sample units is defined as this set of residential addresses, and hence is capable of reaching individuals who are 18 years of age or older living at a residential address in the four target states. Samples from the DSF are taken indirectly, as USPS cannot sell mailing addresses or otherwise provide access to the DSF. Instead, a number of sample vendors maintain their own copies of the DSF and, through verifying them with USPS, update the list quarterly. The sample vendors can also augment the mailing addresses with additional information (household demographics, landline phone numbers, etc.) from external sources.

For discussion of techniques that EPA will use to minimize non-response and other non-sampling errors in the survey sample, refer to Section 2(b)(ii), below.


(ii) Sample Sizes


The target responding sample size for the main survey is 2,163 completed household surveys. This sample size was chosen to provide statistically robust regression modeling while minimizing the cost and burden of the survey. Given this sample size, the level of precision (see section 2(c)) achieved by the analysis will be more than adequate to meet the analytic needs. For further discussion of the level of precision required by this analysis, see Section 2(c)(i) below.

The sample design includes 22 counties based upon proximity to Cape Cod. EPA plans to oversample residents of Barnstable County on Cape Cod. EPA believes this is appropriate because Cape Cod residents are those who are most likely to receive significant use benefits of water quality improvements on Cape Cod. The sample will be allocated in proportion to the county-level population within each state. The target number of respondents in each county and state is given in Table B1.


Table B1: Population and Expected Number of Completed Surveys for Each County

Sampled State and County       Population     Expected Number of Completed Surveys
Massachusetts Counties          6,284,793     1,581
  Barnstable1                     215,423       176
  Bristol                         551,082       129
  Dukes                            17,041         4
  Essex                           755,618       175
  Hampden                         465,923       108
  Middlesex                     1,537,215       357
  Nantucket                        10,298         2
  Norfolk                         681,845       158
  Plymouth                        499,759       110
  Suffolk                         744,426       178
  Worcester                       806,163       183
Connecticut Counties              708,910       166
  Middlesex                       165,602        41
  New London                      274,170        65
  Tolland                         151,539        33
  Windham                         117,599        27
New Hampshire Counties            700,742       166
  Hillsborough                    402,922        94
  Rockingham                      297,820        72
Rhode Island Counties           1,050,292       250
  Bristol                          49,144        12
  Kent                            164,843        42
  Newport                          82,036        21
  Providence                      628,323       145
  Washington                      125,946        30
Total2                          8,744,737     2,163

Source: U.S. Census Bureau (2013). Annual Estimates of the Resident Population: April 1, 2010 to July 1, 2012. Retrieved September 22, 2016 from http://factfinder2.census.gov/.

1 Sample design includes oversampling of Barnstable County residents.

2 Total population for selected counties.

(iii) Stratification Variables


The sample will be effectively stratified by geography, with Barnstable County, Massachusetts, being sampled at a rate 3.06 times higher than the rest of the population in the study area. The sample will be drawn from the general population addresses of the USPS DSF.


(iv) Sampling Method

Using the stratification design discussed above, individuals will be randomly selected from the U.S. Postal Service DSF database. We will send the main survey to 8,400 households from the DSF. In addition, we will administer 750 surveys to Cape Cod residents for a total sample of 9,150. Assuming undeliverable rates equal to county-level vacancy rates and a 27% response rate, we anticipate 2,163 completed surveys from the DSF.2
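The arithmetic behind this expected yield can be checked directly. In the sketch below, the uniform undeliverable (vacancy) rate is the value implied by the totals above; this is a simplification, since the plan actually applies county-level vacancy rates:

```python
# Back-of-the-envelope check of the expected survey yield. The 12.4%
# undeliverable (vacancy) rate is a uniform simplification implied by
# the stated totals; the plan applies county-level vacancy rates.

mailed = 8_400 + 750    # DSF sample plus Cape Cod supplement
vacancy_rate = 0.124    # assumed uniform undeliverable rate
response_rate = 0.27    # assumed response rate among eligible households

eligible = mailed * (1 - vacancy_rate)
completes = eligible * response_rate
print(f"eligible households: {eligible:.0f}")          # ~8,015
print(f"expected completed surveys: {completes:.0f}")  # ~2,164, near the 2,163 target
```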

For obtaining population-based estimates of various parameters, each responding household will be assigned a sampling weight. The weights will be used to produce estimates that:

  • are generalizable to the population from which the sample was selected;

  • account for differential probabilities of selection across the sampling strata;

  • match the population distributions of selected demographic variables within strata; and

  • allow for adjustments to reduce potential non-response bias.

These weights combine:

  • a base sampling weight which is the inverse of the probability of selection of the household;

  • a within-stratum adjustment for differential non-response across strata; and

  • a non-response weight.

Post-stratification adjustments may be made to match the sample to known population values (e.g., from Census data).

There are various models that can be used for non-response weighting. For example, non-response weights can be constructed based on estimated response propensities or on weighting class adjustments. Response propensities are designed to treat non-response as a stochastic process in which there are shared causes of the likelihood of non-response and the value of the survey variable. The weighting class approach assumes that within a weighting class (typically demographically-defined), non-respondents and respondents have the same or very similar distributions on the survey variables. If this model assumption holds, then applying weights to the respondents reduces bias in the estimator that is due to non-response. Several factors, including the difference between the sample and population distributions of demographic characteristics, and the plan for how to use weights in the regression models will determine which approach is most efficient for both estimating population parameters and for the revealed-preference modeling.
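As an illustration of the weighting-class approach described above, the minimal sketch below combines a base weight (the inverse selection probability) with a within-stratum non-response adjustment; the frame, sample, and respondent counts are hypothetical, not figures from the sampling plan:

```python
# Minimal sketch of base weights with a weighting-class non-response
# adjustment. All counts are hypothetical; the real adjustment would
# use the survey's strata and demographically defined classes.

frame = {"barnstable": 120_000, "rest": 3_600_000}   # frame addresses
sampled = {"barnstable": 750, "rest": 8_400}         # mailed sample
responded = {"barnstable": 180, "rest": 1_980}       # completed surveys

weights = {}
for stratum in frame:
    base = frame[stratum] / sampled[stratum]             # inverse selection probability
    nr_adjust = sampled[stratum] / responded[stratum]    # class-level non-response adjustment
    weights[stratum] = base * nr_adjust

for stratum, w in weights.items():
    print(f"{stratum}: final weight = {w:.1f}")

# Within each stratum, respondent weights now sum back to the frame count:
assert all(abs(weights[s] * responded[s] - frame[s]) < 1e-3 for s in frame)
```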

To estimate recreational use values for changes in coastal water quality, data will be analyzed statistically using a standard random utility model framework. Additional regression models will be estimated to examine other research hypotheses regarding the relationship between actual and perceived water quality and the relationship between sense of place and perceived water quality.


(v) Multi-Stage Sampling


Multi-stage sampling will not be necessary for this survey.


2(c) Precision Requirements

(i) Precision Targets

Table B2 presents expected sample sizes for each state. The maximum acceptable sampling error for predicting response probabilities (i.e., the likelihood of choosing a given alternative) in the present case is ±10%, assuming a true response probability of 50% associated with a utility indifference point. Precision of the survey estimates will be affected by the design effect due to unequal weights (i.e., weights assigned to the general population versus residents of Barnstable County). The estimated design effect is 1.32, the product of the design effect due to unequal selection probabilities, equal to 1.04, and the design effect due to calibration and nonresponse adjustments, projected to be 1.27 (see Table B3 below). The effective sample size for this survey (i.e., the equivalent sample size of an independent, identically distributed sample that provides the same precision as this survey) is approximated by dividing the nominal sample size by the design effect due to unequal weighting. Thus, a sample of 2,163 respondents (completed surveys) will provide an effective sample size of 1,642. The margins of error for the estimates of population percentages range from 2.4 percent at the 50 percent population incidence level to 1.5 percent at the 10 percent population incidence level (Table B2). Assuming 30 percent of New England residents participate in coastal recreation (based on a conservative interpretation of participation numbers for counties in our sample from NOAA’s national scale ocean recreation survey, provided to us by NOAA), the projected number of recreational users completing this survey is 649, which provides an effective sample size of 493. For recreational users, the corresponding margins of error are 4.4 percent at the 50 percent population incidence level and 2.6 percent at the 10 percent population incidence level. The projected sample size for recreational users is expected to be sufficient to ensure large sample properties for developing regression models, as it will safely exceed the common rule of thumb of 20 observations per parameter (Harrell, 2015).
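As a check on these figures, the short sketch below reproduces the quoted margins of error from the nominal sample sizes and the total design effect; it is a verification aid under the stated assumptions, not part of the analysis plan:

```python
# Reproduces the precision figures quoted above from the design effect.
import math

def margin_of_error(n, deff, p):
    n_eff = n / deff  # effective sample size
    return 1.96 * math.sqrt(p * (1 - p) / n_eff)

DEFF = 1.317
for label, n in [("full sample", 2_163), ("recreational users", 649)]:
    for p in (0.50, 0.10):
        print(f"{label}, {p:.0%} incidence: MOE = {margin_of_error(n, DEFF, p):.1%}")
# full sample: ~2.4% and ~1.5%; recreational users: ~4.4% and ~2.6%
```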



Table B2: Sample size and accuracy projections

State           Population size for   Expected sample size   Effective      Margin of error,   Margin of error,
                selected counties1    (completed surveys)    sample size2   50% incidence3     10% incidence3
Massachusetts     6,284,793             1,5814                 1,200            2.8%               1.7%
Connecticut         708,910               166                    126            8.7%               5.2%
New Hampshire       700,742               166                    126            8.7%               5.2%
Rhode Island      1,050,292               250                    190            7.1%               4.3%
Overall           8,744,737             2,163                  1,643            2.4%               1.5%

Source for population size: U.S. Census Bureau (2013). Annual Estimates of the Resident Population: April 1, 2010 to July 1, 2012. Retrieved September 22, 2016 from http://factfinder2.census.gov/.

1 Includes individuals who reside in counties where at least 25% of the county’s geographic area falls within a 100-mile radius of Cape Cod.

2 The equivalent sample size of an independent, identically distributed sample that provides the same precision as the complex design survey at hand.

3 The margin of error is 1.96 times the standard error. The standard errors are based on the effective sample size.

4 Includes oversampling of Barnstable County.


(ii) Power analysis

Power analysis in this section is performed for a one-sample test of proportions for the study as a whole. The accuracy of the WTP estimates, and hence the power to detect differences in WTP, depends on the true values of the parameters of the logistic model used in WTP estimation; a power analysis for WTP can therefore only be conducted post hoc, after the parameter estimates are obtained. Given the nature of the survey, the variance of a z-test statistic when the population incidence is equal to p0 is given by

Var(p̂0) = p0(1 − p0)(1 + CV²)/n

where CV is the coefficient of variation of the final weights, (1 + CV²) is the design effect due to variable weights, or ‘DEFF’, and n is the nominal target number of completed surveys. The design effect due to unequal weighting is computed as demonstrated in Table B3.


Table B3: Design effect

Subsample                     Probability weight   Expected count   Sum of weights   Sum of squared weights
Cape Cod/Barnstable County    144.46               176              24,894           3,596
Rest of population            442.00               1,987            858,070          388,984
Total                                              2,163            905,803          392,581

DEFF = n·Σw² / (Σw)² = 1.037
DEFF due to weight calibration = 1.270
Total DEFF = 1.317

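The unequal-weighting component of the design effect can be reproduced from the two design weights above using Kish's formula; the sketch below treats each stratum as carrying a single weight, which is an approximation to the exact county-level calculation behind Table B3:

```python
# Kish design effect due to unequal weighting, approximated with one
# probability weight per stratum (values from Table B3).

n_cape, w_cape = 176, 144.46
n_rest, w_rest = 1_987, 442.00

n = n_cape + n_rest
sum_w = n_cape * w_cape + n_rest * w_rest
sum_w2 = n_cape * w_cape**2 + n_rest * w_rest**2

deff_weights = n * sum_w2 / sum_w**2
print(f"DEFF (unequal weighting): {deff_weights:.3f}")  # ~1.038 (Table B3 reports 1.037)

total_deff = deff_weights * 1.270  # times the calibration/non-response DEFF
print(f"total DEFF: {total_deff:.3f}")                  # ~1.318 (Table B3 reports 1.317)
```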

For a given power level (e.g., 80%), the effect size that can be detected is determined by solving

z_{1−α/2}·√(p0(1 − p0)·DEFF/n) + z_{1−β}·√(p1(1 − p1)·DEFF/n) = p1 − p0

for p1. This is an extension of the standard power analysis to weighted stratified samples.


Table B4 lists effect sizes using the most typical values for significance level (α = 5%) and power (80%), and for various scenarios concerning variability of weights within strata (which will be caused by differential non-response).


Table B4: Power analysis

Effect size detectable with power 80% by a test of size 5%

                                 p0 = 50%    p0 = 10%
Full sample (n = 2,163)            3.5%        2.1%
Recreational users (n = 649)       6.3%        4.0%

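A short sketch, assuming the power equation above and scipy's root finder, reproduces the detectable effect sizes in Table B4:

```python
# Solves the power equation above for the detectable proportion p1,
# reproducing the Table B4 effect sizes. Requires scipy.
from math import sqrt
from scipy.optimize import brentq
from scipy.stats import norm

DEFF = 1.317
z_alpha = norm.ppf(0.975)  # two-sided test, alpha = 5%
z_beta = norm.ppf(0.80)    # power = 80%

def detectable_effect(p0, n):
    se = lambda p: sqrt(p * (1 - p) * DEFF / n)
    f = lambda p1: z_alpha * se(p0) + z_beta * se(p1) - (p1 - p0)
    return brentq(f, p0 + 1e-9, 0.999) - p0  # sign change brackets the root

for n, label in [(2_163, "full sample"), (649, "recreational users")]:
    for p0 in (0.50, 0.10):
        print(f"{label}, p0={p0:.0%}: {detectable_effect(p0, n):.1%}")
# full sample: 3.5% and 2.1%; recreational users: 6.3% and 4.0%
```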


(iii) Non-Sampling Errors


A variety of non-sampling errors may be encountered in revealed preference surveys. Coverage error occurs when some eligible units have zero probability of being selected. For the current survey, the generalizable population is that of residents of the 22 counties in the study area. Recreational users from other parts of the country are not covered. EPA has determined that surveying the population farther away from Cape Cod is economically impractical.

Measurement error occurs when survey answers do not accurately reflect the true events. In the current survey, recall error (i.e., the respondent is unable to correctly recall all of the circumstances of the trip) could lead a respondent to report a trip other than the most recent one, or to report the dates of the most recent trip, the location visited, the saltwater activities, or the water quality incorrectly.

Non-response bias is another type of non-sampling error that can potentially occur in revealed preference surveys. Non-response bias can occur when households do not participate in a survey (i.e., do not complete the web survey or return the mail survey; this may be a deliberate refusal after the mail is opened, or the mail may simply be discarded unopened) or do not answer all relevant questions on the survey instrument (item non-response). EPA has designed the survey instrument to maximize the response rate. EPA will also follow Dillman et al.’s (2014) mixed-mode survey approach (see subsection 4(b) for details). If necessary, EPA will use appropriate weighting or other statistical adjustments to correct for any bias due to non-response.

To determine whether there is any evidence of significant non-response bias in the completed sample, EPA will conduct a non-response bias analysis. This will enable EPA to identify potential differences between respondents to the web/mail survey and those who received a URL/questionnaire but did not complete it.


Non-response Analysis

We conservatively estimate our response rate at 27%. Therefore, we are concerned with possible non-response bias. We intend to conduct a non-response analysis to quantify and ultimately address this issue in our estimation of WTP. No single method is sufficient to address possible bias (Groves et al., 2006; Groves, 2006). We propose a set of actions following suggestions in Montaquila and Olson (2012), OMB’s own guidelines (Graham, 2006), and Halbesleben and Whitman (2013).

The main purpose of our study is to understand participation in coastal recreation, and how those who participate value a recreation day at places with different attributes (including distance traveled, water quality, and other site attributes). We expect that the primary differences across people who engage in coastal recreation, in terms of where they go and their WTP for attributes that are important to them, will be demographic differences, distance from coastal access points, and avidity. Thus, income is the most critical variable to compare between respondents and nonrespondents. We hypothesize that people with lower income will be less able or willing to travel longer distances to get to a higher quality access point. At a minimum, we plan to benchmark the demographics in our sample to the target population using the 2010 U.S. Census data. Questions 5.4 through 5.10 and 5.13 on our survey provide the comparative data. This will allow us to detect statistical differences in important demographics such as age, income, race/ethnicity, and schooling.
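As one concrete form of this benchmarking, the sketch below compares a hypothetical respondent age distribution against Census shares with a chi-square goodness-of-fit test; the counts and shares are illustrative, not survey data:

```python
# Sketch of the demographic benchmarking described above: compare the
# respondent age distribution to Census shares with a chi-square
# goodness-of-fit test. Counts and shares are hypothetical.
from scipy.stats import chisquare

observed = [310, 520, 680, 653]           # respondents by age group (hypothetical)
census_shares = [0.20, 0.25, 0.30, 0.25]  # target population shares (hypothetical)

n = sum(observed)
expected = [s * n for s in census_shares]
stat, pvalue = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {pvalue:.4f}")
# A small p-value flags a demographic imbalance to address in weighting.
```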

In addition, we are in a favorable position to benchmark our survey on attributes of respondents that are specific to recreation. We plan to compare our sample on avidity and participation rates for coastal recreation to NOAA’s Coastal Expenditure Survey as well as the National Survey of Recreation and the Environment (NSRE) Saltwater Recreation Module. Specifically, we include questions (Q1.1 and Q1.2) that collect the information to compare our sample to these surveys’ samples for participation rates and avidity for coastal recreation. This will help to ensure that our sample is not biased towards heavy or light users of coastal resources as compared to the general population in the sample area. For a revealed preference method, these behavioral differences in our sample from the population are the type of biases of most concern, since the method rests on behavior and not on hypothetical scenarios as in a stated preference study.

Our statistical model has a spatial component needed to calculate the distance to recreation opportunities for each respondent. As an additional non-response bias test, we plan to test whether the distribution of respondents’ distances to the coast is skewed relative to what we would expect if responses were geographically random, given our sample design. We are not sampling uniformly on geography, but instead on population, so this will need to be taken into account when we calculate the expected distribution of distances to the coast. This check will bolster the robustness of generalizing our results to the whole sample population. Distance is particularly important because it accounts for a large fraction of the variation in travel cost, which we use to monetize water quality differences (see Part B, section 5(b)).

We plan to compare our key WTP estimates, demographics, and recreation avidity within our sample design. By comparing respondents who respond immediately to the initial mailing to those who respond later in the follow-up plan, we can identify the trend of bias in key variables. We will do this statistically and identify any significant differences in respondent groups along the “continuum of resistance” (Halbesleben and Whitman, 2013). This includes summary variables as well as the results of the statistical model for WTP segmented by response group.

We chose to employ a mixed-mode approach which can “improve survey representativeness and enhance the performance of poststratification weighting adjustments” (Baines et al., 2007). The multiple modes allow different types of people to respond and can increase the overall response rate. We will compare the WTP and key demographic and recreation avidity variables between response modes.

Lastly, we plan to calibrate our WTP estimates, which are themselves a function of demographics (see Part B, section 5(b)), to any statistical differences in the demographics of our sample as compared to the Census for the sampled population. This is to ensure that when the results are generalized to the survey population they match the demographic profile and are corrected for the factors that are found to affect WTP.


2(d) Questionnaire and Mail Materials Design


The full text of the draft recreation questionnaire is provided as Attachment 1. Several categories of questions are included in the survey. The reasons for including each of these categories are discussed below:

Section 1: Saltwater Recreation Activities in New England. This section includes two questions. Question 1.1 asks respondents to check all saltwater recreation activities that they have done in the last 12 months; and Question 1.2 asks how many times in the last 12 months they engaged in saltwater recreation. The purpose of this section is to gather general data on the number of people who engage in different saltwater recreation activities, generally referred to as participation rates, and the level of effort per year for saltwater recreation. The participation and effort estimates will help classify the type of recreator. For example, avidity could be captured by how often the respondent participates in saltwater recreation. We also plan to use these questions to study possible non-response bias by comparing these to other surveys of recreation participation and effort (see section 2c). Respondents who do not select an activity are directed to skip to Section 4.



Section 2: The Most Recent Time Spent Participating in Saltwater Recreation. The second section asks about the last time the respondent participated in any coastal recreation in New England. This section gathers the information to collect a trip profile.



Question 2.1: This question guides the respondent to one of two versions of Section 2, a single day (A) or overnight trip (B) profile. The paper version includes a skip instruction. The online version will automatically present the relevant section.



Section 2A – Most Recent Single Day Recreation Trip

Question 2.2A: This question asks for the month and year of the last trip taken, which will allow us to control for month effects as well as recall of the respondent. It also categorizes the trip as either a weekend/holiday or a weekday, to control for differences in the type of trip.



Question 2.3A: This question elicits the location of the last trip. This is crucial for calculating travel cost and connecting the trip with a waterbody and water quality metric. We need more specific locations than town due to differences in water quality within towns along the coast. For the online version of the survey, this question will include an interactive map that will allow the respondent to drop a pin at the location of the last trip.



Question 2.4A: We ask an open-ended question to understand why they chose this recreation location. We plan to code this open-ended question using standard social science methodology (Saldaña, 2009). It will serve to categorize the reason for trips, as well as allow respondents to clarify why this is the particular trip reported. Focus group participants were eager to tell us why they went to the location of their last trip and also to indicate that, often, their last trip was not to the place they like best. Focus group participants were satisfied with this outlet to address their concerns.



Question 2.5A: We ask how often the respondent goes to this location to understand experience and familiarity with the site. We plan to compare perceived and objective water quality measures, and experience with the location is an important factor.



Question 2.6A: This question collects the type of transportation used for travel. This is to attribute a cost per mile, varying by type of transportation.



Question 2.7A: This question asks the respondent to estimate the time and distance from their home to the recreation location. While we will calculate an objective distance measure with the Google Maps Application Programming Interface (API) (see section 5b), we need self-reported distance as a verification check on the geolocation of the recreation spot, which is based on Question 2.3A, as well as to test whether perceived and actual distances vary consistently in people’s perception of travel cost. In addition, for trips within the same town, where the location in 2.3A and the respondent’s home are too close to distinguish from the provided answer, this allows for an estimate of travel cost. We also need respondents’ estimates of perceived travel time, since travel time is a function of traffic, which causes it to vary widely in this highly developed corridor of the Northeast. Because we do not believe we can accurately collect the date and hourly departure/return time for each respondent, as would be needed to use Google’s real-time traffic travel time estimation, we plan to rely on reported travel time for attributing costs to the trip. This piece of information will also give us additional verification data to confirm the geolocation of the recreation location in Question 2.3A.



Question 2.8A: The purpose of this question is to split travel costs among adults traveling together. It will also allow us to control for the type of recreation trip, based on the hypothesis that people traveling with children may have a different choice process when choosing recreation activity and location.



Question 2.9A: Time spent on site allows us to categorize the type of trip and extent of the recreation. We plan to translate our results to WTP/hour in addition to WTP/trip. These metrics will allow the results to be more widely relevant and useful.



Questions 2.10A and 2.11A: These questions ask about parking in order to correctly attribute parking cost to the trip profile.



Question 2.12A: This question asks which activities the respondent did that day. This is to attribute the trip and cost to an activity and to see which activities occur in combination.



Question 2.13A: By asking the most important activity, we can attribute the trip to one of the many activities that the respondent might have participated in that day.



Question 2.14A: This question asks for the level of water contact on that recreation trip. We hypothesize that as the level of contact increases, the respondent’s sensitivity to water quality will increase. This question will be used to group recreation activities and types of trips by sensitivity to water quality found in the statistical model.



Question 2.15A: We use a water quality scale to elicit the respondent’s perceived water quality at the recreation site. This 0-10 scale is anchored on either end with descriptions. This, combined with Questions 2.16A and 2.17A, will allow us to connect perceived water quality to objective measures and ultimately to uses, to construct a coastal water quality index tailored to perceptions and choice data.



Question 2.16A: We ask how sure the person is of their perceived water quality rating. This will be used in the analysis of perceived versus objective measures of water quality, capturing experience and confidence in the respondent’s assessment of water quality.



Question 2.17A: We ask this after 2.15A in order to better understand the aspects of water quality that led to the person’s 0-10 rating. These were the aspects that focus group participants consistently referred to as things they consider when judging water quality. This will be used in combination with the other perceived water quality questions, 2.16A and 2.18A, to translate perceived to objective measures of water quality.



Question 2.18A: This water quality question connects the water quality at that location to possible uses. Similar to connecting a water quality ladder to designated uses, we will compare these categorical ratings to the 0-10 rating, as well as to the objective water quality measures. This, combined with the level of water contact from Question 2.14A, will connect exposure with sensitivity to water quality.



Questions 2.19A-2.20A: These questions gauge the respondent’s place attachment, place identity, and place dependence, the three components of sense of place. The questions and scales are drawn from the sense of place literature. Through the level of agreement with each of the statements, an index representing respondents’ sense of place can be consistently collected from the sample. This index will be used in the RUM model as a covariate explaining site choice, controlling for attitudes towards the particular site chosen, in addition to connecting water quality and perceived water quality to this attitudinal index in the water quality perceptions modeling. This is a key component of our interdisciplinary research design and will add important, peer-reviewed social science methods to complement traditional economic concepts of site choice.


Section 2B – Most Recent Overnight Recreation Trip

This section collects the trip profile information related to a water recreation day that is part of an overnight trip. Many of the questions are the same for single day and multi-day trips, so we only discuss the questions that were not described for Section 2A. We elicit costs associated with a single day trip and an overnight trip separately, because people taking overnight trips split travel expenses among multiple recreation days and also incur additional costs through lodging charges. Since the statistical model relies on the variation in travel costs associated with trips with various attributes, it is important that we take these differences between single day and overnight trips into account (see section 5(b) for explanation of the analysis).



Question 2.4B: This question collects the number of times the respondent has taken an overnight trip to this location in the last five years. This is to gauge familiarity with the location, similar to the question from the single day trip profile addressing visits, Question 2.5A.



Question 2.8B: This question collects the length of the overnight trip. This is to attribute a proportion of the overnight trip’s cost to recreation.



Question 2.9B: This question asks for the accommodation expenses in order to attribute these expenses to the recreation trip.



Question 2.11B: This question will be used to estimate the portion of the trip costs associated with the water recreation part of the overnight trip, using the importance of water recreation as part of the overall trip.



Question 2.14B: This will be used to attribute trip expenses per day to the activity valued for a single day of the overnight trip.


Section 3: Other Places for Saltwater Recreation – This section collects information about respondents’ general recreation behavior and water quality perceptions for other places they go. This will be used to scale perceptions to objective water quality measures as well as test choice set definitions for the RUM model described in Section 5b of this Supporting Statement.



Questions 3.1-3.8: These questions ask for the locations that the respondent has visited with the best and worst water quality. These questions allow us to add context to the respondent’s water quality evaluation for their last trip based on their best and worst, which may help us to normalize those responses across respondents. It also gives us two more observations per person for relating perceptions to objective measures.



Question 3.9: This asks how far people are willing to travel on a single day for saltwater recreation. This will be used to test alternative choice set definitions in the RUM model (see section 5b of the Supporting Statement).



Question 3.10: This question asks about behavior given a beach closure. This will be used to define choice alternatives and understand people’s substitution behaviors, as well as to connect state beach bacteria sampling to the survey data in order to provide values for eliminating closures.



Section 4: Your Opinions on Coastal Water Quality in New England

Question 4.1: This question asks for the respondent’s opinions about a set of impacts of water quality issues in New England. These will help us understand both awareness of issues and perceived severity of issues by the surveyed population. The issues were identified through the focus groups as relevant to New England coastal waters. Also, those respondents who do not participate in water recreation will be directed to skip here from the beginning of the survey, which will allow us to collect some opinion data and demographics from them, in order to compare them to people who participate and better predict participation rates.



Section 5: About Your Household

The questions in this section ask respondents to provide basic demographic information, including gender, age, race/ethnicity, household composition (number of adults and children), income, highest level of education, and employment status. Two questions, 5.11 and 5.12, ask about the ability of the respondent to work extra hours and, if so, the wage rate of those hours. Together, these questions collect a more accurate marginal value of time for the respondent than assuming a uniform value based solely on their annual income. The inclusion of the two questions is to address the precision of the travel cost estimates, which is the key variable used to identify changes in marginal willingness to pay for changes in location attributes. These questions also address the modeling question of whether to include opportunity cost of time spent in recreation in the RUM.



This section also asks residence zip code in order to calculate travel distances as well as the zip code where the respondent works to compare commute distance to recreation opportunities. We also ask if the respondent owns a second home, and its zip code, in order to control for that factor in the recreation demand model.



This information will be used in the analysis of survey results, as well as in the non-response analysis. Responses to these questions will be used to estimate the influence of demographic variables on respondents’ site choices, and ultimately, their WTP for water quality improvements.


3. Pretests


EPA conducted extensive development and testing of the survey instrument during a set of seven focus groups with 63 people (EPA ICR # 2205.17, OMB # 2090-0028), and with 10 federal employees at AED. Individuals in these focus groups participated in discussions about their recreational activities, perceptions of water quality, sense of place, and opinions about water quality issues. They also completed draft survey questionnaires and provided comments and feedback about the survey format and content, their interpretations of the questions, and other issues relevant to revealed preference estimation. These discussions were used to develop a survey that provides respondents with the necessary information to complete the questionnaire, develop questions that result in accurate information, and minimize the burden placed on respondents while collecting the necessary information.

Particular emphasis in these survey discussions was on eliciting accurate locations where people recreated and accurate and consistent trip information, eliciting water quality perceptions and the most salient aspects of water quality, and developing questions to better understand attitudes about water quality and related impacts. Based on focus group responses, EPA made various improvements to the questionnaire through revisions of question formats and wording. Focus groups were held near Boston, Massachusetts; Providence, Rhode Island; and Hartford, Connecticut, in order to capture people who reside at different distances from coastal waters and near waterbodies with varying water quality. Focus group participants were professionally recruited by marketing facilities and selected to represent a range of demographics and recreational activities. Participants in the focus group discussions were offered an incentive of $100 for their participation.

EPA intends to implement this survey in two stages: a pretest and a main study. First, EPA will administer the pretest to a sample of 370 households using a mixed-mode survey and the Dillman Total Design Method (Dillman et al., 2014). Assuming 90% of the sampled addresses are eligible and 27% of eligible households return the survey (see Brennan, 2011; Edwards, 2014; Messer & Dillman, 2011), EPA estimates that this will result in 90 returned and completed pretest surveys. Households in the pretest will be selected from the study area (i.e., 22 counties in Connecticut, Rhode Island, Massachusetts, and New Hampshire). Responses and preliminary findings from this pilot study will be used to inform EPA regarding the response rates and the quality of survey data. EPA will evaluate pilot responses and determine whether any changes to the survey instruments or implementation approach are needed before proceeding with the administration of the main survey.

EPA will use results from the pretest to validate the survey design. Specifically, the pretest results will be used to:

  • Compare the actual and expected response rates. Based on typical survey response rates for surveys of this type, the expected response rate is approximately 27% (Brennan, 2011; Edwards, 2014; Messer & Dillman, 2011).

  • Assess whether demographic characteristics of the respondents are significantly different from the average demographic characteristics in the study region.

  • Examine response rates for individual survey questions and evaluate whether adjustments to survey questions are required to promote a higher response rate.

If required, EPA will make the appropriate adjustments to the questionnaire or sampling frame.


4. Collection Methods and Follow-up

4(a) Collection Methods


The survey will be administered as a mixed-mode survey. Respondents will initially be sent a letter with a web URL and login information. A reminder letter will be sent one week later, and a packet containing a paper version of the survey two weeks after that. A mixed-mode survey approach will allow respondents to choose the mode of survey completion. The first contacts will encourage completion of the survey via the internet. The mail mode will attract respondents who have limited or no internet access or prefer to complete a hardcopy version of the questionnaire. Offering the internet option will allow respondents the flexibility to start and stop the survey at their convenience, continuing where they left off when they log back in. The online mode will also provide an interactive mapping tool that will provide a more specific pinpoint of locations selected by the respondent. The mixed-mode approach will help achieve the desired response rate, and will balance the increased accuracy of the internet mapping with the opportunity to obtain a higher response rate by offering a paper survey option.


4(b) Survey Response and Follow-up


The estimated response rate for the survey is 27% (Moore et al., 2014; Brennan, 2011; Edwards, 2014; Messer & Dillman, 2011). That is, 27% of the eligible households that receive the survey invitation are expected to return a completed survey. To obtain the highest response rate possible, EPA will follow Dillman et al.’s (2014) tailored design method, as described above.


5. Analyzing and Reporting Survey Results

5(a) Data Preparation


Survey responses to the web survey will be automatically entered into a database, and responses to the paper survey will be entered into an electronic database after they are returned. EPA will also clean the data to ensure that they are entered in a consistent manner and that any inconsistencies are addressed. Specifically, we will use the Double Entry method for closed-ended responses: the data are keyed twice and compared, and discrepancies are reconciled upon completion of the second entry. After all responses have been entered, the database contents will be converted into a format suitable for use with a statistical analysis software package.
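A minimal sketch of the Double Entry comparison step, using pandas to flag cells where the two keyings disagree, is shown below; the field names and values are hypothetical:

```python
# Minimal sketch of Double Entry reconciliation: compare two keyed
# versions of the same paper questionnaires and list cells that differ.
# Field names and values are hypothetical.
import pandas as pd

entry1 = pd.DataFrame({"id": [1, 2, 3], "q1_1": ["A", "B", "C"], "q5_4": [34, 51, 29]})
entry2 = pd.DataFrame({"id": [1, 2, 3], "q1_1": ["A", "D", "C"], "q5_4": [34, 51, 92]})

diffs = entry1.set_index("id").compare(entry2.set_index("id"))
print(diffs)  # rows/columns where the two keyings disagree, for reconciliation
```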


5(b) Analysis


Once the survey data have been converted into a data file, data will be analyzed using statistical analysis techniques. Our primary model will be a RUM model that will be used to value changes in water quality for recreation trips. We will also be estimating ancillary regression models to examine how people’s water quality perceptions relate to objective water quality measures, and to explore the relationship between sense of place and perceptions of water quality. The following section discusses the primary model that will be used to analyze the revealed preference data from the survey.


Analysis of Revealed Preference Data

The model for analysis of revealed preference data is grounded in the standard random utility model (RUM) applied to recreation site choice data. Bockstael, Hanemann, and Kling (1987) wrote the seminal work applying the model to recreation demand, one concerned specifically with recreation and water quality. The RUM model is applied extensively within revealed preference research, and allows for well-defined welfare measures (i.e., willingness to pay) to be derived from observed recreation choice behavior using the travel cost method (Phaneuf & Smith, 2005; Parsons, 2014).

Within the standard RUM applied to recreation choice behavior, individuals’ choice decisions as to where to recreate are based on choosing the alternative that gives the highest utility. The individual is assumed to face a set of I possible sites for a trip. The sites might be beaches, parks, public accesses, etc. (Parsons, 2014). Each site i (i=1, 2…I) is assumed to give the individual, n, some utility on a given choice occasion.

Following standard random utility theory, utility is assumed known to the respondent, but stochastic from the perspective of the researcher, such that:

U_in = V(X_i, D_n, TC_in) + ε_in     (1)

where:

X_i = a vector of variables describing attributes of recreation site i;

D_n = a vector of demographic and other attributes of the respondent n;

TC_in = the cost of choosing site i for respondent n, the travel cost;

V(·) = a function representing the empirically estimable component of utility;

ε_in = the stochastic or unobservable component of utility, modeled as an econometric error.


Consider V_in, the estimable component of utility, in a simple linear form, so that:

U_in = α·TC_in + β′X_i + γ′D_n + ε_in     (2)

where:

α = the travel cost coefficient, or marginal utility of the cost of the trip;

β = a vector of coefficients for site attributes, X_i;

γ = a vector of coefficients for demographics, D_n;

ε_in = the error term capturing site and individual attributes that influence site choice but are unobserved by the analyst.


Standard RUMs are based on the probability that a respondent’s utility from site i, U_in, exceeds the utility from alternative site j, U_jn, for all potential sites j ≠ i considered by the respondent. The RUM presumes that the respondent assesses the utility that would result from each recreational site choice i, and chooses the site that provides the highest utility, or:

V_n = max(V_1n, V_2n, …, V_In)     (3)

Trip utility, V_n, is the basis for welfare analysis in the RUM model. It is used to value a loss or gain from site access (removal or addition of a site) and changes in site quality, such as water quality (Parsons, 2014). Suppose water quality at sites 2 and 3 is improved through some program. If so, trip utility for person n becomes:

V̂_n = max(V_1n, V̂_2n, V̂_3n, V_4n, …, V_In)     (4)

where V̂_2n and V̂_3n denote the now higher utility due to the improved site quality. In this case trip utility increases from V_n to V̂_n. Change in utility is monetized by dividing the change by the negative of the coefficient on trip cost (α), which is a measure of the marginal utility of income.3 This gives the following welfare effect (w_n^clean) in monetary terms for changes in trip utility:

w_n^clean = (V̂_n − V_n)/(−α)

These are estimated changes in welfare represented on a per-trip, per-person basis (Parsons, 2014).
Econometric Estimation

Following Parsons (2014):

“Since the error terms, ε_in, on each site utility are unknown to researchers, the choice is treated as the outcome of a stochastic process in estimation. By assuming an explicit distribution for the error terms in equation [(2)], we can express each person’s choice as a probability of visiting each site in the choice set. The simplest approach is to assume that the error terms are independently and identically distributed (iid) type 1 extreme value random variables. This results in a multinomial logit specification for the choice probabilities (Greene, 2008, Ch. 23):

Pr_n(i) = exp(V_in) / Σ_j exp(V_jn)

Maximum likelihood is used to estimate the parameters using data on the actual site choices. Since researchers proceed as though choices are the outcome of a stochastic process, trip utility in equations [(2)-(4)] is also stochastic. For this reason, expected trip utility is used as an estimate of V_n in empirical work. It can be shown that each individual’s expected trip utility in a multinomial logit model is

E(V_n) = ln(Σ_i exp(V_in)) + C     (5)

where C is some unknown additive constant (Small and Rosen, 1981). The term ln(Σ_i exp(V_in)) is referred to as the ‘log-sum’ and is the empirical form of V_n used in welfare analysis.”
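To make the welfare calculation concrete, the sketch below monetizes a quality improvement using the log-sum form of expected trip utility in equation (5); the site utilities and travel cost coefficient are made up for illustration:

```python
# Numeric illustration of the per-trip welfare calculation, using
# made-up site utilities and travel cost coefficient (alpha = -0.05).
import math

alpha = -0.05                        # marginal utility of trip cost
V_baseline = [1.0, 0.4, 0.2, 0.8]    # V_in at four sites for person n
V_improved = [1.0, 0.9, 0.7, 0.8]    # water quality improved at sites 2 and 3

logsum = lambda v: math.log(sum(math.exp(x) for x in v))
w_clean = (logsum(V_improved) - logsum(V_baseline)) / (-alpha)
print(f"per-trip welfare gain: ${w_clean:.2f}")  # about $4.14 in this example
```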


The multinomial logit model allows for straightforward estimation of welfare changes, but the imposed structure does not allow for correlated errors across site alternatives. Thus, the model assumes that, as a site is closed or site attributes change, there will be a proportional increase in the probability of visitation to all other sites. This property is referred to as “independence of irrelevant alternatives” (IIA), and is usually unrealistic (Train, 2009, p. 45). For this reason, more flexible forms such as nested or mixed (or random parameters) logit models allow for various correlation structures across error terms in estimation (Train, 1998; Jeon et al., 2005). We plan to estimate such models using standard maximum likelihood for mixed conditional logit techniques, as described by Train (1998), Greene (2008) and others. Mixed logit model performance of alternative specifications will be assessed using standard statistical measures of model fit and convergence, as detailed by Greene (2008) and Train (1998).
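As a minimal sketch of this estimation strategy, the following simulates site choices under iid type 1 extreme value errors and recovers the utility coefficients by maximum likelihood for a multinomial (conditional) logit. The data, dimensions, and parameter values are synthetic; a production model would add travel cost and demographic covariates and the nested or mixed logit extensions discussed above:

```python
# Minimal conditional (multinomial) logit estimator for site choice,
# sketched with numpy/scipy on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_people, n_sites, n_attrs = 500, 8, 2
X = rng.normal(size=(n_sites, n_attrs))        # site attributes (e.g., travel cost, quality)
beta_true = np.array([-0.8, 0.5])

V = X @ beta_true                               # systematic utility per site
gumbel = rng.gumbel(size=(n_people, n_sites))   # iid type 1 extreme value errors
choices = np.argmax(V + gumbel, axis=1)         # utility-maximizing site choices

def neg_log_likelihood(beta):
    v = X @ beta
    log_probs = v - np.log(np.exp(v).sum())     # multinomial logit log-probabilities
    return -log_probs[choices].sum()

fit = minimize(neg_log_likelihood, np.zeros(n_attrs), method="BFGS")
print("estimated beta:", fit.x)                 # should be near beta_true
```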


Econometric Specification

Based on focus groups, expert review, and attributes of the policies under consideration, EPA anticipates the inclusion of a vector of explanatory variables representing site characteristics, including water quality, the type of waterbody the site is on, size, and possibly indicators for facilities and parking, depending on data availability across New England. The variable capturing variation in water quality allows respondents’ choices to reveal their WTP for improvements/degradations in water quality. Given that we will be eliciting perceived water quality conditions, we plan to validate that the objective water quality metrics capture the conditions by which people perceive water quality and that affect their site choice.

A vector of demographics will also be included, capturing heterogeneous preferences and types of people. We anticipate that the sense of place indexes will enter here, capturing varying degrees of place attachment, dependence, and identity with respect to the chosen site. The extent to which this correlates with water quality is of interest; if that correlation is strong and sense of place is ignored, our estimates of the marginal effects of changes in water quality will suffer from omitted variable bias.

Linear forms of the utility function in equation (1) are most common in the literature (Phaneuf, 2002), and EPA anticipates these will provide the basis for analysis. Model fit will be assessed following standard practice in the literature (e.g., Greene, 2008). To monetize the coefficients and site choice probability elasticities, the travel cost for an observed trip, TC_in in equation (2), will be added as a covariate. The travel cost will be calculated using the publicly available Google Maps API, which returns travel distance and time on a road network for pairs of geographic points (Google Maps API, 2016). Welfare estimates will be created using equation (5).

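As an illustration of this travel cost input, the sketch below queries the Google Maps Distance Matrix web service, one of the Google Maps API endpoints that returns network distance and travel time for an origin-destination pair; the ZIP code, coordinates, and API key shown are placeholders:

```python
# Sketch of a travel distance/time lookup for one origin-destination
# pair using the Google Maps Distance Matrix web service. The key is a
# placeholder; quotas and terms of use apply.
import requests

URL = "https://maps.googleapis.com/maps/api/distancematrix/json"
params = {
    "origins": "02882",                  # respondent home ZIP (hypothetical)
    "destinations": "41.6688,-70.2962",  # recreation site lat,lon (hypothetical)
    "units": "imperial",
    "key": "YOUR_KEY",                   # placeholder API key
}
resp = requests.get(URL, params=params).json()
element = resp["rows"][0]["elements"][0]  # assumes success; check element["status"] in production
print(element["distance"]["text"], element["duration"]["text"])
```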

5(c) Reporting Results


The results of the survey will be made public in an EPA report and journal publications. Provided information will include summary statistics for the survey data, extensive documentation for the statistical analysis, and a detailed description of the final results. The survey data will be released only after they have been thoroughly vetted to ensure that all potentially identifying information has been removed.


REFERENCES


Baines, A.D., Partin, M.R., Davern, M., and Rockwood, T. H. (2007). Mixed-mode administration reduced bias and enhanced poststratification adjustments in a health behavior survey. Journal of Clinical Epidemiology, 60(12), 1246-1255. doi: http://dx.doi.org/10.1016/j.jclinepi.2007.02.011

Berzelak, N., Vehovar, V., and Manfreda, K.L. (2015). Web mode as part of mixed-mode surveys of the general population: an approach to the evaluation of costs and errors. Advances in Methodology & Statistics / Metodoloski Zvezki, 12(1/2), 45-68.


Bloeser, J., Chen, C., Gates, M., Lipsky, A., and Longley-Wood, K. (2015). Characterization of Coastal and Marine Recreational Activity in the U.S. Northeast. http://neoceanplanning.org/wp-content/uploads/2015/10/Recreation-Study_Final-Report.pdf.


Bockstael, N.E., Hanemann, W.M., and Kling, C.L. (1987). Estimating the value of water quality improvements in a recreational demand framework. Water Resources Research, 23(5), 951-960.


Brennan, M. (2011). Recruiting opinion leaders and innovators: a comparison of mail versus 'web plus mail' using address-based sampling. Australasian Journal of Market & Social Research, 19(1), 9-23.


Cape Cod Commission. (2015). Cape Cod Area Wide Water Quality Management Plan Update.


Champ, P., Boyle, K., and Brown, T.C., Eds. (2003). A Primer on Nonmarket Valuation. The Economics of Non-Market Goods and Resources. Boston, Kluwer Academic Publishers.


Cuba, L. and Hummon, D.M. (1993a). Constructing a sense of home: place affiliation and migration across the life cycle. Sociological Forum, 8(4), 547-572.


Cuba, L. and Hummon, D.M. (1993b). A place to call home: identification with dwelling, community, and region. The Sociological Quarterly, 34(1), 111-131.


De Groot, R.S., Alkemade, R., Braat, L., Hein, L. and Willemen, L. (2010). Challenges in integrating the concept of ecosystem services and values into landscape planning, management and decision making. Ecological Complexity 7, 260-272.


Desvousges, W.H., and Smith, V.K. (1988). Focus groups and risk communication: The science of listening to data. Risk Analysis, 8, 479-484.


Desvousges, W.H., Smith, V.K., Brown, D.H., and Pate, D.K. (1984). The role of focus groups in designing a contingent valuation survey to measure the benefits of hazardous waste management regulations. Research Triangle Park, NC: Research Triangle Institute.


Dillman, D. A., Smyth, J. D., and Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-mode Surveys: The Tailored Design Method. John Wiley & Sons.


Edwards, M., Dillman, D.A., and Smyth, J.D. (2014). An experimental test of the effects of survey sponsorship on internet and mail survey response. Public Opinion Quarterly, 78(3), 734-750.


Egan, K.J., Herriges, J.A., Kling, C.L., and Downing, J.A. (2009). Valuing water quality as a function of water quality measures. American Journal of Agricultural Economics, 91(1), 106-123.


English, E.P. (2010). Discrete Choice, Recreation Demand, and Consumer Surplus. Cornell University.


Feather, P.M. (1994). Sampling and aggregation issues in random utility model estimation. American Journal of Agricultural Economics, 76(4), 772-780.


Genskow, K. and Prokopy, L., Eds. (2011). The Social Indicator Planning and Evaluation System (SIPES) for Nonpoint Source Management: A Handbook for Watershed Projects. 3rd Edition. Great Lakes Regional Water Program. 104 pp.


Google Maps API. (2016).


Graham, J. (2006). Guidance on Agency Survey and Statistical Information Collections. An Office of Management and Budget memorandum. Located at http://www.whitehouse.gov/omb/assets/omb/inforeg/pmc_survey_guidance_2006.pdf.


Greene, W.H. (2008). Econometric Analysis. Upper Saddle River, NJ: Pearson Prentice Hall, 92-250.


Groves, Robert M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646-675.


Groves, R.M., Couper, M.P., Presser, S., Singer, E., Tourangeau, R., Acosta, G.P., and Nelson, L. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, 70(5), 720-736.


Halbesleben, J.R.B. and Whitman, M.A. (2013). Evaluating survey quality in health services research: a decision framework for assessing nonresponse bias. Health Services Research, 48(3), 913-930.


Harrell, F. (2015). Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis. Springer.


Hilger, J., and Hanemann, M. (2008). Heterogeneous preferences for water quality: a finite mixture model of beach recreation in Southern California. CUDARE Working Papers.


Hohwü, L., Lyshol, H., Gissler, M., Jonsson, S.H., Petzold, M., and Obel, C. (2013). Web-based versus traditional paper questionnaires: a mixed-mode survey with a Nordic perspective. Journal of Medical Internet Research, 15(8), e173. doi:10.2196/jmir.2595


IASNR (International Association for Society and Natural Resources). (2016). About IASNR Webpage. www.iasnr.org Accessed: August 30, 2016.


Iannacchione, V.G. (2011). The changing role of address-based sampling in survey research. Public Opinion Quarterly, 75(3), 556-575.


Jeon, Y., Herriges, J.A., Kling, C.L., and Downing, J. (2005). The role of water quality perceptions in modeling lake recreation demand. In: The International Handbook on Non-market Environmental Valuation. Edward Elgar Publishing: Cheltenham, UK.


Johnston, R.J., Weaver, T.F., Smith, L.A., and Swallow, S.K. (1995). Contingent valuation focus groups: insights from ethnographic interview techniques. Agricultural and Resource Economics Review, 24(1), 56-69.


Jorgensen, B.S., and Stedman, R.C. (2001). Sense of place as an attitude: lakeshore owners’ attitudes toward their properties. Journal of Environmental Psychology, 21, 233-248.


Jorgensen, B.S., and Stedman, R.C. (2006). A comparative analysis of predictors of sense of place dimensions: attachment to, dependence on, and identification with lakeshore properties. Journal of Environmental Management, 79, 316-327.


Kaltenborn, B.P. (1998). Effects of sense of place on responses to environmental impacts. Applied Geography, 18(2), 169-189.


Kaoru, Y., Smith, V.K., and Liu, J.L. (1995). Using random utility models to estimate the recreational value of estuarine resources. American Journal of Agricultural Economics, 77(1), 141-151.


Kyle, G.T., Absher, J.D., and Graefe, A.R. (2003). The moderating role of place attachment on the relationship between attitudes towards fees and spending preferences. Leisure Sciences, 25, 33-50.


Kyle, G., Graefe, A., Manning, R., and Bacon, J. (2004). Effects of place attachment on users’ perceptions of social and environmental conditions in a natural setting. Journal of Environmental Psychology, 24, 213-225.


Link, M.W., Battaglia, M.P., Frankel, M.R., Osborn, L., and Mokdad, A.H. (2008). A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72(1), 6-27.


Matarrita-Cascante, D., Stedman, R., and Luloff, A.E. (2010). Permanent and seasonal residents’ community attachment in natural amenity-rich areas: exploring the contribution of landscape-related factors. Environment and Behavior, 42(2), 197-220.


Melstrom, R.T., and Jayasekera, D.H. (2016). Two-Stage Estimation to Control for Unobservables in a Recreation Demand Model with Unvisited Sites. Agricultural and Applied Economics Association Selected Paper. Located at: http://ageconsearch.umn.edu/bitstream/236252/2/Two-stage%20estimation.pdf.


Messer, B.L., and Dillman, D.A. (2011). Surveying the general public over the internet using address-based sampling and mail contact procedures. Public Opinion Quarterly, 75(3), 429-457.


Millennium Ecosystem Assessment. (2005). Ecosystems and Human Well-being: A Framework for Assessment. Island Press: Washington, DC.


Montaquila, J.M., and Olson, K.M. (2012). Practical Tools for Nonresponse Bias Studies. SRMS/AAPOR Webinar.


Mullendore, N.D., Ulrich-Schad, J.D., and Prokopy, L.S. (2015). U.S. farmers’ sense of place and its relation to conservation behavior. Landscape and Urban Planning, 140, 67-75.


Murray, C., Sohngen, B., and Pendleton, L. (2001). Valuing water quality advisories and beach amenities in the Great Lakes. Water Resources Research, 37(10), 2583-2590. doi: 10.1029/2001WR000409


NAREA (Northeastern Agricultural and Resource Economics Association). (2016). Welcome to NAREA website. www.narea.org Accessed: August 30, 2016.


NORC. (2016). GSS Response Rates. Available online: http://gss.norc.org/Documents/other/GSS%20Response%20Period%20and%20Field%20Period.pdf Accessed: October 26, 2016.


Opaluch, J.J., Grigalunas, T.A., Mazzotta, M., Johnston, R.J., and Diamantedes, J. (1999). Recreational and resource economic values for the Peconic Estuary. Prepared for the Peconic Estuary Program. Peace Dale, RI: Economic Analysis Inc.


Parsons, G.R. (2003). The travel cost model. In: A Primer on Nonmarket Valuation. Springer Netherlands, 269-329.


Parsons, G.R. (2014). Travel Cost. Selected Works of George Parsons. University of Delaware.


Parsons, G.R., and Hauber, A.B. (1998). Spatial boundaries and choice set definition in a random utility model of recreation demand. Land Economics, 74(1), 32-48.


Parsons, G.R., Kang, A.K., Leggett, C.G., and Boyle, K.J. (2009). Valuing beach closures on the Padre Island National Seashore. Marine Resource Economics, 24, 213-235.


Phaneuf, D.J. (2002). A random utility model for total maximum daily loads: Estimating the benefits of watershed-based ambient water quality improvements. Water Resources Research, 38(11), 36-1 – 36-11.


Phaneuf, D.J., and Smith, V.K. (2005). Recreation demand models. Handbook of Environmental Economics, 671-761.


Saldaña, J. (2009). The Coding Manual for Qualitative Researchers. Sage Publications: Thousand Oaks, CA.


Schmuhl, P., Van Duker, H., Gurley, K.L., Webster, A., and Olson, L.M. (2010). Reaching emergency medical services providers: Is one survey mode better than another? Prehospital Emergency Care, 14(3), 361. doi:10.3109/10903121003760184


Smith, J.W., Seekamp, E., McCreary, A., Davenport, M., Kanazawa, M., Holmberg, K., Wilson, B., and Nieber, J. (2016). Shifting demands for winter outdoor recreation along the North Shore of Lake Superior under variable rates of climate change: A finite-mixture modeling approach. Ecological Economics, 123, 1-13.


Stedman, R., Beckley, T., Wallace, S., and Ambard, M. (2004). A picture and a thousand words: Using resident-employed photography to understand attachment to high amenity places. Journal of Leisure Research, 36(4), 580-606.


Stedman, R., Amsden, B.L., and Kruger, L. (2006). Sense of place and community: points of intersection with implications for leisure research. Leisure/Loisir, 30(2), 393-404.


Steinback, S. and Kosaka, R. (2016). NOAA National Marine Fisheries Service Economists, personal communication.


Train, K. (1998). Recreation demand models with taste differences over people. Land Economics, 74(2), 230-239.


Train, K.E. (2009). Discrete Choice Methods with Simulation. Cambridge University Press: Cambridge.


U.S. Census Bureau (2013). Annual Estimates of the Resident Population: April 1, 2010 to July 1, 2012. Retrieved September 22, 2016 from http://factfinder2.census.gov/.


U.S. Department of Labor, Bureau of Labor Statistics. (2016). Quarterly Census of Employment and Wages. Retrieved October 2016 from: http://data.bls.gov/cew/apps/table_maker/v4/table_maker.htm


U.S. EPA. (2015a). Safe and Sustainable Water Resources: Strategic Research Action Plan 2016-2019.


U.S. EPA. (2015b). Sustainable and Healthy Communities: Strategic Research Action Plan 2016-2019.


Williams, D.R., Patterson, M.E., Roggenbuck, J.W., and Watson, A.E. (1992). Beyond the commodity metaphor: examining emotional and symbolic attachment to place. Leisure Sciences, 14, 29-46.


Yeh, C.-Y., Haab, T.C., and Sohngen, B.L. (2006). Modeling multiple-objective recreation trips with choices over trip duration and alternative sites. Environmental and Resource Economics, 34(2), 189-209.

1 For example, in rural areas, Rural Route box addresses have been converted to physical street addresses.

2 The average occupancy rate is based on American Community Survey (ACS) 2010–2014 data on occupied housing units in the study area; we used county-level occupancy rates in our calculations. The assumed 27% response rate is based on Brennan (2011), Edwards et al. (2014), and Messer and Dillman (2011).
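
To make the arithmetic concrete, a minimal sketch under hypothetical inputs (the 10,000-address mailing and 80% occupancy rate below are illustrative placeholders, not figures from this collection): $\text{expected completes} \approx N_{\text{mailed}} \times p_{\text{occupied}} \times p_{\text{response}} = 10{,}000 \times 0.80 \times 0.27 = 2{,}160$.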

3 α in equation (2) describes how site utility changes with a decrease in income (less money to spend on other goods if a trip is taken). Because trip cost “takes away from” income, α is the marginal effect of removing income, and –α is the marginal effect of adding income, i.e., the marginal utility of income.
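
For concreteness, a minimal sketch assuming the conventional linear-in-trip-cost random utility form (the symbols $tc_{ij}$ for the cost respondent $i$ faces to reach site $j$ and $X_j$ for site attributes are illustrative notation, not necessarily that of equation (2)):

$$V_{ij} = \alpha \, tc_{ij} + \beta' X_j, \qquad \frac{\partial V_{ij}}{\partial tc_{ij}} = \alpha, \qquad -\alpha = \text{marginal utility of income}.$$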
