2402ss01b (2011-06-28)


Willingness to Pay Survey for Section 316(b) Existing Facilities Cooling Water Intake Structures

OMB: 2040-0283


PART B OF THE SUPPORTING STATEMENT


1. Survey Objectives, Key Variables, and Other Preliminaries

1(a) Survey Objectives


The overall goal of this survey is to explore how public values (including non-use values) for fish and aquatic organisms are affected by impingement and entrainment (I&E) mortality at cooling water intake structures (CWIS) located at existing 316(b) facilities, as reflected in individuals’ willingness to pay for programs that would prevent such losses. EPA has designed the survey to provide data to support the following specific objectives:


  • To estimate the total values, including non-use values, that individuals place on preventing losses of fish and other aquatic organisms caused by CWIS at existing 316(b) facilities.

  • To understand how much individuals value preventing fish losses, increasing fish populations, improving aquatic ecosystems, and increasing commercial and recreational catch rates.

  • To understand how such values depend on the current baseline level of fish populations and fish losses, the scope of the change in those measures, and the certainty level of the predictions.

  • To understand how such values vary with respect to individuals’ economic and demographic characteristics.


Understanding total public values for fish resources lost to I&E mortality is necessary to determine the full range of benefits associated with reductions in impingement and entrainment losses at existing 316(b) facilities. Because non-use values may be substantial, failure to recognize such values may lead to improper inferences regarding policy benefits (Freeman 2003).


1(b) Key Variables


The key questions in the survey ask respondents whether or not they would vote for policies that would increase their cost of living in exchange for specified changes in: (a) I&E mortality losses of fish, (b) commercial fish sustainability, (c) long-term fish populations, and (d) the condition of aquatic ecosystems.1 More specifically, the choice experiment framework allows respondents to view pairs of multi-attribute policies associated with the reduction of I&E mortality losses. Respondents are asked to choose the program that they would prefer, or to reject both policies. This follows well-established choice experiment methodology and format (Adamowicz et al. 1998; Louviere et al. 2000; Bennett and Blamey 2001; Bateman et al. 2002). Important variables in the analysis of the choice questions are how the respondent votes, the amount of the cost-of-living increase, the number of fish losses that are prevented, the sustainability of commercial fishing, the change in fish populations, and the condition of aquatic ecosystems. Other important variables include whether or not the respondent is a user of the affected aquatic resources, household income, and other respondent demographics.


1(c) Statistical Approach


EPA believes that a statistical survey approach is appropriate. A census approach is impractical because contacting all households in the U.S. would be prohibitively expensive. On the other hand, an anecdotal approach is not sufficiently rigorous to provide a useful estimate of the total value of fish loss reductions for the 316(b) case. Thus, a statistical survey is the most reasonable approach to satisfying EPA’s analytic needs for the 316(b) regulation benefit analysis.

EPA has retained Abt Associates Inc. (55 Wheeler Street, Cambridge, MA 02138) as a contractor to assist in questionnaire design, sampling design, and analysis of the survey results.


1(d) Feasibility


The survey instrument was repeatedly pretested during a series of seven focus groups (conducted under a different ICR, OMB control # 2090-0028), in addition to the twelve focus groups conducted for the Phase III survey (EPA-HQ-OW-2004-0020), and it will be subject to peer review by reviewers in academia and government; EPA therefore does not anticipate that respondents will have difficulty interpreting or responding to any of the survey questions. Additionally, since the survey will be administered as a mail survey, it will be easily accessible to respondents. Thus, EPA believes that respondents will not face any obstacles in completing the survey, and that the survey will produce useful results. EPA has dedicated sufficient funding (under EPA contract No. EP-C-07-23) to design and implement the survey. Given the timetable outlined in Section A.5(d) of this document, the survey results will be available for timely use in the final benefits analysis for the 316(b) existing facilities rule.


2. Survey Design

2(a) Target Population and Coverage


The target population for this survey includes individuals from continental U.S. households who are 18 years of age or older. The sample will be chosen to reflect the demographic characteristics of the general U.S. population.


2(b) Sampling Design

(I) Sampling Frames


The sampling frame for this survey is the panel of individuals selected from the U.S. Postal Service Delivery Sequence File (DSF) to receive a mail survey. The overall sampling frame from which these individuals would be selected is the set of all individuals in continental U.S. households who are 18 years of age or older and who have a listed address. The DSF includes city-style addresses and P.O. boxes, and covers single-unit, multi-unit, and other types of housing structures, with known businesses excluded. In total, the DSF covers 97% of residences in the U.S.

For discussion of techniques that EPA will use to minimize non-response and other non-sampling errors in the survey sample, refer to Section 2(b)(II), below.


(II) Sample Sizes


The intended sample size for the survey is 2,288 households, counting only households that provide completed mail surveys. This sample size was chosen to provide statistically robust regression results while minimizing the cost and burden of the survey. Given this sample size, the level of precision achieved by the analysis will be more than adequate to meet the analytic needs of the benefits analysis for the 316(b) regulation; see Section 2(c)(I) below for further discussion of the level of precision required.


(III) Stratification Variables


The survey sample will be selected using a stratified selection process. For the selection of households, the population of households in the contiguous 48 states and the District of Columbia will be stratified by the geographic boundaries of four study regions: Northeast, Southeast, Inland, and Pacific. As described previously, the Northeast region includes the North Atlantic and Mid-Atlantic 316(b) benefits regions, the Southeast region includes the South Atlantic and Gulf of Mexico 316(b) benefits regions, the Pacific region includes states on the Pacific coast, and the Inland region includes all non-coastal states. The sample is allocated to each region in proportion to the total number of households in that region, with at least 288 completed surveys in each region; this is the number required to estimate the main effects and interactions under the experimental design model described in Section 4(a) of Part A. To accommodate this minimum, the sample sizes in the other regions will be slightly reduced. A sample of 288 households completing the national survey version will be distributed among the study regions in proportion to the regional survey sample (as shown in Table A1) to ensure that respondents to the national survey version are distributed across the continental U.S.
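For illustration, the following minimal Python sketch reproduces the proportional allocation described above, using the household counts from Table B1:

```python
# Proportional allocation of 2,000 completed regional surveys across the
# four study regions, using the household counts from Table B1. This is a
# minimal sketch of the allocation rule described above, not EPA's code.
households = {
    "Northeast": 23_281_296,
    "Southeast": 31_378_122,
    "Inland":    40_852_983,
    "Pacific":   16_158_206,
}
total_hh = sum(households.values())
target = 2_000

alloc = {region: round(target * n / total_hh) for region, n in households.items()}
print(alloc)
# {'Northeast': 417, 'Southeast': 562, 'Inland': 732, 'Pacific': 289}
# Table B1 reports 288 for the Pacific region, consistent with the stated
# 288-survey minimum, with the small difference absorbed by rounding.
```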


(IV) Sampling Method


Using the stratification design discussed above, respondents will be randomly selected from the U.S. Postal Service DSF database. Assuming that 30% of the selected households will actually return a completed mail survey (the completion rate), 7,628 questionnaires will need to be mailed to households.2 First, a sample of 7,628 addresses will be randomly selected from the DSF database. Then, a copy of the mail survey will be mailed to the selected addresses. To obtain population-based estimates of various parameters, each responding household will be assigned a sampling weight. This weight combines a base sampling weight, which is the inverse of the probability of selection of the household, with an adjustment for non-response. The weights will be used to produce estimates that are generalizable to the population from which the sample was selected (e.g., the percentage of the population participating in water-based recreation such as fishing and shellfishing). Proportional allocation of the sample to regions ensures an equal-probability sample. To estimate total WTP for the quantified environmental benefits of the 316(b) existing facilities rulemaking, the data will be analyzed statistically using a standard random utility model framework.
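The weighting step described above can be illustrated with a minimal sketch; the mailing counts below are hypothetical, implied by the 30% completion-rate assumption rather than taken from the survey plan:

```python
# Minimal sketch of the sampling weights described above. The base weight
# is the inverse of an address's selection probability; a non-response
# adjustment (the inverse of the response rate within a weighting class,
# here the region) is then applied. Counts are illustrative assumptions.
frame_size = {"Northeast": 23_281_296, "Southeast": 31_378_122}  # addresses
mailed     = {"Northeast": 1_390,      "Southeast": 1_873}       # ~completes / 0.30
completes  = {"Northeast": 417,        "Southeast": 562}

for region in frame_size:
    p_select = mailed[region] / frame_size[region]   # selection probability
    base_w   = 1.0 / p_select                        # base sampling weight
    nr_adj   = mailed[region] / completes[region]    # non-response adjustment
    final_w  = base_w * nr_adj
    # Weighted respondents reproduce the regional household total:
    assert round(completes[region] * final_w) == frame_size[region]
    print(region, round(final_w))
```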


(V) Multi-Stage Sampling


Multi-stage sampling will not be necessary for this survey.


2(c) Precision Requirements

(I) Precision Targets


Table B1, below, shows the target sample sizes for both the U.S. (excluding Alaska and Hawaii) and each of the four EPA study regions. At the regional level, a sample of 2,000 households (completed surveys) will provide estimates of population percentages with a margin of error ranging from 3.6 to 5.8 percentage points at the 95% confidence level. A sample of 288 households for the national survey version (completed surveys) will provide estimates of population percentages with a margin of error no greater than 5.8 percentage points at the 95% confidence level. (A sketch of this margin-of-error calculation follows Table B1.)


Table B1: Number of Households and Household Sample for Each EPA Study Region

Region                             | Household Population | Household Sample
-----------------------------------|----------------------|-----------------
Northeast                          | 23,281,296           | 417
Southeast                          | 31,378,122           | 562
Inland                             | 40,852,983           | 732
Pacific                            | 16,158,206           | 288
Total for Regional Survey Versions | 111,670,607          | 2,000
National Survey Version            | 111,670,607          | 288

Source: The number of households in each region was obtained based on the estimated population size and average household size from the 2006-2008 American Community Survey (ACS).
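The margin-of-error figures quoted above follow from the standard normal approximation for an estimated proportion, with the worst case at p = 0.5; a minimal sketch:

```python
# Margin of error (half-width of a 95% confidence interval) for an
# estimated population percentage, using the normal approximation with
# the worst-case proportion p = 0.5.
import math

def moe_points(n, p=0.5, z=1.96):
    """95% margin of error in percentage points for a sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (288, 417, 562, 732, 2000):
    print(f"n = {n}: +/- {moe_points(n):.1f} points")
# n = 288 yields 5.8 points and n = 732 yields 3.6, matching the 3.6-5.8
# range quoted for the regional samples; n = 2000 yields 2.2 points.
```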


(II) Non-Sampling Errors


One issue that may be encountered in stated preference surveys is the problem of protest responses. Protest responses are responses from individuals who reject the survey format or question design, even though they may value the resources being considered (Mitchell and Carson 1989). For example, some respondents may feel that any amount of I&E is unacceptable, and choose not to respond to the survey. To deal with this issue, EPA has included several questions, including an open-ended comments section, to help identify protest responses. The use of such methods to identify protest responses is well-established in the literature (Bateman et al. 2002). Moreover, many researchers (e.g., Bateman et al. 2002) suggest that a choice experiment format, such as that proposed here, may ameliorate such responses (over the earlier contingent valuation format).

A different type of non-sampling error is non-response bias. Non-response rates in this survey are affected by non-response among households sent the mail survey. EPA has designed the survey instrument to maximize the response rate. EPA will also follow Dillman’s mail survey approach (Dillman et al. 2008) to minimize the potential for non-response bias in the current survey:

  1. Preview letter: respondents will receive a preview letter that notifies the household that it has been selected and briefly describes the survey;

  2. First survey mailing: the survey booklet will be sent to selected households 1-2 weeks after the preview letter;

  3. Postcard reminder: a postcard reminder will be sent 1 week after the 1st survey mailing;

  4. Second survey mailing: the survey booklet will be sent to those households that did not respond to the first mailing, 3 weeks after the first survey mailing;

  5. Second reminder: a follow-up letter (Dillman et al. 2008) will be sent 1 week after the second survey mailing;

  6. Response rates will be tracked on a daily basis, so that corrective action can be undertaken immediately if any unexpected declines are encountered; and

  7. EPA will undertake a non-response bias analysis as detailed in the following section.

If necessary, EPA will use appropriate weighting or other statistical adjustments to correct for non-response bias.


Non-response Interviews

To determine whether there is any evidence of significant non-response bias in the completed sample, EPA will conduct a non-response follow-up study to identify potential differences in WTP estimates between respondents to the mail survey and those who did not return the questionnaire.

EPA has used a set of key attitudinal and socio-demographic variables thought to be associated with WTP for reducing fish mortality from cooling water intake structures to develop a short questionnaire that will take respondents approximately 5 minutes to complete. The short questionnaire will be implemented using a dual frame of telephone and priority mail contacts.

  • To select the priority mailing subsample, the entire sample of mail addresses will be matched against directory-listed landline telephone numbers. After the matching, the non-responding mail addresses will be divided into two strata. The first stratum will consist of those non-responding addresses with matched telephone numbers. The second stratum will consist of non-responding mail addresses that do not have matched telephone numbers. The total subsample will be allocated to the two strata in proportion to the number of non-respondents in each group. Households selected in each stratum will be sent a questionnaire by priority mailing. The mailing will include $2 in cash as an unconditional incentive for completion of the short questionnaire, to encourage a high response rate.

  • The telephone subsample will be selected from the first stratum (addresses with matched telephone numbers). This subsample will include those who did not respond to the priority mailing and those who were not sent the priority mailing, and it will be contacted by telephone. Once contact is achieved with a household, one adult is selected as the designated respondent; if there is more than one eligible respondent in the household, selection is randomized using the most recent/next birthday method. Selected households will be sent a letter prior to calling, which will include $2 in cash as an unconditional incentive for participation in the telephone interview, to promote a high response rate.

A second subsample, from stratum 2 (addresses without matched telephone numbers), consisting of those who did not respond to the priority mailing and those who did not receive it, will again be contacted by priority mailing. This will ensure adequate representation of households whose addresses do not match landline telephone numbers. Because the priority mail subsample covers households both with and without landlines, while the telephone subsample covers only those with landlines, a total subsample of 600 households is recommended, with 400 from the priority mailing and 200 from the telephone subsample. The subsample of 600 non-responding households permits EPA to reject the hypothesis of no difference in population percentages between respondents and non-respondents with 80 percent power when there is a difference of 12 percentage points, according to a two-sided statistical test. Since the estimates for the non-respondents are based on different sampling weights, EPA may only be able to detect differences of 13 or 14 percentage points. Table B2 shows the distribution of the priority mail and telephone subsamples across survey regions; a sketch of the underlying power calculation follows the table.

Table B2: Number of Non-responding Households in the Priority Mail and Telephone Subsamples

Region                             | Number of Non-Respondents | Number in Priority Mail Subsample | Number in Telephone Subsample | Number in Total Subsample (completes)
-----------------------------------|---------------------------|-----------------------------------|-------------------------------|--------------------------------------
Northeast                          | 973                       | 73                                | 36                            | 109
Southeast                          | 1,312                     | 98                                | 49                            | 147
Inland                             | 1,708                     | 128                               | 64                            | 192
Pacific                            | 675                       | 51                                | 25                            | 76
Total for Regional Survey Versions | 4,668                     | 350                               | 175                           | 524
National Survey Version            | 672                       | 50                                | 25                            | 76
Total – All Survey Versions        | 5,340                     | 400                               | 200                           | 600

Source: The number of households in each region was obtained based on the estimated population size and average household size from the 2006-2008 American Community Survey (ACS).
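The following sketch illustrates the type of power calculation referenced above, using the textbook two-proportion formula under simple random sampling; the 12 to 14 percentage point figures in the text reflect EPA's additional design assumptions (for example, variance inflation from unequal sampling weights), which this simplified sketch does not reproduce:

```python
# Minimum detectable difference in population percentages between
# respondents and non-respondents: two-sided test, alpha = 0.05 (z = 1.96),
# 80% power (z = 0.8416), worst-case p = 0.5. A design effect from unequal
# sampling weights scales the result by sqrt(deff). Inputs are illustrative.
import math

def min_detectable_diff(n1, n2, p=0.5, z_alpha=1.96, z_power=0.8416, deff=1.0):
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (z_alpha + z_power) * se * math.sqrt(deff)

# Example: 600 follow-up completes vs. 2,288 mail-survey completes.
print(f"{100 * min_detectable_diff(600, 2288):.1f} points")           # ~6.4
print(f"{100 * min_detectable_diff(600, 2288, deff=2.0):.1f} points") # ~9.1
```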



EPA will use the data from the non-response questionnaire to compare mail survey respondents and non-respondents. The items of information collected in the short questionnaire will help determine the types of individuals who are less likely to respond to the survey, and may help in forming weighting classes for adjusting respondents’ weights to account for non-response and minimize the resulting bias. The cover letter and questionnaire used for the priority mail subsample are included as Attachments 12 and 13, respectively. The cover letter and script for the telephone subsample are included as Attachments 11 and 14.


2(d) Questionnaire Design


The information requested by the survey is discussed in Section 4(b)(I) of Part A of the supporting statement. The full text of the draft questionnaire for the Northeast region is provided in Attachment 1 and the full text of the draft questionnaire for the national survey version is provided in Attachment 2.

The following bullets discuss EPA’s reasons for including the questions in the survey:

  • Relative Importance of Issues Associated with Industrial Cooling Water. EPA included this section to prepare respondents to answer the stated preference questions by motivating respondents to consider the relative importance of key issues associated with the use of cooling water by industrial facilities.

  • Concern for Policy Issues. EPA included this section to prepare respondents to answer the stated preference questions by motivating respondents to think about the relative importance of different policy issues.

  • Relative Importance of Effects. This section was included to promote understanding of the metrics included in the stated preference questions by asking respondents to consider the metrics’ relative importance prior to evaluating policy options, and by encouraging respondents to re-read previous pages for reminders if necessary.

  • Voting for Regulations to Prevent Fish Losses in the Respondent’s Region (or Nationally). The questions in this section are the key part of the survey. Respondents’ choices when presented with specific fish-related resource changes within their region and household cost increases are the main data that allow estimation of willingness-to-pay. The questions are presented in a choice experiment (A, B, or neither) format because this is an elicitation format that has been successfully used by a number of previous valuation studies (Adamowicz et al. 1998; Bateman et al. 2002; Bennett and Blamey 2001; Louviere et al. 2000; Johnston et al. 2002a, 2005; Opaluch et al. 1993). Furthermore, many focus group participants indicated that they have some previous experience making choices within a framework in which they are asked to vote for one of a series of options, and are comfortable with this format.

  • Reasons for Voting “No Policy”. This question provides information that will be used by EPA to identify protest responses.

  • Respondent Certainty and Reasons for Voting. This section is designed to identify respondents who incorrectly interpreted the choice questions or the uncertainty of outcomes. Responses to these questions are important to successfully control for hypothetical bias.

  • Recreational Experience. This question elicits recreational experience data to test if certain respondent characteristics influence responses to the referendum questions. This question will also allow EPA to identify resource non-users, for purposes of estimating non-user WTP (to gauge the relative importance of non-use values to overall benefits).

  • Demographics. Responses to these questions will be used to estimate the influence of demographic variables on respondents’ voting choices, and ultimately, their WTP to prevent I&E mortality losses of fish. This information will allow EPA to use regression results to estimate WTP for populations in different regions affected by the 316(b) rule for existing facilities.

  • Comments. This section is primarily intended to help identify protest responses, i.e. responses from individuals who rejected the format of the survey or the way the questions were phrased.


3. Pretests and Pilot Tests


EPA conducted extensive pretests of the survey instrument during a set of seven focus groups (EPA ICR # 2090-0028), in addition to the twelve focus groups conducted for the Phase III survey. These focus groups included individual cognitive interviews with survey respondents (Kaplowicz et al. 2004), and think-aloud or verbal protocol analyses (Schkade and Payne 1994). Individuals in these focus groups completed draft survey questionnaires and provided comments and feedback about the survey format and content, their interpretations of the questions, and other issues relevant to stated preference estimation. Particular emphasis in these survey pretests was on testing for the presence of potential biases associated with poorly-designed stated preference surveys, including hypothetical bias, strategic bias, symbolic (warm glow) bias, framing effects, embedding biases, methodological misspecification, and protest responses (Mitchell and Carson 1989). Based on focus group and cognitive interview responses, EPA made various improvements to the survey questionnaire including changes to ameliorate and minimize these biases in the final survey instrument.

EPA intends to implement this survey in two stages. First, EPA will implement the Northeast version of this survey. EPA will use the Northeast version of the survey as a pilot study to validate the survey responses, including the following:

  • Compare the actual and expected response rates; based on typical mail survey response rates for surveys of this type, the expected response rate is between 20% and 40% of deliverable surveys;

  • Assess whether demographic characteristics of the respondents differ significantly from the average demographic characteristics in the Northeast region;

  • Check what proportion of respondents choose the status quo. If no one chooses the status quo, it often indicates that the cost levels are too low. Pure random selection would result in 33% of survey respondents choosing the status quo; if fewer than 15-20% of respondents choose the status quo in the pilot study, EPA would consider increasing the cost levels;

  • Make sure there are no anomalous patterns, such as the vast majority of respondents always choosing Option A (e.g., if two-thirds (66%) of respondents choose Option A, this might indicate a systematic bias);

  • Look at follow-up questions 8 and 9 to make sure the responses suggest that appropriate tradeoffs are being made and that people feel confident about their responses. If either the median or the mean answer to these questions is less than 3.0 (neutral), this would indicate a problem;

  • Examine response rates for individual survey questions and evaluate whether adjustments to survey questions are required to promote a higher response rate.

These data can be analyzed quickly using means and standard deviations, without introducing significant delays in the survey implementation schedule. If required, EPA will make the appropriate adjustments to the sampling frame or attribute levels (e.g., increase or reduce the number of surveys mailed to households, or increase the costs to households in the choice questions).
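These pilot criteria lend themselves to simple scripted checks. The sketch below assumes a long-format table of pilot responses with one row per answered choice question; the column names and codings are illustrative assumptions:

```python
# Illustrative pilot diagnostics for the criteria listed above. The table
# is assumed to have a 'choice' column coded "A", "B", or "neither", and
# follow-up ratings q8 and q9 on a 1-5 scale (assumed layout, not EPA's).
import pandas as pd

def pilot_flags(df: pd.DataFrame) -> dict:
    shares = df["choice"].value_counts(normalize=True)
    return {
        # Under pure random choice, ~33% would pick the status quo; fewer
        # than 15-20% suggests the cost levels are too low.
        "consider_raising_costs": shares.get("neither", 0.0) < 0.15,
        # Roughly two-thirds of responses landing on Option A would
        # suggest a systematic bias.
        "option_a_skew": shares.get("A", 0.0) >= 0.66,
        # Follow-up questions 8 and 9: a mean or median below 3.0
        # (neutral) indicates a problem.
        "q8_below_neutral": min(df["q8"].mean(), df["q8"].median()) < 3.0,
        "q9_below_neutral": min(df["q9"].mean(), df["q9"].median()) < 3.0,
    }
```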


4. Collection Methods and Follow-up

4(a) Collection Methods


The survey will be administered as a mail survey. Respondents will be asked to mail the completed survey back to EPA.


4(b) Survey Response and Follow-up


The target response rate for the mail survey is 30 percent. That is, 30 percent of households which are sent the mail survey are expected to return a completed survey. To improve the response rate, all of these households will receive a reminder postcard approximately one week after the initial questionnaire mailing. Then, approximately three weeks after the reminder postcard, all those who have not responded will receive a second copy of the questionnaire with a revised cover letter. The following week, a letter reminding them to complete the survey will be sent.

As noted in Section 2(b), the survey sample will be selected using a stratified selection process. For the selection of households, the population of households in the contiguous 48 states and the District of Columbia will be stratified by the geographic boundaries of the four EPA study regions. In addition, EPA will administer a national version of the survey that does not require stratification. We will track the response rates for each of the regional surveys and for the national version of the survey to ensure that the rates are reasonable. We will also examine the frame characteristics of non-respondents to determine whether there are any substantial biases in the estimates because of an imbalance in the distribution of certain important subgroups in the sample.


5. Analyzing and Reporting Survey Results

5(a) Data Preparation


Since the survey will be administered as a mail survey, survey responses will be scanned and entered into an electronic database after they are returned. After all responses have been entered, the database contents will be converted into a format suitable for use with a statistical analysis software package. The mail survey, database management, and data set conversion will be conducted by Abt Associates Inc.

All survey responses will be vetted for completeness. Additionally, respondents’ answers to the choice experiment questions will be tested to ensure that they are internally consistent with respect to scope and other expectations of neoclassical preference theory, such as transitivity. Responses that satisfy transitivity exhibit consistent orderings when separate choices among policy options are compared. For example, if values for Policy 1 are greater than those for Policy 2, and values for Policy 2 are greater than those for Policy 3, then values for Policy 1 should also be greater than values for Policy 3.
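A minimal sketch of such a transitivity screen, for a respondent's pairwise preferences among three policies (the input coding is an illustrative assumption):

```python
# Flag intransitive preference cycles such as 1 > 2, 2 > 3, but 3 > 1.
# Input: prefers[(i, j)] gives the preferred policy in the (i, j)
# comparison, for the pairs (1, 2), (2, 3), and (1, 3).
def is_transitive(prefers):
    a = prefers[(1, 2)]
    b = prefers[(2, 3)]
    c = prefers[(1, 3)]
    # The only intransitive patterns among three items are the two cycles:
    # 1>2, 2>3, 3>1  and  2>1, 3>2, 1>3.
    if a == 1 and b == 2 and c == 3:
        return False
    if a == 2 and b == 3 and c == 1:
        return False
    return True

# Example from the text: Policy 1 beats 2 and 2 beats 3, so 1 must beat 3.
assert is_transitive({(1, 2): 1, (2, 3): 2, (1, 3): 1})
```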


5(b) Analysis


Once the survey data has been converted into a data file, it will be analyzed using statistical analysis techniques. The following section discusses the model that will be used to analyze the stated preference data from the survey.


Analysis of Stated Preference Data

The model for analysis of stated preference data is grounded in the standard random utility model of Hanemann (1984) and McConnell (1990). This model is applied extensively within stated preference research, and allows well-defined welfare measures (i.e., willingness to pay) to be derived from choice experiment models (Bennett and Blamey 2001; Louviere et al. 2000). Within the standard random utility model applied to choice experiments, hypothetical policy alternatives are described in terms of attributes that focus groups (Johnston et al. 1995; Adamowicz et al. 1998; Opaluch et al. 1993) reveal as relevant to respondents’ utility, or well-being. One of these attributes would include a mandatory monetary cost to the respondent’s household.

Applying this standard model to choices among policies to reduce I&E mortality losses, EPA defines a standard utility function Ui(.) that includes environmental attributes of an I&E reduction plan and the net cost of the plan to the respondent. Following standard random utility theory, utility is assumed known to the respondent, but stochastic from the perspective of the researcher, such that


(1) Ui(.) = U(Xi, D, Y-Fi) = v(Xi, D, Y-Fi) + εi


where:

Xi = a vector of variables describing attributes of I&E reduction plan i;

D = a vector characterizing demographic and other attributes of the respondent;

Y = disposable income of the respondent;

Fi = mandatory additional cost faced by the household under plan i;

v(.) = a function representing the empirically estimable component of utility; and

εi = the stochastic or unobservable component of utility, modeled as an econometric error.


Econometrically, a model of such a preference function is obtained by methods designed for limited dependent variables, because researchers only observe the respondent’s choice among alternative policy options, rather than observing values of Ui(.) directly (Maddala, 1983; Hanemann, 1984). Standard random utility models are based on the probability that a respondent’s utility from a policy Plan i, Ui(.), exceeds the utility from alternative Plans j, Uj(.), for all potential plans j ≠ i considered by the respondent. In this case, the respondent’s choice set of potential policies also includes maintaining the status quo. The random utility model presumes that the respondent assesses the utility that would result from each I&E reduction plan i (including the status quo), and chooses the plan that would offer the highest utility.

When faced with k distinct plans defined by their attributes, the respondent will choose plan i if the anticipated utility from plan i exceeds that of all other k-1 plans. Drawing from (1), the respondent will choose plan i if


(2) (v(Xi, D, Y-Fi) + εi) ≥ (v(Xk, D, Y-Fk) + εk) for all k ≠ i.


If the εi are assumed independently and identically drawn from a type I extreme value (Gumbel) distribution, the model may be estimated as a conditional logit model, as detailed by Maddala (1983), Greene (2003) and others. This model is most commonly used when the respondent considers more than two options in each choice set (e.g., Plan A, Plan B, Neither Plan), and results in an econometric (empirical) estimate of the systematic component of utility v(.), based on observed choices among different policy plans. Based on this estimate, one may calculate welfare measures (willingness to pay) following the well-known methods of Hanemann (1984), as described by Freeman (2003) and others. Following standard choice experiment methods (Adamowicz et al. 1998; Bennett and Blamey 2001), each respondent will consider questions including three potential choice options (i.e., Plan A, Plan B, Neither Plan)—choosing the option that provides the highest utility as noted above. Following clear guidance from the literature, a “neither plan” or status quo option is always included in the visible choice set, to ensure that WTP measures are well-defined (Louviere et al. 2000).
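For concreteness, the following minimal sketch estimates a conditional logit by maximum likelihood on simulated data and recovers WTP as the negative ratio of an attribute coefficient to the cost coefficient. The data shapes, names, and two-attribute utility are illustrative assumptions rather than the final specification:

```python
# Conditional logit sketch for the three-option format (Plan A, Plan B,
# Neither). Utility is linear in attributes; WTP per unit of an attribute
# is -beta_attr / beta_cost (Hanemann 1984). All names are illustrative.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, y):
    """X: (n_obs, n_alt, n_attr) attribute levels; y: chosen alternative."""
    v = X @ beta                              # systematic utility (n_obs, n_alt)
    v = v - v.max(axis=1, keepdims=True)      # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].sum()

def fit_clogit(X, y):
    res = minimize(neg_loglik, np.zeros(X.shape[2]), args=(X, y), method="BFGS")
    return res.x

# Simulated example: attribute 0 is an environmental improvement,
# attribute 1 is cost; the third alternative is the zero-change status quo.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 3, 2))
X[:, 2, :] = 0.0
true_beta = np.array([1.0, -2.0])
v = X @ true_beta
probs = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

beta_hat = fit_clogit(X, y)
print("beta:", beta_hat, "WTP:", -beta_hat[0] / beta_hat[1])  # WTP near 0.5
```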

EPA also anticipates that respondents will consider more than one choice question within the same survey, to increase information obtained from each respondent. This is standard practice within choice experiment and dichotomous choice contingent valuation surveys (Poe et al. 1997; Layton 2000). While respondents will be instructed to consider each choice question as independent of other choice questions, it is nonetheless standard practice within the literature to allow for the potential of correlation among questions answered within a single survey by a single respondent. That is, responses provided by individual respondents may be correlated even though responses across different respondents are considered independent and identically distributed (Poe et al. 1997; Layton 2000; Train 1998).

There are a variety of approaches to such potential correlation. Following standard practice, EPA anticipates the estimation of a variety of models to assess their performance. Models to be assessed include random effects and random parameters (mixed) discrete choice models, now common in the stated preference literature (Greene 2003; McFadden and Train 2000; Poe et al. 1997; Layton 2000). Within such models, selected elements of the coefficient vector are assumed normally distributed across respondents, often with free correlation allowed among parameters (Greene 2002). If only the model intercept is assumed to include a random component, then a random effects model results. If both slope and intercept parameters may vary across respondents, then a random parameters model is estimated. EPA anticipates that such models will be estimated using standard maximum likelihood for mixed conditional logit techniques, as described by Train (1998), Greene (2002) and others. Mixed logit model performance of alternative specifications will be assessed by EPA using standard statistical measures of model fit and convergence, as detailed by Greene (2002, 2003) and Train (1998).
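As an illustration of the random parameters approach, the sketch below codes a simulated log-likelihood in which the coefficient on one attribute varies normally across respondents, so that all of a respondent's answers share a single coefficient draw; this is what induces the within-respondent correlation. All names and shapes are assumptions:

```python
# Mixed (random parameters) logit via simulated maximum likelihood. The
# "fish saved" coefficient is normal across respondents; the cost
# coefficient is fixed. fish, cost: (N, T, J) attribute arrays; choice:
# (N, T) chosen alternative indices; draws: (R, N) standard normal draws
# held fixed across optimizer iterations. All illustrative.
import numpy as np
from scipy.optimize import minimize

def neg_sim_loglik(theta, fish, cost, choice, draws):
    b_mu, b_sd, b_cost = theta
    b_fish = b_mu + b_sd * draws                          # (R, N)
    v = b_fish[:, :, None, None] * fish[None] + b_cost * cost[None]
    v = v - v.max(axis=-1, keepdims=True)                 # (R, N, T, J)
    p = np.exp(v)
    p = p / p.sum(axis=-1, keepdims=True)
    N, T = choice.shape
    rows = np.arange(N)[:, None]                          # (N, 1)
    cols = np.arange(T)[None, :]                          # (1, T)
    p_obs = p[:, rows, cols, choice]                      # (R, N, T)
    # Joint probability of a respondent's T answers per draw; the average
    # over draws approximates the mixed logit probability.
    lik = p_obs.prod(axis=2).mean(axis=0)                 # (N,)
    return -np.log(lik + 1e-300).sum()

# Usage sketch (arrays as described above):
# draws = np.random.default_rng(0).standard_normal((200, N))
# res = minimize(neg_sim_loglik, x0=np.array([0.5, 0.5, -1.0]),
#                args=(fish, cost, choice, draws), method="BFGS")
```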


Advantages of Choice Experiments

Choice experiments following the random utility model outlined above are favored by many researchers over other variants of stated preference methodology (Adamowicz et al. 1998; Bennett and Blamey 2001), and may be viewed as a “natural generalization of a binary discrete choice CV [contingent valuation]” (Bateman et al. 2002, p. 271). Advantages of choice experiments include a capacity to address choices over a wide array of potential policies, a grounding in well-developed random utility theory, and the similarity of the discrete choice context to familiar referendum or voting formats (Bennett and Blamey 2001). Compared to other types of stated preference valuation, choice experiments are better able to measure the marginal value of changes in the characteristics or attributes of environmental goods, and avoid response difficulties and biases (Bateman et al. 2002). For example, choice experiments may reduce the potential for ‘yea-saying’ and symbolic biases (Blamey et al. 1999; Mitchell and Carson 1989), as many pairs of multi-attribute policy choices (e.g., Plan A, Plan B, Neither) will offer no clearly superior choice for a respondent wishing to express solely symbolic environmental motivations. For similar reasons choice experiments may ameliorate protest responses (Bateman et al. 2002). An additional advantage of such methods is that they permit straightforward assessments of the impact of resource scope and scale on respondents’ choices. This will enable EPA to easily conduct scope tests and other assessments of the validity of survey responses (Bateman et al. 2002, p. 296-342). Finally, such methods are well-established in the stated preference literature (Bennett and Blamey 2001). Additional details of choice experiment methodology (also called choice modeling) are provided by Bennett and Blamey (2001), Adamowicz et al. (1998), Louviere et al. (2000) and many other sources in the literature.

An additional advantage of choice experiments in the present application is that they are commonly applied to assess WTP for ecological resource improvements of a type quite similar to those at issue in the 316(b) policy case. Examples of the application of choice experiments to estimate WTP associated with changes in aquatic life and habitat include Hoehn et al. (2004), Johnston et al. (2002b), and Opaluch et al. (1999), among others. EPA has drawn upon these and other examples of successful choice experiment design to provide a basis for survey design in the present case.

A final key advantage of choice experiments in the present application is the ability to estimate respondents’ WTP for a wide range of different potential outcomes of 316(b) policies, differentiated by their attributes. The proposed choice experiment survey versions will allow different respondents to choose among a wide variety of hypothetical policy options, some with larger and others with very small changes in the presented attributes (annual fish losses, long-term fish populations, recreational and commercial catch, ecosystem condition, and household cost). That is, because the survey is to be implemented as a choice experiment survey, levels of attributes in choice scenarios will vary across respondents (Louviere et al. 2000). The experimental design will also explicitly allow for variation in baseline population and harvest levels, following standard practice in the literature (Louviere et al. 2000; Bateman et al. 2002).

Aside from providing the capacity to estimate WTP for a wide range of policy outcomes, this approach also frees EPA from having to predetermine a single policy outcome for which WTP will be estimated. Given the potential biological uncertainty involved in the 316(b) policy case, the ability to estimate values for a wide range of potential outcomes is critical.

The ability to estimate WTP for a wide range of different policy outcomes is a fundamental property of the choice experiment method (Bateman et al. 2002; Louviere et al. 2000; Adamowicz et al. 1998). For the purpose of stated preference survey implementation, EPA will use four geographic regions: Northeast, Southeast, Inland, and Pacific. The Northeast regional survey is included in this ICR as Attachment 1. In addition, EPA will administer a national version of the survey that is included as Attachment 2. EPA emphasizes that the survey versions included in this ICR are for illustration only; they are but two of what will ultimately be a large number of different survey versions covering a wide range of potential policy outcomes as described in Attachment 5. The experimental design (see below) will allow for survey versions showing a range of different baseline and resource improvement levels, where these levels are chosen to (almost certainly) bound the “actual” levels. Given that there will almost certainly be some biological uncertainty regarding the specifics of the “actual” baselines and improvements, the resulting valuation estimates will allow flexibility in estimating WTP for a wide range of different circumstances. Additional details on the statistical (experimental) design of the choice experiment are provided in later sections of this ICR.



Comment on Survey Preparation and Pretesting

Following standard practice in the stated preference literature (Johnston et al. 1995; Desvousges and Smith 1988; Desvousges et al. 1984; Mitchell and Carson 1989), all survey elements and methods were subjected to extensive development and pretesting in focus groups to ameliorate the potential for survey biases (cf. Mitchell and Carson 1989), and to ensure that respondents have a clear understanding of the policies and goods under consideration, such that informed choices may be made that reflect respondents’ underlying preferences. Following the guidance of Arrow et al. (1993), Johnston et al. (1995), and Mitchell and Carson (1989), focus groups were used to ensure that respondents are aware of their budget constraints, the scope of the resource changes under consideration, and the availability of substitute environmental resources.

As noted above, survey pretests included individual cognitive interviews with survey respondents (Kaplowicz et al. 2004), and think-aloud or verbal protocol analyses (Schkade and Payne 1994). Individuals in these pretests completed draft survey questionnaires and provided comments and feedback about the survey format and content, their interpretations of the questions, and other issues relevant to stated preference estimation. Based on their responses, EPA made improvements to the survey questionnaire. Of particular emphasis in these survey pretests was testing for the presence of potential biases including hypothetical bias, strategic bias, symbolic (warm glow) bias, framing effects, embedding biases, methodological misspecification, and protest responses (Mitchell and Carson 1989). Based on focus group and cognitive interview responses, EPA made various improvements to the survey questionnaire including changes to ameliorate and minimize these biases in the final survey instrument. Results from focus groups and cognitive interviews provided evidence that respondents answer the stated preference survey in ways appropriate for stated preference WTP estimation, and that their responses generally do not reflect the biases noted above.

The number of focus groups used in survey design, seven (excluding the 12 focus groups conducted for the Phase III survey), exceeds the number of focus groups used in typical applications of stated preference valuation. Moreover, EPA incorporated cognitive interviews as detailed by Kaplowicz et al. (2004). We note that the current survey instrument is built upon an earlier version that was peer reviewed in January 2006 (Versar 2006), and it incorporates recommendations received from that peer review panel. Given this extensive effort in survey design, applying state-of-the-art methods from the literature, EPA believes that the survey design far exceeds standards that are typical in the published literature. The details of focus groups conducted for the previous Phase III survey are discussed by EPA in a prior ICR (#2155.01).


Econometric Specification

Based on prior focus groups, expert review, and attributes of the policies under consideration, EPA anticipates that four attributes will be incorporated in the vector of variables describing attributes of an I&E reduction plan (vector Xi), in addition to the attribute characterizing unavoidable household cost Fi.3 These attributes will characterize the annual reduction in I&E losses (x1), anticipated effects on fish populations (all fish) (x2), anticipated effects on commercial fish populations (x3), and anticipated effects on aquatic ecosystem condition (x4). These variables will allow respondents’ choices to reveal the potential impact of both annual fish losses and long-term population effects on utility. Based on results of focus groups and expert opinion, these will be presented as averages across identified aggregate species groups. The survey will also allow for changes in baseline population levels, to assess whether WTP depends on the “starting point” of fish populations.

Although the literature offers no firm guidance regarding the choice of specific functional forms for v(.) within choice experiment estimation, in practice, linear forms are often used (Johnston et al. 2003b), with some researchers applying more flexible (e.g., quadratic) forms (Cummings et al. 1994). Standard linear forms are anticipated as the simplest form to be estimated by EPA, from which more flexible functional forms (able to capture interactions among model variables) will be derived and compared. Anticipated extensions to the simple linear model include more fully-flexible forms that allow for systematic variations in slope and intercept coefficients associated with demographic or other attributes of respondents. Such variations may be incorporated by appending the simple linear specification with quadratic interactions between variables in vector D and the variables Xi and Fi (cf. Johnston et al. 2003b).

One may also incorporate quadratic interactions between policy attributes Xi and Fi, (cf. Johnston et al. 2002b). Such quadratic extensions of the basic linear model allow for additional flexibility in modeling the relationship between policy attributes (including cost) and utility, as suggested by Hoehn (1991) and Cummings et al. (1994). EPA anticipates estimating both simple linear specifications, as well as more fully-flexible quadratic specifications following Hoehn (1991) and Cummings et al. (1994), to identify those models which provide the most satisfactory statistical fit to the data and correspondence to theory. EPA anticipates estimating all models within the mixed logit framework outlined above. Model fit will be assessed following standard practice in the literature (e.g., Greene 2003; Maddala 1983). Linear and quadratic functional forms discussed here, as they are common practice in the literature, are presented and discussed in many existing sources (e.g., Hoehn 1991, Cummings et al. 1994, Johnston et al. 1999, and Johnston et al. 2003b).

For example, for each choice occasion, the respondent may choose Option A, Option B, or Neither, where “neither” is characterized by 0 values for all attributes (except Baseline population levels). Assuming that the model is estimated using a standard approximation for the observable component of utility, an econometric specification of the desired model (within the overall multinomial logit model) might appear as:


v(.) = β0 + β1(Fish Saved) + β2(Change in Populations of All Fish) + β3(Change in Commercial Fish Populations) + β4(Change in Condition of Aquatic Ecosystem) + β5(Cost)
+ β6(Fish Saved)(Baseline) + β7(Change in Populations of All Fish)(Baseline) + β8(Change in Commercial Fish Populations)(Baseline) + β9(Change in Aquatic Ecosystem)(Baseline) + β10(Cost)(Baseline)
+ β11(Fish Saved)(Change in Populations of All Fish) + β12(Fish Saved)(Change in Commercial Fish Populations) + β13(Fish Saved)(Change in Aquatic Ecosystem) + β14(Change in Populations of All Fish)(Change in Commercial Fish Populations) + β15(Change in Populations of All Fish)(Change in Aquatic Ecosystem) + β16(Change in Commercial Fish Populations)(Change in Aquatic Ecosystem)



Main effects are β1 through β5; baseline interactions are β6 through β10; and attribute interactions are β11 through β16. This sample specification—one of many to be estimated by EPA—allows one to estimate the relative main effects of policy attributes (the annual reduction in I&E losses, long-term effects on populations of all fish, long-term effects on commercial fish populations, and effects on aquatic ecosystem condition) on utility, as well as interactions between these main effects. This specification also allows EPA to assess the impact of baseline fish populations on the marginal value of changes in other model attributes. In sum, specifications such as this allow WTP to be estimated for a wide range of potential policy outcomes, and allow EPA to test for a wide range of main effects and interactions within the utility function of respondents. Such flexible utility specifications for stated preference estimation are recommended by numerous sources in the literature, including Johnston et al. (2002b), Hoehn (1991), and Cummings et al. (1994), and follow standard practice in choice modeling outlined by Louviere et al. (2000) and others.
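To make the specification concrete, the following sketch builds the corresponding design matrix from a long-format attribute table (one row per alternative); the column names are illustrative assumptions, and the loops mirror β1 through β16:

```python
# Construct the main effects (beta_1 - beta_5), baseline interactions
# (beta_6 - beta_10), and pairwise attribute interactions (beta_11 -
# beta_16) of the sample specification above. Column names are assumed.
import pandas as pd

MAIN = ["fish_saved", "pop_all_fish", "pop_commercial", "ecosystem", "cost"]

def build_design(df: pd.DataFrame) -> pd.DataFrame:
    X = df[MAIN].copy()                             # beta_1 - beta_5
    for col in MAIN:                                # beta_6 - beta_10
        X[f"{col}_x_baseline"] = df[col] * df["baseline"]
    attrs = MAIN[:-1]                               # non-cost attributes
    for i, a in enumerate(attrs):                   # beta_11 - beta_16
        for b in attrs[i + 1:]:
            X[f"{a}_x_{b}"] = df[a] * df[b]
    return X                                        # 16 columns in total
```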


Experimental Design

Experimental design for the choice experiment surveys will follow established practices. Fractional factorial design will be used to construct choice questions with an orthogonal array of attribute levels, with questions randomly divided among distinct survey versions (Louviere et al. 2000). Based on standard choice experiment experimental design procedures (Louviere et al. 2000), the number of questions and survey versions will be determined by, among other factors: a) the number of attributes in the final experimental design and complexity of questions, b) the extent to which estimation of interactions and higher-level effects is desired, and c) pretests revealing the number of choice experiment questions that respondents are willing/able to answer in a single survey session, and the number of attributes that may be varied within each question while maintaining respondents’ ability to make appropriate neoclassical tradeoffs.

Based on the models proposed above and recommendations in the literature, EPA anticipates an experimental design that allows for an ability to estimate main effects, quadratic effects, and two-way interactions between policy attributes (Louviere et al. 2000). Choice sets (Bennett and Blamey 2001), including variable level selection, will be designed by EPA based on the goal of illustrating realistic policy scenarios that “span the range over which we expect respondents to have preferences, and/or are practically achievable” (Bateman et al. 2002, p. 259), following guidance in the literature. This includes guidance with regard to the statistical implications of choice set design (Hanemann and Kanninen 1999) and the role of focus groups in developing appropriate choice sets (Bennett and Blamey 2001).

Based on these guiding principles, the following experimental design framework is proposed by EPA. The experimental design will be conducted by Abt Associates Inc. The experimental design will allow for both main effects and selected interactions to be efficiently estimated, based on a choice experiment framework. For a more detailed discussion of the experimental design, refer to Attachment 5.

Each treatment (survey question) includes two choice options (A and B), characterized by four environmental attributes and a cost variable that vary across the two options (Commercial Fish Populations, Fish Populations (all fish), Fish Saved per Year, Condition of Aquatic Ecosystems, and Increase in Cost of Living of Your Household). Hence, there are a total of ten attribute values for each treatment (five for each of the two options). Based on focus groups and pretests, and guided by realistic ranges of attribute outcomes, EPA allows for three different potential levels for Commercial Fish Populations, Fish Populations (all fish), Fish Saved per Year, and Condition of Aquatic Ecosystems, and for six different levels of annual Household Cost for the regional or national choice questions. The number of levels for each attribute may be summarized as follows:

  • Commercial Fish PopulationsA, Commercial Fish PopulationsB (3 levels)

  • Fish Populations (all fish)A, Fish Populations (all fish)B (3 levels)

  • Fish Saved per YearA, Fish Saved per YearB (3 levels)

  • Condition of Aquatic EcosystemsA, Condition of Aquatic EcosystemsB (3 levels)

  • CostA, CostB (6 levels)


Beyond the levels specified above, each question will include a “no policy” option, characterized by baseline levels for each attribute including a household cost of $0.

Following standard practice, EPA constrained the design somewhat in response to findings in the seven focus groups and the prior literature. For example, the focus groups showed that respondents react negatively, and often protest, when offered choices in which one option dominates the other in all attributes. Given that such choices provide negligible statistical information compared to choices involving non-dominated pairs, they are typically avoided in choice experiment statistical designs. For example, Hensher and Barnard (1990) recommend eliminating choice pairs that include dominating or dominated profiles, because such pairs generally provide no useful information. Following this guidance, EPA constrained the design to eliminate dominated/dominating pairs. EPA also constrained the design to eliminate pairs in which, looking across the two options, one option offers both a greater reduction in fish losses and a smaller increase in fish populations. The elimination of such nonsensical (or non-credible) pairs is common practice, and is done to avoid protest bids and confusion among respondents (Bateman et al. 2002).

The resulting experimental design is characterized by 72 unique A vs. B option pairs, where attribute levels for option A and B differ across each of the pairs. Each pair represents a unique choice modeling question—with a unique set of attribute levels distinguishing options A and B. Following standard practice for mail surveys, these questions will be randomly assigned to survey respondents, with each respondent considering three questions.
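The screening of dominated and non-credible pairs described above can be illustrated as follows; the level counts (three levels per environmental attribute, six cost levels) follow the design, while the names and the coding of the screening rules are assumptions:

```python
# Enumerate candidate option profiles and drop A/B pairs in which one
# option dominates the other, or in which one option saves more fish per
# year yet shows a smaller population increase ("nonsensical" pairs). A
# fractional factorial step would then select an orthogonal subset; the
# final design retains 72 unique pairs.
from itertools import product

LEVELS = {"commercial": 3, "all_fish": 3, "fish_saved": 3,
          "ecosystem": 3, "cost": 6}
KEYS = list(LEVELS)
profiles = list(product(*(range(n) for n in LEVELS.values())))

def dominates(a, b):
    """a dominates b: at least as good on every attribute (higher
    environmental levels, lower cost) and strictly better on one."""
    at_least = all((x >= y) if k != "cost" else (x <= y)
                   for k, x, y in zip(KEYS, a, b))
    strictly = any((x > y) if k != "cost" else (x < y)
                   for k, x, y in zip(KEYS, a, b))
    return at_least and strictly

def nonsensical(a, b):
    i, j = KEYS.index("fish_saved"), KEYS.index("all_fish")
    return (a[i] - b[i]) * (a[j] - b[j]) < 0   # opposite directions

pairs = [(a, b) for a in profiles for b in profiles
         if a != b and not dominates(a, b) and not dominates(b, a)
         and not nonsensical(a, b)]
print(len(pairs), "credible candidate pairs")
```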


Information Provision

According to Arrow et al. (1993, p. 4605), if “surveys are to elicit useful information about willingness to pay, respondents must understand exactly what it is they are being asked to value.” It is also well known that the provided information can influence WTP estimates derived from stated preference survey instruments and that respondents must be provided with sufficient information to make an informed assessment of policy impacts on utility (e.g., Bergstrom and Stoll 1989; Bergstrom et al. 1989; Hoehn and Randall 2002). As stated clearly by Bateman et al. (2002, p. 122), “[d]escribing the good and the policy context of interest may require a combination of textual information, photographs, drawings, maps, charts and graphs. …[V]isual aids are helpful ways of conveying complex information…while simultaneously enhancing respondents’ attention and interest.” Given that many respondents may not be fully familiar with the details of programs to reduce I&E mortality losses and potential impacts on aquatic life, the survey will include introductory figures to aid respondents’ comprehension of the goods and policies addressed by the survey instrument, and to encourage appropriate neoclassical tradeoffs in responding to choice experiment questions.

Following this guidance of Bateman et al. (2002) and prior examples of Opaluch et al. (1993) and Johnston et al. (2002a), among others, EPA extensively pretested all graphics used in the draft mail survey, to ensure that these graphical elements were not prejudicial, and that they did not bias responses. Graphics judged to be prejudicial or confusing to respondents during the seven focus groups and cognitive interviews were revised or replaced. EPA acknowledges that certain types of graphics can be prejudicial in certain contexts—and hence all graphical elements were pretested extensively. EPA found that focus group respondents endorsed the use of graphics in the survey booklet and indicated that they helped them to visualize how fish are entrained and impinged, as well as technological solutions, facility locations, and ecosystem effects. Participants made such statements as, “Yeah, I’d rather have them” and “I like on page 2 the graph and illustration because the adage a picture is worth a thousand words”. EPA also emphasizes that there is no precedent or support in the literature for the total elimination of graphics in survey instruments. To the contrary, the literature explicitly indicates that pictures and graphics may be necessary and useful components of survey instruments in many cases (Bateman et al. 2002). EPA highlights that numerous peer-reviewed surveys described in the literature include pictures and graphics both in survey instruments and in introductory materials such as slide shows. For example, see Horne et al. (2005), Ready et al. (1995), Powe and Bateman (2004), Duke and Ilvento (2004), Opaluch et al. (1993), Johnston et al. (1999, 2002a, 2002b), and Mazzotta et al. (2002). Bateman et al. (2002) also includes examples of various types of survey materials including pictures and graphical elements.


Amelioration of Hypothetical Bias

EPA considers the amelioration of hypothetical bias to be a paramount concern in survey design. However, the agency acknowledges—based on prior evidence from the literature—that hypothetical bias is not unavoidable. For example, not all research finds evidence of hypothetical bias in stated preference valuation (Champ and Bishop 2001; Smith and Mansfield 1998; Vossler and Kerkvliet 2003; Johannesson 1997), and some research shows that hypothetical bias may be ameliorated using cheap talk, certainty adjustments, or other mechanisms (Champ et al. 1997; Champ et al. 2004; Cummings and Taylor 1999; Loomis et al. 1996).

To obtain reliable estimates of WTP, the Agency tested and designed all survey elements to promote incentive compatible preference elicitation mechanisms. Incentive compatible stated preference surveys provide no incentive for non-truthful preference revelation (Carson and Groves 2007). The literature is clear regarding the importance of incentive compatibility in stated preference value elicitation and the role of both question format and scenario consequentiality in ensuring this property (Carson et al. 2000; Carson and Groves 2007; Collins and Vossler 2009; Herriges et al. 2010; Johnston 2006; Vossler and Evans 2009). It has been established that referendum-type stated preference choices are incentive compatible given that certain conditions are met, including the condition that responses are believed by respondents to be consequential, or potentially influencing public policy decisions (Carson and Groves 2007; Herriges et al. 2010).

The survey is explicitly designed to emphasize the importance of the budget constraint and program cost. For example, the survey asks respondents to compare protecting aquatic ecosystems to other policy issues for which the government could potentially ask households to pay. The survey itself includes explicit reminders of program cost and the budget constraint.

The survey has also been explicitly designed to maximize the consequentiality of choice experiment questions, thereby maximizing incentive compatibility (i.e., reducing strategic and hypothetical biases), following clear guidance of Carson et al. (2000). Elements specifically designed to maximize consequentiality include: a) explicitly mentioning that this survey is associated with assessment of proposed policies that are being considered, b) numerous details provided in the survey concerning specifics of the proposed policies, and c) emphasis that the type of policy enacted will depend in part on survey results and that their vote is important. Johnston and Joglekar (2005) show the capacity of such information to eliminate hypothetical bias in choice-based stated preference WTP estimation.

Focus groups and cognitive interviews indicated that respondents viewed choices as consequential, that they considered their budget constraints when responding to all questions, and that they would answer the same way were similar questions asked in a binding referendum. When asked if they thought about the program cost in the same way as “money coming out of their pocket,” the vast majority of focus group and interview respondents indicated that they treated program costs the same way that they would have if there were actual money consequences. For example, respondents made statements such as “No. [My vote] would have been the same actually” and “If I believed that it was gonna affect regulations, I think I would have voted the exact same way.”

EPA does not anticipate significant hypothetical bias in the proposed survey based on focus group results. Focus group respondents took the survey questions seriously and indicated that they thought their choices would actually influence policy. Regarding the potential use of cheap talk mechanisms or other devices to further address the potential for hypothetical bias, the Agency emphasizes that the literature is mixed as to their performance. For example, the seminal work of Cummings and Taylor (1999) shows that cheap talk can reduce hypothetical bias, and similar results are shown by Aadland and Caplan (2003). However, other authors (e.g., Cummings et al. 1995; List 2001; Brown et al. 2003) find that a cheap talk script is effective only under certain circumstances and for certain types of respondents. For example, Cummings et al. (1995) find that a relatively short cheap talk script actually worsens hypothetical bias, while a longer script appears to ameliorate bias. Brown et al. (2003) find cheap talk effective only at higher bid amounts, a result mirrored by Murphy et al. (2004). Still other authors, including Poe et al. (2002), find no effect of cheap talk. Given these clearly mixed experiences, EPA is not convinced that cheap talk scripts, although they appear to reduce bias in a limited set of circumstances, are likely to provide a panacea for hypothetical bias in the present case; cheap talk is therefore not included in the survey.


Amelioration of Symbolic Biases and Warm-Glow Effects

Following the clear guidance of Arrow et al. (1993) and others, EPA has taken repeated steps to ensure that survey responses reflect the value of the affected fish resources only, and do not reflect symbolic or warm-glow concerns (Mitchell and Carson 1989). Following the explicit guidance of the NOAA Blue Ribbon Panel on Contingent Valuation (Arrow et al. 1993, p. 4609), EPA has designed all elements of the survey to “deflect the general ‘warm glow’ of giving or the dislike of ‘big business’ away from the specific program that is being valued.” This was done in a variety of ways, based on prior examples in the literature, such as asking respondents to reflect on the importance of attributes before making selections and using a payment vehicle that does not raise trust issues (a cost of living increase rather than an electric bill increase). The focus group and cognitive interview results indicated that most participants answered the choice questions based on the effects discussed in the survey, not on a desire to help the environment in general.

The survey includes clear language instructing respondents to consider only the specific attributes presented in the survey, and not to base answers on broader environmental concerns, including the statement:

“Scientists expect that effects on the environment and economy not shown explicitly will be small. For example, studies of industry suggest that effects on employment will be close to zero.”

This is also consistent with the statement from Arrow et al. (1993) that a referendum-type format may limit the warm-glow effect. Some focus group participants indicated that they were inclined to support environmental causes and would like “to do a good thing” but still considered the cost and effects under the policy options. For example, respondents stated, “[…] if we can do something to help as long as the price is right, then do it” and “I feel if it’s going to be benefit everyone and be better for the economy, I’m OK with paying a little bit more.”

This evidence notwithstanding, EPA believes that it is important to include follow-up questions to ensure that responses do not reflect symbolic biases. Question 9 in the survey instrument—which addresses the rationale for choice responses given earlier in the draft survey—explicitly tests for the presence of symbolic or warm-glow biases. Follow-up questions such as these are common in stated preference survey instruments, to assess the underlying reasons for the observed valuation responses (e.g., Mitchell and Carson 1984).


Assessing Scope Sensitivity

The NOAA Blue Ribbon Panel on Contingent Valuation (CV) (Arrow et al. 1993, p. 4605) states clearly that if “surveys are to elicit useful information about willingness to pay, respondents must understand exactly what it is they are being asked to value (or vote upon)…” The Panel further indicates that surveys providing “sketchy details” about the results of proposed policies call “into question the estimates derived therefrom,” and hence suggests a high degree of detail and richness in the descriptions of scenarios. Similar guidance is provided by other key sources in the CV literature (e.g., Mitchell and Carson 1989; Louviere et al. 2000). Among the reasons for this guidance is that such descriptions tend to encourage appropriate framing and sensitivity to scope.

Following Arrow et al. (1993), Mitchell and Carson (1989), and others, while noting the limitations of scope tests discussed by Heberlein et al. (2005), EPA believes that it is important that survey responses in this case show sensitivity to scope. This is one of the primary reasons for the use of the choice experiment methodology, which is better able to capture WTP differentials related to changes in resource scope (Bateman et al. 2002). EPA emphasizes that, unlike open-ended questions, for which scope insensitivity is a primary concern, choice experiments have generally shown much less difficulty with respondents reacting appropriately to the scope and scale of resource changes. Moreover, as noted by Bennett and Blamey (2001, p. 231), “internal scope tests are automatically available from the results of a [choice modeling] exercise.” That is, within choice experiments, sensitivity to scope is indicated by the statistical significance and sign of parameter estimates associated with program attributes (Bennett and Blamey 2001). Internal scope sensitivity will therefore be assessed through model results for the variables Commercial Fish Populations, Fish Populations (all fish), Fish Saved per Year, and Condition of Aquatic Ecosystems. Statistical significance of these variables, along with a positive sign, indicates that respondents, on average, are more likely to choose plans offering larger quantities of these attributes.
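To make the internal test concrete, the following minimal sketch (in Python) simulates responses to a stylized three-alternative choice experiment, fits a conditional logit model by maximum likelihood, and reads scope sensitivity off the signs and z-statistics of the attribute coefficients. This is an illustrative sketch only, not the Agency’s estimation code; the attribute names, simulated data, and coefficient values are hypothetical.

    # Minimal sketch of an internal scope test: fit a conditional logit to
    # simulated choice data, then check the sign and significance of each
    # attribute coefficient. All names and values are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N, J, K = 500, 3, 5                    # choice occasions, alternatives, attributes
    true_beta = np.array([0.8, 0.5, 0.6, 0.7, -0.10])  # resource attributes (+), cost (-)

    # X[i, j, k] = level of attribute k for alternative j in occasion i;
    # alternative 2 is the "neither" option, with all attributes at baseline.
    X = rng.uniform(0, 3, size=(N, J, K))
    X[:, 2, :] = 0.0

    # Simulate choices under the conditional logit model (Gumbel errors).
    U = X @ true_beta + rng.gumbel(size=(N, J))
    y = U.argmax(axis=1)

    def neg_loglik(beta):
        V = X @ beta                                  # systematic utilities, shape (N, J)
        V = V - V.max(axis=1, keepdims=True)          # stabilize the exponentials
        logp = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
        return -logp[np.arange(N), y].sum()

    res = minimize(neg_loglik, np.zeros(K), method="BFGS")
    se = np.sqrt(np.diag(res.hess_inv))               # approximate standard errors

    names = ["FishSaved", "CommFishPop", "FishPopAll", "EcoCondition", "Cost"]
    for name, b, s in zip(names, res.x, se):
        print(f"{name:>12s}: beta = {b:+.3f}, z = {b / s:+.2f}")
    # Internal scope sensitivity: positive, statistically significant
    # coefficients on the resource attributes, and a negative cost coefficient.

A production analysis would rely on specialized estimation software such as NLOGIT (Greene 2002) rather than a hand-coded likelihood; the logic of the sign and significance checks, however, is the same.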

In addition to the internal scope tests implicit in all choice experiment statistical analysis, EPA will also conduct external scope tests (cf. Giraud et al. 1999). The primary difference between internal and external tests is that the former assess sensitivity to scope across the choices of a single respondent, while the latter involve split-sample assessments across different respondents. Within a choice modeling context, external scope tests are generally considered “stronger,” although they are also more likely to be confounded by differences in the implied choice frame (Bennett and Blamey 2001). A variety of options for external scope tests exist, depending on the structure of the stated choice questions under consideration.

In the present case, attribute-by-attribute external scope tests will be conducted over split sub-samples of respondents considering a specific set of choices, with all attributes held constant across the considered choices except the scope of the attribute for which the test is conducted. For example, to conduct an external scope test for reductions in annual fish losses, one would present a set of choices that is identical across two respondent groups, except that one group considers a choice with a greater reduction in fish losses. Assessing the choices over this split sample allows for an external test of scope. To illustrate this test, consider the following stylized choice between Option A and Option B. The generic labels “Level 0,” “Level 1,” and “Level 2” denote attribute levels, where for all attributes Level 2 > Level 1 > Level 0.


Table B3: Illustration of an External Scope Test

Variable                                      | Option A                                                   | Option B
----------------------------------------------|------------------------------------------------------------|-------------------------------------
Fish Saved per Year                           | Sample 1: Fish Saved Level 1; Sample 2: Fish Saved Level 2 | Fish Saved Level 0
Commercial Fish Populations                   | Commercial Fish Populations Level 0                        | Commercial Fish Populations Level 0
Fish Populations (all fish)                   | Population (all fish) Level 0                              | Population (all fish) Level 0
Condition of Aquatic Ecosystems               | Aquatic Ecosystem Condition Level 0                        | Aquatic Ecosystem Condition Level 0
Increase in Cost of Living for Your Household | Cost Level 1                                               | Cost Level 0


In the above example, only Fish Saved per Year and Cost vary across the choice options. Because both Fish Saved per Year (at Level 1 or Level 2) and Cost are higher in Option A than in Option B, neither option is dominant. In the illustrated split-sample test, respondent sample 1 views the choice with Fish Saved per Year at Level 1, while respondent sample 2 views an otherwise identical choice with Fish Saved per Year at Level 2, where Level 2 > Level 1. If responses are externally sensitive to scope in Fish Saved per Year, a greater proportion of sample 2 respondents than sample 1 respondents will choose Option A. This hypothesis may be assessed using a simple test of equal proportions across the two sub-samples, providing an attribute-by-attribute test of external scope. Analogous tests may be conducted for all attributes within the choice experiment design, using parallel methods. EPA emphasizes that the formal applicability of this scope test is contingent upon the specific choice frame implied by the levels of the other attributes in the choice question; this is a characteristic of nearly all external scope tests applied in choice experiment frameworks (Bennett and Blamey 2001).
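As an illustration, the following minimal sketch (in Python) applies a standard two-sample test of equal proportions, via the statsmodels library, to the share of respondents choosing Option A in each sub-sample. The counts and sample sizes are invented for illustration.

    # Sketch of the attribute-by-attribute external scope test described
    # above. Sample 2 saw the larger Fish Saved level, so external scope
    # sensitivity predicts a higher share choosing Option A in sample 2.
    from statsmodels.stats.proportion import proportions_ztest

    count = [210, 262]   # hypothetical Option A choices in samples 1 and 2
    nobs = [400, 400]    # hypothetical sub-sample sizes

    stat, pval = proportions_ztest(count, nobs, alternative="smaller")
    print(f"z = {stat:.2f}, one-sided p = {pval:.4f}")
    # A small p-value rejects equal proportions in favor of a larger
    # Option A share in sample 2, i.e., external sensitivity to scope.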

Split-sample tests such as those proposed above often require the addition of question versions to the experimental design, to accommodate the specific structural needs of the attribute-by-attribute external scope test. Otherwise, confounding effects of other varying attributes (including demographic information) can render the results of scope tests ambiguous. In the present case, the proposed tests would require the addition of up to six unique question versions to the experimental design, enabling scope tests for the three non-cost attributes within the 316(b) choice experiment scenarios. If scope tests in additional question frames are desired (e.g., the same scope test illustrated above, but given Level 1 for the commercial fish populations, fish populations (all fish), and aquatic ecosystem condition attributes), still more question versions would be needed. While a small number of questions added to the experimental design should have minimal impact on overall efficiency (e.g., orthogonality of the design), larger numbers may have a more significant impact. Hence, given constraints on the total number of survey respondents, there is a potential empirical tradeoff between the number of external scope tests that may be conducted and the efficiency of the experimental design and statistical analysis.
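The following sketch illustrates this tradeoff under simplified assumptions: it draws a fractional design from a full factorial over the non-cost attributes, appends six hypothetical scope-test versions of the kind described above, and compares the largest absolute inter-attribute correlation (a rough proxy for departures from orthogonality) before and after. An actual design would be constructed and evaluated with dedicated tools, such as the SAS procedures described by Kuhfeld (2009).

    # Sketch of the design-efficiency tradeoff: appending structured
    # scope-test versions to a design can change inter-attribute
    # correlations. Attribute coding and appended rows are hypothetical.
    import itertools
    import numpy as np

    levels = [0, 1, 2]                                          # Level 0 < 1 < 2
    full = np.array(list(itertools.product(levels, repeat=4)))  # full factorial, 81 rows

    rng = np.random.default_rng(1)
    design = full[rng.choice(len(full), size=24, replace=False)]  # fractional design

    def max_abs_corr(d):
        c = np.corrcoef(d, rowvar=False)                        # attribute correlations
        return np.abs(c[np.triu_indices_from(c, k=1)]).max()

    print("base design, max |corr|:", round(max_abs_corr(design), 3))

    # Scope-test versions: everything at Level 0 except the tested attribute,
    # shown at Level 1 to one sub-sample and Level 2 to the other.
    scope_rows = np.array([[1, 0, 0, 0], [2, 0, 0, 0],
                           [0, 1, 0, 0], [0, 2, 0, 0],
                           [0, 0, 1, 0], [0, 0, 2, 0]])
    augmented = np.vstack([design, scope_rows])
    print("augmented design, max |corr|:", round(max_abs_corr(augmented), 3))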


Communicating Uncertainty to Respondents

EPA believes that the role of risk and uncertainty is an important issue to be addressed in the development of benefits estimates, and points out that the literature provides numerous examples of cases in which appropriate survey design, including focus groups, was used to successfully address such concerns. For example, as stated by Desvousges et al. (1984), “using contingent valuation to estimate the benefits of hazardous waste management regulations requires detailed information on how and the extent to which respondents understand risk (or probability) and how government regulatory actions might change it… Using focus groups helped make this determination…” EPA also emphasizes that all regulatory analyses involve uncertainty of some type (Boardman et al. 2001).

The ecological outcome of I&E reductions is subject to considerable uncertainty. EPA believes that it is important that survey respondents be aware of this uncertainty, and that their responses reflect the knowledge that the resource changes reflected in the survey are scientific estimates. However, EPA is also aware of the clear advice from the choice modeling literature (e.g., Bennett and Blamey 2001; Louviere et al. 2000) to avoid cognitive burden on respondents. Hence, the proposed survey materials clearly indicate the uncertainty involved with the described resource changes in choice modeling scenarios, yet do so in a way designed to minimize cognitive burden.

For example, prior to answering choice experiment questions, respondents are told:

“Although scientists can predict the number of fish saved each year, the effect on fish populations is uncertain. This is because scientists do not know the total number of fish in Northeast waters and because many factors – such as cooling water use, fishing, pollution and water temperature – affect fish.”

This statement clearly indicates the uncertainty involved with scientific estimates of the outcomes of I&E regulations. This is followed by a further reminder of uncertainty:

“Depending on the type of technology required and other factors, effects on fish and ecosystems may be different – even if the annual reduction in fish losses is similar.”

Focus group and cognitive interview participants understood that the ecological changes described in the survey were uncertain, and most participants were comfortable making decisions in the presence of this uncertainty. Their responses indicated that they understood this uncertainty based on the information presented in the introductory material and considered it when evaluating policy options. Respondents made such statements as: “My guess is that it did come from studies but I have a healthy dose of skepticism about the accuracy of it. I don’t think it’s been in any way skewed purposefully, but I know that this is a best guess, reasonable guess perhaps”, “it shows me that they are being honest for the most part. You know, you can't obviously be accurate on everything, but this is a kind of a best guess”, “[…] They had more numbers on the commercial fish population. The rest was more of a guesstimate”, and “you don’t know the exact number and nobody knows.”

In previous focus groups conducted for the Phase III survey, EPA tested alternative versions of the instrument in which choice experiment attributes were presented as 90% confidence ranges rather than as point estimates. Focus group respondents were explicitly asked whether the ranges helped them understand the uncertainty of the estimates presented in the choice question or whether they were a source of confusion. Seven of the eight respondents interviewed on that occasion indicated that the use of ranges was more confusing than the use of point estimates. Respondents were nonetheless comfortable making decisions in the presence of this uncertainty.


5(c) Reporting Results


The results of the survey will be made public as part of the benefits analysis for the 316(b) regulation for existing facilities. The information provided will include summary statistics for the survey data, extensive documentation of the statistical analysis, and a detailed description of the final results. The survey data will be released only after they have been thoroughly vetted to ensure that all potentially identifying information has been removed.

REFERENCES


Aadland, D., and A.J. Caplan. 2003. “Willingness to Pay for Curbside Recycling with Detection and Mitigation of Hypothetical Bias.” American Journal of Agricultural Economics 85(2): 492-502.

Adamowicz, W., P. Boxall, M. Williams, and J. Louviere. 1998. “Stated Preference Approaches for Measuring Passive Use Values: Choice Experiments and Contingent Valuation.” American Journal of Agricultural Economics 80(1): 64-75.

Akter, S., R. Brouwer, L. Brander, and P. Van Beukering. 2009. “Respondent Uncertainty in a Contingent Market for Carbon Offsets.” Ecological Economics 68(6): 1858-1863.

Arrow, K., R. Solow, E. Leamer, P. Portney, R. Radner, and H. Schuman. 1993. “Report of the NOAA Panel on Contingent Valuation.” Federal Register 58(10): 4602-4614.

Bateman, I.J., R.T. Carson, B. Day, M. Hanemann, N. Hanley, T. Hett, M. Jones-Lee, G. Loomes, S. Mourato, E. Ozdemiroglu, D.W. Pearce, R. Sugden, and J. Swanson. 2002. Economic Valuation with Stated Preference Surveys: A Manual. Northampton, MA: Edward Elgar.

Bennett, J. and R. Blamey, eds. 2001. The Choice Modelling Approach to Environmental Valuation. Northampton, MA: Edward Elgar.

Bergstrom, J.C. and J.R. Stoll. 1989. “Application of experimental economics concepts and precepts to CVM field survey procedures.” Western Journal of Agricultural Economics 14(1): 98-109.

Bergstrom, J.C., J.R. Stoll, and A. Randall. 1989. “Information effects in contingent markets.” American Journal of Agricultural Economics 71(3): 685-691.

Besedin, E., R. Johnston, M. Ranson, and J. Ahlen (Abt Associates Inc.). 2005. “Findings from 2005 Focus Groups Conducted Under EPA ICR #2155.01.” Memo to Erik Helm, U.S. EPA/OW, October 18, 2005. See docket for EPA ICR #2155.02.

Blamey, R.K., J.W. Bennett, and M.D. Morrison. 1999. “Yea-saying in Contingent Valuation Surveys.” Land Economics 75: 126-141.

Boardman, A.E., D.H. Greenberg, A.R. Vining, and D.L. Weimer. 2001. Cost-Benefit Analysis: Concepts and Practice, 2nd edition. Upper Saddle River, NJ: Prentice Hall.

Boyle, K.J. 2003. “Contingent valuation in practice.” In A Primer on Nonmarket Valuation. Edited by P.A. Champ, K.J. Boyle, and T.C. Brown, Kluwer Academic Publishers.

Brown, T. C., I. Ajzen, and D. Hrubes. 2003. “Further Tests of Entreaties to Avoid Hypothetical Bias in Referendum Contingent Valuation.” Journal of Environmental Economics and Management 46(2): 353-361.

Bunch, D.S., and R.R. Batsell. 1989. “A Monte Carlo Comparison of Estimators for the Multinomial Logit Model.” Journal of Marketing Research 26: 56-68.

Cameron, T.A., and D.D. Huppert. 1989. “OLS versus ML Estimation of Non-market Resource Values with Payment Card Interval Data.” Journal of Environmental Economics and Management 17: 230-246.

Carson, R.T., and T. Groves. 2007. “Incentives and informational properties of preference questions.” Environmental and Resource Economics 37(1): 181-210.

Carson, R.T., T. Groves, and M.J. Machina. 2000. “Incentive and Informational Properties of Preference Questions.” Working Paper, Department of Economics, University of California, San Diego.

Champ P.A., and R.C. Bishop. 2001. “Donation Payment Mechanisms and Contingent Valuation: An Empirical Study of Hypothetical Bias.” Environmental and Resource Economics 19(4): 383-402.

Champ, P.A., R.C. Bishop, T.C. Brown, and D.W. McCollum. 1997. “Using Donation Mechanisms to Value Non-use Benefits from Public Goods.” Journal of Environmental Economics and Management 33(2): 151-162.

Champ, P.A., R. Moore, and R. C. Bishop. 2004. “Hypothetical Bias: The Mitigating Effects of Certainty Questions and Cheap Talk.” Selected paper prepared for presentation at the American Agricultural Economics Association Annual Meeting, Denver, Colorado.

Champ, P.A., R. Moore, and R.C. Bishop. 2009. “A Comparison of Approaches to Mitigate Hypothetical Bias.” Agricultural and Resource Economics Review 38(2): 166-180.

Collins, J.P., and C.A. Vossler. 2009. “Incentive compatibility tests of choice experiment value elicitation questions.” Journal of Environmental Economics and Management 58(2): 226-235.

Croke, K., R.G. Fabian, and G. Brenniman. 1986. “Estimating the Value of Improved Water Quality in an Urban River System.” Journal of Environmental Systems 16(1): 13-24.

Cronin, F.J. 1982. “Valuing Nonmarket Goods Through Contingent Markets.” Pacific Northwest Laboratory, PNL 4255, Richland, WA.

Cummings, R.G., and G.W. Harrison. 1995. “The Measurement and Decomposition of Non-use Values: A Critical Review.” Environmental and Resource Economics 5: 225-247.

Cummings, R. G., G.W. Harrison, and L.L. Osborne. 1995. “Can the Bias of Contingent Valuation Surveys Be Reduced?” Economics working paper, Columbia, SC: Division of Research, College of Business Administration, Univ. of South Carolina.

Cummings, R.G., P.T. Ganderton, and T. McGuckin. 1994. “Substitution Effects in CVM Values.” American Journal of Agricultural Economics 76(2): 205-214.

Cummings, R.G., and L.O. Taylor. 1999. “Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method.” American Economic Review 89(3): 649-665.

Desvousges, W.H., and V.K Smith. 1988. “Focus Groups and Risk Communication: the Science of Listening to Data.” Risk Analysis 8: 479-484.

Desvousges, W.H., V.K. Smith, D.H. Brown, and D.K. Pate. 1984. “The Role of Focus Groups in Designing a Contingent Valuation Survey to Measure the Benefits of Hazardous Waste Management Regulations.” Research Triangle Institute: Research Triangle Park, NC.

Dillman, D.A. 2008. Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley and Sons.

Duke, J.M., and T.W. Ilvento. 2004. “A Conjoint Analysis of Public Preferences for Agricultural Land Preservation.” Agricultural and Resource Economics Review 33(2): 209-219.

Entergy Corp. v. Riverkeeper Inc., 129 S. Ct. 1498, 1505 (2009)

Freeman, A.M., III. 2003. The Measurement of Environmental and Resource Values: Theory and Methods. Washington, DC: Resources for the Future.

Giraud, K.L., J.B. Loomis, and R.L. Johnson. 1999. “Internal and external scope in willingness-to-pay estimates for threatened and endangered wildlife.” Journal of Environmental Management 56: 221-229.

Greene, W.H. 2002. NLOGIT Version 3.0 Reference Guide. Plainview, NY: Econometric Software, Inc.

Greene, W.H. 2003. Econometric Analysis. 5th ed., Prentice Hall, Upper Saddle River, NJ.

Haab, T.C., and K.E. McConnell. 2002. Valuing Environmental and Natural Resources: The Econometrics of Non-market Valuation. Cheltenham, UK: Edward Elgar.

Hanemann, W.M. 1984. “Welfare Evaluations in Contingent Valuation Experiments with Discrete Responses.” American Journal of Agricultural Economics 66(3): 332-41.

Hanemann, W.M., and B. Kanninen. 1999. “The Statistical Analysis of Discrete-Response CV Data.” In Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EU, and Developing Countries. Edited by I.J. Bateman and K.G. Willis, Oxford University Press, Oxford, UK.

Hanley, N., S. Colombo, D. Tinch, A. Black, and A. Aftab. 2006a. “Estimating the benefits of water quality improvements under the Water Framework Directive: are benefits transferable?” European Review of Agricultural Economics 33(3): 391-413.

Hanley, N., R. E. Wright, and B. Alvarez-Farizo. 2006b. “Estimating the Economic Value of Improvements in River Ecology using Choice Experiments: An Application to the Water Framework Directive.” Journal of Environmental Management 78(2):183-193.

Heberlein, T.A., M.A. Wilson, R.C. Bishop, and N.C. Schaeffer. 2005. “Rethinking the Scope Test as a Criterion in Contingent Valuation.” Journal of Environmental Economics and Management 50(1): 1-22.

Hensher, D.A., and P.O. Barnard. 1990. “The Orthogonality Issue in Stated Choice Designs.” In Spatial Choices and Processes. Edited by M. Fischer, P. Nijkamp, and Y. Papageorgiou. Amsterdam: North-Holland, 265-278.

Herriges, J., C. Kling, C.-C. Liu, and J. Tobias. 2010. “What are the consequences of consequentiality?” Journal of Environmental Economics and Management 59(1): 67-81.

Hoehn, J. P. 1991. “Valuing the Multidimensional Impacts of Environmental Policy: Theory and Methods.” American Journal of Agricultural Economics 73(2): 289-299.

Hoehn, J.P., F. Lupi, and M.D. Kaplowitz. 2004. Internet-Based Stated Choice Experiments in Ecosystem Mitigation: Methods to Control Decision Heuristics and Biases. In Proceedings of Valuation of Ecological Benefits: Improving the Science Behind Policy Decisions, a workshop sponsored by the US EPA National Center for Environmental Economics and the National Center for Environmental Research.

Hoehn, J.P., and A. Randall. 2002. “The Effect of Resource Quality Information on Resource Injury Perceptions and Contingent Values.” Resource and Energy Economics 24: 13-31.

Horne, P., P.C. Boxall, and W.L. Adamowicz. 2005. “Multiple-use management of forest recreation sites: a spatially explicit choice experiment.” Forest Ecology and Management 207(1/2): 189-99.

Johannesson, M. 1997. “Some Further Experimental Results on Hypothetical Versus Real Willingness to Pay.” Applied Economics Letters 4: 535-536.

Johnston, R.J., E.T. Schultz, K. Segerson, and E.Y. Besedin. 2010. “Bioindicator-Based Stated Preference Valuation for Aquatic Habitat and Ecosystem Service Restoration.” In International Handbook on Non-Marketed Environmental Valuation. Edited by J. Bennett. Cheltenham, UK: Edward Elgar, forthcoming.

Johnston, R.J. 2006. “Is Hypothetical Bias Universal? Validating Contingent Valuation Responses Using a Binding Public Referendum.” Journal of Environmental Economics and Management 52(1):469-481.

Johnston, R.J., and D.P. Joglekar. 2005. “Validating Hypothetical Surveys Using Binding Public Referenda: Implications for Stated Preference Valuation.” American Agricultural Economics Association (AAEA) Annual Meeting, Providence, July 24-27.

Johnston, R.J., J.J. Opaluch, M.J. Mazzotta, and G. Magnusson. 2005. “Who Are Resource Non-users and What Can They Tell Us About Non-use Values? Decomposing User and Non-user Willingness to Pay for Coastal Wetland Restoration.” Water Resources Research 41(7), doi:10.1029/2004WR003766.

Johnston, R.J., E.Y. Besedin, and R.F. Wardwell. 2003a. “Modeling Relationships Between Use and Non-use Values for Surface Water Quality: A Meta-Analysis.” Water Resources Research 39(12): 1363.

Johnston, R.J., S.K. Swallow, T.J. Tyrrell, and D.M. Bauer. 2003b. “Rural Amenity Values and Length of Residency.” American Journal of Agricultural Economics 85(4): 1000-1015.

Johnston, R.J., G. Magnusson, M. Mazzotta, and J.J.Opaluch. 2002a. “Combining Economic and Ecological Indicators to Prioritize Salt Marsh Restoration Actions.” American Journal of Agricultural Economics 84(5): 1362-1370.

Johnston, R.J., S.K. Swallow, C.W. Allen, and L.A. Smith. 2002b. “Designing Multidimensional Environmental Programs: Assessing Tradeoffs and Substitution in Watershed Management Plans.” Water Resources Research 38(7): IV1-13.

Johnston, R.J., S.K. Swallow, and T.F. Weaver. 1999. “Estimating Willingness to Pay and Resource Trade-offs With Different Payment Mechanisms: An Evaluation of a Funding Guarantee for Watershed Management.” Journal of Environmental Economics and Management 38(1): 97-120.

Johnston, R.J., T.F. Weaver, L.A. Smith, and S.K. Swallow. 1995. “Contingent Valuation Focus Groups: Insights From Ethnographic Interview Techniques.” Agricultural and Resource Economics Review 24(1): 56-69.

Just, R.E., D.L. Hueth, and A. Schmitz. 2004. The Welfare Economics of Public Policy: A Practical Approach to Project and Policy Evaluation. Edward Elgar Publishing, Cheltenham, UK and Northampton, MA.

Kaplowitz, M.D., F. Lupi, and J.P. Hoehn. 2004. “Multiple Methods for Developing and Evaluating a Stated-Choice Questionnaire to Value Wetlands.” Chapter 24 in Methods for Testing and Evaluating Survey Questionnaires. Edited by S. Presser, J.M. Rothgeb, M.P. Couper, J.T. Lessler, E. Martin, J. Martin, and E. Singer. New York: John Wiley and Sons.

Kobayashi, M., K. Rollins, and M.D.R. Evans. 2010. “Sensitivity of WTP Estimates to Definition of 'Yes': Reinterpreting Expressed Response Intensity.” Agricultural and Resource Economics Review 39(1): 37-55.

Kuhfeld, W.F. 2009. “Experimental Design: Efficiency, Coding and Choice Designs.” SAS Institute. http://support.sas.com/techsup/tnote/tnote_stat.html#market.

Layton, D.F. 2000. “Random coefficient models for stated preference surveys.” Journal of Environmental Economics and Management 40(1): 21-36.

List, J.A. 2001. “Do Explicit Warnings Eliminate the Hypothetical Bias in Elicitation Procedures? Evidence from Field Auctions for Sportscards.” American Economic Review 91(5): 1498-1507.

Loomis, J., T. Brown, B. Lucero, and G. Peterson. 1996. “Improving Validity Experiments of Contingent Valuation Methods: Results of Efforts to Reduce the Disparity of Hypothetical and Actual Willingness to Pay.” Land Economics 72(4): 450-461.

Louviere, J.J., D.A. Hensher, and J.D. Swait. 2000. Stated Preference Methods: Analysis and Application. Cambridge, UK: Cambridge University Press.

Maddala, G.S. 1983. “Limited-Dependent and Qualitative Variables in Econometrics.” Econometric Society Monographs No. 3, Cambridge University Press, Cambridge.

Mazzotta, M.J., J.J. Opaluch, G. Magnusson, and R.J. Johnston. 2002. “Setting Priorities for Coastal Wetland Restoration: A GIS-Based Tool That Combines Expert Assessments and Public Values.” Earth System Monitor 12(3): 1-6.

McConnell, K.E. 1990. “Models for Referendum Data: The Structure of Discrete Choice Models for Contingent Valuation.” Journal of Environmental Economics and Management 18(1): 19-34.

McFadden, D., and K. Train. 2000. “Mixed Multinomial Logit Models for Discrete Responses.” Journal of Applied Econometrics 15(5): 447-470.

Mitchell, R.C., and R.T. Carson. 1981. An Experiment in Determining Willingness to Pay for National Water Quality Improvements. Preliminary draft of a report to the U.S. Environmental Protection Agency. Resources for the Future, Inc., Washington.

Mitchell, R.C., and R.T. Carson. 1984. A Contingent Valuation Estimate of National Freshwater Benefits: Technical Report to the U.S. Environmental Protection Agency. Washington, DC: Resources for the Future.

Mitchell, R.C., and R.T. Carson. 1989. Using Surveys to Value Public Goods: The Contingent Valuation Method. Resources for the Future, Washington, D.C.

Morrison, M., and J. Bennett. 2004. Valuing New South Wales rivers for use in benefit transfer. Australian Journal Of Agricultural And Resource Economics 48(4): 591-611.

Murphy, J.J., T. Stevens, and D. Weatherhead. 2004. “Is Cheap Talk Effective at Eliminating Hypothetical Bias in a Provision Point?” Working Paper No. 2003-2. Department of Resource Economics, University of Massachusetts, Amherst.

Olsen, D., J. Richards, and R.D. Scott. 1991. “Existence and Sport Values for Doubling the Size of Columbia River Basin Salmon and Steelhead Runs.” Rivers 2(1): 44-56.

Opaluch, J.J., T.A. Grigalunas, M. Mazzotta, R.J. Johnston, and J. Diamantedes. 1999. Recreational and Resource Economic Values for the Peconic Estuary. Prepared for the Peconic Estuary Program. Peace Dale, RI: Economic Analysis Inc. 124 pp.

Opaluch, J.J., S.K. Swallow, T. Weaver, C. Wessells, and D. Wichelns. 1993. “Evaluating impacts from noxious facilities: Including public preferences in current siting mechanisms.” Journal of Environmental Economics and Management 24(1): 41-59.

Poe, G. L., J.E. Clark, D. Rondeau, and W.D. Schulze. 2002. “Provision Point Mechanisms and Field Validity Tests of Contingent Valuation.” Environmental and Resource Economics 23: 105-131.

Poe, G.L., M.P. Welsh, and P.A. Champ. 1997. “Measuring the Difference in Mean Willingness to Pay when Dichotomous Choice Contingent Valuation Responses are not Independent.” Land Economics 73(2): 255-267.

Powe, N.A. 2007. Redesigning Environmental Valuation: Mixing Methods within Stated Preference Techniques. Cheltenham, UK: Edward Elgar.

Powe, N.A., and I.J. Bateman. 2004. “Investigating Insensitivity to Scope: A Split-Sample Test of Perceived Scheme Realism.” Land Economics 80(2): 258-271.

Ready, R.C., P.A. Champ, and J.L. Lawton. 2010. “Using Respondent Uncertainty to Mitigate Hypothetical Bias in a Stated Choice Experiment.” Land Economics 86(2): 363-381.

Ready, R.C., J.C. Whitehead, and G.C. Blomquist. 1995. “Contingent Valuation When Respondents are Ambivalent.” Journal of Environmental Economics and Management 29(2): 181-196.

Schkade, D.A., and J.W. Payne. 1994. “How People Respond to Contingent Valuation Questions: A Verbal Protocol Analysis of Willingness to Pay for an Environmental Regulation.” Journal of Environmental Economics and Management 26: 88-109.

Smith, V.K., and C. Mansfield. 1998. “Buying Time: Real and Hypothetical Offers.” Journal of Environmental Economics and Management 36: 209-224.

Train, K. 1998. “Recreation Demand Models with Taste Differences Over People.” Land Economics 74(2): 230-239.

U.S. Department of Labor, Bureau of Labor Statistics. 2009. Table 1: Civilian workers, by major occupational and industry group. September 2009. http://www.bls.gov/news.release/ecec.t01.htm.

U.S. EPA. 2000. Guidelines for Preparing Economic Analyses. (EPA 240-R-00-003). U.S. EPA, Office of the Administrator, Washington, DC, September 2000.

U.S. EPA. 2006. Peer Review Handbook 3rd Edition. (EPA 100-B-06-002). U.S. EPA, Science Policy Council, Washington, DC, 2006.

Versar. 2006. Comments Summary Report: Peer Review Package for "Willingness to Pay Survey Instrument for §316(b) Phase III Cooling Water Intake Structures.” Prepared by Versar Inc., Springfield, VA.

Vossler, C.A., and M.F. Evans. 2009. “Bridging the gap between the field and the lab: Environmental goods, policy maker input, and consequentiality.” Journal of Environmental Economics and Management 58(3):338-345.

Vossler, C.A., and J. Kerkvliet. 2003. “A Criterion Validity Test of the Contingent Valuation Method: Comparing Hypothetical and Actual Voting Behavior for a Public Referendum.” Journal of Environmental Economics and Management 45(3): 631-649.

Whitehead, J.C., G.C. Blomquist, T.J. Hoban, and W.B. Clifford. 1995. “Assessing the Validity and Reliability of Contingent Values: A Comparison of On Site Users, Off Site Users, and Non-users.” Journal of Environmental Economics and Management 29(2): 238-251.

Whitehead, J.C., and P.A. Groothuis. 1992. “Economic Benefits of Improved Water Quality: a Case Study of North Carolina's Tar Pamlico River.” Rivers 3: 170-178.


1 The environmental attributes to be compared against the cost of living increases were designed based on the Johnston et al. (2009) Bioindicator-Based Stated Preference Valuation (BSPV) method, which was developed to promote ecological clarity and closer integration of ecological and economic information within SP studies. In contrast to traditional SP valuation, BSPV employs a more structured and formal use of ecological indicators to characterize and communicate welfare-relevant changes. It begins with a formal basis in ecological science and extends to relationships between attributes in respondents’ preference functions and those used to characterize policy outcomes. Specific BSPV guidelines ensure that survey scenarios and resulting welfare estimates are characterized by: (1) a formal basis in established and measurable ecological indicators, (2) a clear structure linking these indicators to attributes influencing individuals’ well-being, (3) consistent and meaningful interpretation of ecological information, and (4) a consequent ability to link welfare measures to measurable and unambiguous policy outcomes. The welfare measures provided by the BSPV method can be unambiguously linked to models and indicators of ecosystem function, are based on measurable ecological outcomes, and are more easily incorporated into benefit-cost analysis. This methodology was developed in part to address the EPA Science Advisory Board’s call for improved quantitative linkages between ecological services and economic valuation of those services.

2 Actual response rates could vary across study regions.

3 EPA plans to complete focus group testing of the instrument before the second Federal Register notice of this information collection request. The inclusion of all four attributes is an important aspect of focus group testing. If focus groups find this number of attributes cognitively challenging, the number will be reduced; if randomly selected focus group participants exhibit minimal cognitive difficulty, all four attributes will remain.
