Willingness To Pay Survey for Santa Cruz River Management Options in Southern Arizona (New)

OMB: 2080-0080


Part B of Supporting Statement; ICR 2484.01


1. Survey Objectives, Key Variables, And Other Preliminaries

(a) Survey Objectives


The survey is being proposed by the EPA Office of Research and Development, and is not associated with any regulatory ruling of EPA. As noted earlier, the primary reason for the proposed survey is exploratory research. Study design decisions were therefore made from the perspective of making research contributions, rather than conducting a definitive benefits analysis for management purposes.


The objectives of the survey are bulleted below:

  • To estimate values for changing the extent of the flow mileage and associated forest vegetation acreage along the effluent-dominated Santa Cruz River.

  • To estimate values for full body contact recreation (e.g., submersion) in the effluent-dominated Santa Cruz, as a change from partial body contact recreation (e.g., wading).

  • To compare estimated values for changing a recreation-oriented attribute with values for changing the extent of the wet river ecosystem.

  • To provide a case study for estimating values for modifying river attributes of a waterway highly impacted by urban processes.

  • To compare estimated values for changing attributes of two different reaches of the Santa Cruz River, the South and the North. The South has more forest acres per mile of river flow, but is farther from the population centers sampled.

  • To compare estimated values between two population centers, the Phoenix metro area (which is relatively far away from the Santa Cruz River), and the Tucson metro area (which is relatively close to the Santa Cruz River).

  • To learn about river-related recreation habits of the sample, and how these habits as well as sociodemographic characteristics influence values for the changes in the attributes.

(b) Key Variables


The survey asks respondents whether they would choose a permanent tax increase for their household in exchange for changes in Santa Cruz River attributes. The key variables, including how the variables are described in the survey, were developed from focus group and interview research with residents of southern Arizona (see pretests section below). The variables are meant to be ecological commodities of direct relevance to respondents. This approach is conceptually described by Boyd and Banzhaf (2007), Boyd and Krupnick (2013), Ringold et al. (2009), and Ringold et al. (2013).


The elicitation methodology is a choice experiment, a commonly used format for valuing ecological goods (Louviere et al., 2000; Champ et al., 2003; Freeman, 2003). Respondents will choose one of three options for each voting question. The first two options are generically labeled “Option A” and “Option B”. These options change from question to question and between survey versions (see experimental design section below). The third option is an “Expected Future” that is constant across choice questions and survey versions. The Expected Future establishes the future baseline. The key variables appearing in Options A, B, and the Expected Future are:


North Santa Cruz Flow and Forest: This is a bundled variable with two numeric values for each level, specific to the North Santa Cruz. One number shows miles of surface flow, and another shows acres of cottonwood/willow forest associated with that flow. The survey considers three possible levels of this variable. The future baseline level is termed the “Expected Future”. Two options for maintaining larger extents of flow and forest are posed, the larger of which is the same as the current condition.


North Santa Cruz Full Body Contact: This is a binary “Yes” or “No” variable specific to the surface flows in the North Santa Cruz. A “Yes” means the surface flow would be considered safe for full body contact at normal flow levels, including submersion. A “No” means the surface flow would be considered safe only for partial body contact (i.e., wading) at normal flow levels. The Expected Future and Current Condition are both “No”, but the survey poses the possibility of a “Yes”.


South Santa Cruz Flow and Forest: This is a bundled variable with two numeric values for each level, specific to the South Santa Cruz. One number shows miles of surface flow, and another shows acres of cottonwood/willow forest associated with that flow. The survey considers three possible levels of this variable. The future baseline level is termed the “Expected Future”. Two options for maintaining larger extents of flow and forest are posed, the larger of which is the same as the current condition.


South Santa Cruz Full Body Contact: This is a binary “Yes” or “No” variable specific to the surface flows in the South Santa Cruz. A “Yes” means the surface flow would be considered safe for full body contact at normal flow levels, including submersion. A “No” means the surface flow would be considered safe only for partial body contact (i.e., wading) at normal flow levels. The Expected Future and Current Condition are both “No”, but the survey poses the possibility of a “Yes”.


Tax increase per year: Each change from the Expected Future of reduced flows and cottonwood/willow acreages in the North and South has an associated cost. Cost currently varies across 6 levels, from $0 for the Expected Future to as high as $60 (subject to change based on pilot survey results). These cost levels are not tied to actual cost estimates for the changes, but rather are designed to bracket values for the sample. That is, the design goal is to set cost levels such that some respondents agree to them and some do not. The pilot survey will be used to test whether the cost levels posed in the survey should be revised upwards or downwards.


(c) Statistical Approach


A statistical approach relying on a sample of households will be used rather than a census approach. A census approach would be prohibitively costly, and standard choice experiment methods are available for the sample approach (Louviere et al., 2000; Bateman et al., 2002; Kuhfeld, 2010; Greene, 2007). Two urban populations will be sampled, the Phoenix and Tucson metropolitan areas. A random sample of households in each urban population will be selected and specifically recruited by mail to be a part of this research study. This is preferred to an approach where respondents self-select into the sample, since self-selection may reflect unusual interest in the survey topic and lead to a higher opportunity for bias in the results as compared with a randomly selected sample.


The work in developing the survey was conducted by EPA ORD with the assistance of contractor support (The Henne Group, 116 New Montgomery Street, Suite 812, San Francisco, CA  94105) under the separate ICR# 2090-0028. The contractor recruited and remunerated focus group participants, as well as individual survey pretest participants, and provided transcripts of meetings. The work involved in preparing the survey for mailing, the mailing itself, data management and analysis, and report write-up, will be conducted by EPA ORD.


(d) Feasibility


The basis for the survey has been extensively researched. An initial phase of focus groups was aimed at identifying priority ecological commodities important to the public to include in the survey. Successive versions of the survey were then pretested in a series of focus groups and interviews as summarized in the pretests section. Several sources describe the importance of pretesting willingness to pay surveys, including Louviere et al. (2000), Mitchell and Carson (1989), Johnston et al. (1995), and Hoehn et al. (2003). The survey was iteratively updated during pretesting to reduce respondent cognitive burden, correct confusing questions, reduce the potential for scenario rejection, and reduce bias by taking measures to clearly describe the ecological goods and changes being valued. Information in the survey was refined specifically to address respondent questions (e.g. Banzhaf et al., 2006), such as information on how much water the river ecosystem uses. Because of these steps, EPA ORD believes the survey has a strong best-practice foundation for eliciting the desired choice information.


The principal investigator has previous experience conducting a choice experiment mail survey of similar scope, and has research funding to cover the costs associated with the survey. The practicalities of institutional channels needed for successful implementation have been researched, such as bulk mail permitting and government printing office assistance.


2. Survey Design


(a) Target Population And Coverage


The target population is households of the Phoenix and Tucson metropolitan areas in southern Arizona. The Phoenix and Tucson metropolitan areas each represent a sampling stratum. Sampling will be such that each household in a given stratum has an equal probability of being selected to receive a survey. The metropolitan areas will be defined using the delineations adopted by OMB for the Metropolitan Statistical Areas of Phoenix-Mesa-Glendale and Tucson, respectively. These two metropolitan areas were selected as the target population because together they represent about 80% of the population of Arizona, they are both in southern Arizona (the same region as the resources described in the survey), and they vary markedly in distance to those resources.


(b) Sampling Design

(i) Sampling Frame


The sampling frame is the United States Postal Service Computerized Delivery Sequence File (DSF). The sample universe for the DSF is defined to be residential household addresses, matching the needs of the survey mode as a mail survey. A small set of vendors have paid the USPS for access to the DSF and the right to sell samples from it. A sample from one of these vendors will be procured such that each household in each stratum has an equal probability of being selected to receive a survey.


(ii) Sample Size


The selection of sample size is a tradeoff between precision requirements and cost, including the cost associated with public burden. Furthermore, the precision of willingness to pay estimates depends on the results obtained, and cannot be known with certainty beforehand. This survey design utilizes the rule of thumb available from the developers of Sawtooth Software (Orme, 1998), software popular for designing choice sets. This formula was also recently utilized by NOAA (OMB # 0648-0585). The rule of thumb formula for a minimum sample size is:


(n × t × a) / c ≥ 500


Where:

n = minimum number of respondents

t = number of choice questions

a = number of alternatives per task (not including the “status quo” option)

c = number of “analysis cells.”


When considering main effects, c is equal to the largest number of levels for any single attribute. If considering all two-way interactions, c is equal to the largest product of levels of any two attributes (Orme, 1998). The sample size will be based on a main effects model. There are 4 planned choice questions for each survey, with two options each (not counting the “Expected Future”), and the maximum number of levels for any single attribute is 3. Thus the minimum sample size n is 188 for each population to be sampled. The target number of respondents for each of the Phoenix and Tucson metro areas is 250. This value is larger than the 200 minimum suggested when the intent is to compare subgroups (Orme, 1998, pg. 67). As further rationale, Bateman et al. (2002; pg. 110) recommend a sample size of 500 to 1,000 (for each subgroup) for close-ended contingent valuation questions, but note a smaller sample size can be used if one collects more information per respondent (and in this choice experiment there are 4 expected replications per respondent). With an expected response rate of 30%, approximately 834 households in each metro area will need to be successfully reached in order to reach the target of 250 respondents. To allow for ineligible addresses, a mailing list of 1,000 for each stratum will be purchased.
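A minimal sketch of the rule-of-thumb calculation (Python, for illustration only):

```python
import math

def orme_min_n(t, a, c, threshold=500):
    """Orme (1998) rule of thumb: smallest n with (n * t * a) / c >= threshold."""
    return math.ceil(threshold * c / (t * a))

# This design: t = 4 choice questions, a = 2 non-baseline options per question,
# c = 3 (largest number of levels of any single attribute, main effects model)
print(orme_min_n(t=4, a=2, c=3))  # 188
```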


(iii) Stratification Variables


The Phoenix and Tucson metro areas will be treated as different populations, that is, different strata. An equal sample size of 834 successfully delivered surveys, with an expected 250 returned surveys, is being proposed for both strata. This will mean a higher sampling rate in Tucson, since the Tucson metro area has fewer households than the Phoenix metro area. This stratification is designed to compare willingness to pay estimates for an urban area relatively close to the Santa Cruz River (Tucson) with estimates for an urban area relatively far from the Santa Cruz River (Phoenix).


(iv) Sampling Method


When purchasing the DSF sample, the vendor will be given instructions to prepare the sample such that for each stratum, each household has an equal chance of being chosen (a simple random sample approach). The DSF sample purchased needs to account for the fact that only a subset of those successfully contacted will respond. Furthermore, some addresses will be found to be ineligible. In planning the mailing list size we follow a prior USEPA ICR submission (OMB # 2010-0043) in expecting a 30% response rate (Helm, 2012; Mansfield et al., 2012; Johnston et al., 2012). Furthermore, USEPA (OMB # 2010-0043) cited Link et al. (2008) in expecting that 92% of the sampled addresses would be eligible. For each stratum, a sample size of 907 will be needed to get 834 eligible addresses. If 30% of those households respond, the returned survey targets will be met. To provide an additional margin in case more ineligible addresses are encountered than planned, a mailing list of 1,000 will be purchased for each stratum. The arithmetic is sketched below.
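A quick check of the mailing list arithmetic (Python, using the response and eligibility rates cited above):

```python
import math

target_respondents = 250
response_rate = 0.30     # expected response rate (OMB # 2010-0043; Helm, 2012)
eligibility_rate = 0.92  # expected eligible share of addresses (Link et al., 2008)

eligible_needed = math.ceil(target_respondents / response_rate)    # 834
addresses_needed = math.ceil(eligible_needed / eligibility_rate)   # 907
purchase_size = 1000  # per-stratum purchase, adding margin for extra ineligibles

print(eligible_needed, addresses_needed, purchase_size)  # 834 907 1000
```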


While each household in the stratum will have an equal probability of being selected (within the accuracy limits of the DSF), response rates can vary systematically based on factors such as stratum or sociodemographic characteristics. As is standard practice, responses from household types that are less represented will be weighted more heavily. Furthermore, each stratum has a different total number of households, so a weight is needed to account for this when doing analysis on the combined dataset. There are three potential weights associated with each responding household to account for these issues:


    • A non-response weight based on sociodemographic differences for respondents as compared with Census characteristics for that stratum (a Non-response Weight).

    • A stratum sampling weight which is the inverse of the probability of selection for that household (a Design Weight).

    • A weight for non-response across strata, to account for differences in non-response rates between the two strata (another Non-response Weight).


The second and third weights are important to include when doing analysis on the data as a whole (combining the two strata). When doing such combined analysis the three weights will be combined in multiplicative fashion. When single-stratum analysis is being done, the first weight can be used alone. There is no single approach to calculating the first weight, since there are multiple sociodemographic criteria to match against Census characteristics. The characteristics used for weighting will depend on which sociodemographic characteristics prove significant in preliminary choice modeling results.
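To illustrate the multiplicative combination, below is a minimal sketch; all numbers shown are hypothetical placeholders, since the actual weights will be computed from Census comparisons and realized response rates.

```python
def combined_weight(nonresponse_w, design_w, stratum_nonresponse_w, pooled=True):
    """Combine the three weights described above.

    Pooled (two-stratum) analysis uses all three multiplicatively;
    single-stratum analysis uses only the sociodemographic non-response weight.
    """
    if pooled:
        return nonresponse_w * design_w * stratum_nonresponse_w
    return nonresponse_w

# Hypothetical Tucson household: underrepresented income group (weight 1.4),
# design weight = inverse selection probability for the stratum, and a
# cross-stratum non-response adjustment of 0.95.
design_w = 400_000 / 1_000  # hypothetical stratum households / sampled addresses
print(combined_weight(1.4, design_w, 0.95))  # 532.0
```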


(v) Multi-Stage Sampling


Not applicable for this survey.


(c) Precision Requirements

(i) Precision Targets


The formula for margin of error can be used to calculate sample size for each stratum:


ME = z × [p(1 − p)/n]^(1/2),


where n is the sample size, p is the population proportion (choice probability) to be predicted within the desired margin of error, and z is the z-score for the desired confidence level. Based on the proposed sample size of 250 for a stratum, the margin of error would be 0.062 if the true response probability is 0.5, at a confidence level of 0.95. This is well within the study’s desired margin of error of plus or minus 10% at a confidence level of 0.95. For the desired level of precision, if a true response probability of 0.5 is assumed (a conservative assumption), a minimum sample size of 96 is required; this is less than the 250 returned surveys planned for each metro area. Standard error can also be solved for directly, with margin of error being 1.96 times the standard error. The standard error for each stratum is calculated below under the conservative assumption p = 0.5 and the less conservative assumption p = 0.1. The standard error for the entire sample is calculated by the following formula, assuming a known population proportion.


SE = (1/N) × [ Σh { [Nh³/(Nh − 1)] × (1 − nh/Nh) × p(1 − p)/nh } ]^(1/2)


where N is the total population size, Nh and nh are the population size and sample size for stratum h, and the population proportion p is assumed to be the same for each stratum. Based on these calculations the standard error ranges from 0.016 to 0.032.


Stratum  | MSA Population (2012) | Expected completed surveys | SE (p = 0.5) | SE (p = 0.1) | ME (p = 0.5) | ME (p = 0.1)
Phoenix  | 4,329,534             | 250                        | 0.032        | 0.019        | 0.063        | 0.037
Tucson   | 992,394               | 250                        | 0.032        | 0.019        | 0.063        | 0.037
Overall  | 5,321,928             | 500                        | 0.026        | 0.016        | 0.051        | 0.031
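The table values can be reproduced from the two formulas above; the short sketch below (Python, a checking aid only) does so. Small differences arise where the table's margin of error was computed from the rounded rather than unrounded standard error (e.g., 1.96 × 0.032 ≈ 0.063 in the table, versus 0.062 unrounded).

```python
import math

# MSA populations (2012) and expected completed surveys, from the table above
strata = {"Phoenix": (4_329_534, 250), "Tucson": (992_394, 250)}
N_total = sum(N for N, _ in strata.values())
z = 1.96  # z-score for a 0.95 confidence level

def stratum_se(p, n):
    """Simple random sample standard error for a proportion."""
    return math.sqrt(p * (1 - p) / n)

def overall_se(p):
    """Stratified standard error with finite population correction."""
    total = sum((N**3 / (N - 1)) * (1 - n / N) * p * (1 - p) / n
                for N, n in strata.values())
    return math.sqrt(total) / N_total

for p in (0.5, 0.1):
    se, ose = stratum_se(p, 250), overall_se(p)
    print(f"p={p}: stratum SE={se:.3f} (ME={z*se:.3f}), "
          f"overall SE={ose:.3f} (ME={z*ose:.3f})")
# p=0.5: stratum SE=0.032 (ME=0.062), overall SE=0.026 (ME=0.052)
# p=0.1: stratum SE=0.019 (ME=0.037), overall SE=0.016 (ME=0.031)

# Minimum n for the desired precision (ME = 0.10 at p = 0.5):
print(round(z**2 * 0.25 / 0.10**2))  # 96
```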



(ii) Nonsampling Error


With a target response rate of 30% there will be a large percentage of non-respondents. If the preferences of non-respondents differ markedly from those of respondents, non-response bias will affect the results. A qualitative non-response analysis will be conducted by comparing the sociodemographics of the respondents with the sampling frame. Study results will note the potential for non-response bias. A description of any sociodemographic groups that were less represented in the responses will be included in the results. There are also populations, such as rural populations, that will not be sampled at all. This limitation of the research will be described in the results.


(d) Questionnaire Design


The draft survey is attached as Appendix 1. Note that the page numbers are out of sequence when scrolling through the electronic file, but are arranged so as to correctly print a double-sided booklet. Below is a description of the sections and questions.


PART 1

Background. The cover photos show various states of the Santa Cruz River in both the North and the South, representing both perennial flow and conditions downstream of where perennial flow ends. There are 4 photos in all, which are repeated with further detail on pg 7. Page 2 shows the Santa Cruz River within the landscape of the broader river network in Arizona, for perspective. Page 3 shows a map of where the treated wastewater flows in the South and North reaches are in southern Arizona, gives background on how they came to exist, and introduces why these resources might be relevant to the respondent. In pretests the maps on pages 2 and 3 were found to be crucial, since some participants were not familiar with the location of the Santa Cruz River and very few were aware of the perennial flow reaches.

Page 4 describes the partitioning of treated wastewater releases and the relatively small fraction that is consumptively used by plants or that evaporates in the perennial river ecosystem, with the rest adding to groundwater supplies. This was important to address since there was a common assumption that the wet river ecosystem used a larger fraction of the water; furthermore, water supplies are a prevalent concern in the area, and the extent to which the river recharges the local aquifer was of high interest. Page 5 describes the type of riparian forest that is a topic in the survey, and some of the wildlife found in that type of habitat. The final point lists familiar examples of substitutes for the wet river ecosystem, to make clear that the Santa Cruz River is not a unique example of this type of ecosystem.

Pages 6, 7, 8, and 9 describe the different attributes that will appear in the later choice questions. The “Expected Future” with no intervention is described in contrast to the potential management changes. The Expected Future includes a time horizon of change and explains the expectation of increasing water demands in the region, which would diminish the wet river ecosystem. The first attribute of the management changes deviating from the Expected Future is retaining more of the current-condition flow miles and riparian forest acres in the North or South. Representative photos are shown of the riparian area with and without perennial water, along with the specific numeric changes in flow mileage and forest acres that the survey will consider. Representative photos are shown for both South and North since the vegetation character differs; this difference is also described in the text and numerically in terms of acres of forest. Page 8 summarizes the levels of the North and South flow and forest possibilities. Page 9 describes the second attribute of the management changes, the safety of direct contact with the water in the South and North. The term Full Body Contact is equivalent to “swimmable” water quality, but since the river reaches are typically not deep enough to swim in, the more familiar “swimmable” term could be misleading and was not used. Page 10 is an example vote showing the format of the choice question, and page 11 explains how the attribute levels will be presented.


PART 2

Question 1: The first question on page 12 is designed to initiate the respondent’s thought process of weighing the relative importance of the different attributes within the choice experiment in questions 2 - 5.


Questions 2 through 5: These questions comprise the choice experiment portion of the survey, where respondents choose between different cost levels and different marginal changes in river-related attributes. There is always an opt-out, zero-cost option. Following standard choice experiment techniques, the options (also known as profiles) participants choose between will be a fraction of the theoretically possible combinations of attributes. The questions are designed to be difficult in order to efficiently yield preference information. There will be different survey versions, with 4 questions per survey (also known as “replications”), as an efficient method of allowing a sufficient number of tradeoffs for model estimation. Different survey versions allow “blocking” the still large number of tradeoff questions posed into different groups. These practices save expense and also reduce the sample size and associated public burden. Further choice experiment design considerations are described in the experimental design section below.


PART 3

Questions 6 through 14: These are debriefing questions designed to shed light on motivations for respondents’ answers, and also to test for inconsistencies in their responses. Questions 6 and 7 allow insight as to whether familiarity with the Santa Cruz River influences willingness to pay. Question 8 helps identify “protest bids”, that is, occasions when people choose not to pay for philosophical reasons rather than because the price is too high. Question 11 allows a check on whether respondents feel they would answer the same way in an actual vote, and also serves to reinforce that the survey is not an actual binding vote. Recreation behavior questions 12, 13, and 14 allow insight into recreational preferences related to the resources being considered. Recreational data may help predict willingness to pay. They also portray the frequency of use and relative importance of different types of recreational uses.


PART 4

Questions 15 through 24: These are sociodemographic questions that allow comparing the respondents with population data available from the US Census. Since some discrepancy is expected, each household will be assigned a non-response weight to account for ways in which the sample differs from the population. For example, if certain income categories are underrepresented in the sample, the responses that were obtained are given a relatively higher weight. Sociodemographic variables are also potentially useful predictors of willingness to pay, and are frequently used in choice modeling. Finally, respondents found to be under the 18-year age threshold will be dropped from the sample.


PART 5

The back cover thanks the respondent for their time and response, as is standard. There is an entire page remaining for comments. These comments can be useful in further identifying protest bids or other strategic behavior in responses.


3. PRETESTS AND PILOT TESTS

Pretests


The survey content and format have undergone extensive pretesting. Three phases of qualitative survey development occurred under a different ICR (ICR # 2090-0028). The first phase of survey development did not present study participants with a survey; instead, the intention of those research sessions was to identify the most relevant variables upon which to pursue follow-up quantitative valuation survey research. The first phase began with in-depth interviews in October 2011 with a convenience sample of 12 neighborhood presidents in Tucson. The neighborhood presidents were not previously known to EPA ORD. They were recruited based on their neighborhoods being near the Santa Cruz River. Financial incentives to participate in the study were not available at that time, but direct recruitment by EPA ORD was effective in obtaining these interviewees. Neighborhood presidents were expected to be more representative of local opinions than the pool of environmental researchers working in the region with whom EPA ORD was also in discussion. Indeed, markedly different input was obtained; for example, it was found that few neighborhood presidents near the Santa Cruz River were aware of the perennial flow reaches that exist. These initial interviews were used to prepare ideas and moderation techniques for upcoming focus groups. In the spring of 2012, there were 10 focus groups in southern Arizona, with 8 in Tucson, 1 in Rio Rico, and 1 in Tubac. Focus group participants were recruited from the general public by a market research contractor using standard market research methods, including paying participants an incentive fee as compensation for the opportunity cost of their time. At this stage participants were not asked to take or react to a survey, but instead were asked to help ORD identify attributes important to them about southern Arizona rivers and streams, as well as about the Santa Cruz River in particular.


The information from the above qualitative research was used to develop a first draft of a survey focusing on the North Santa Cruz, the reach nearest to southern Arizona population centers. This version was pretested in the fall of 2012. Cognitive interviews were conducted with 17 persons, all recruited from the general population of Tucson by a market research contractor, again paying incentive fees. The survey draft included the key variables identified by the earlier qualitative research; however, the list of attributes (6) seemed to be near the limit of what respondents could effectively consider. In addition, two attributes that both dealt with forest acreage were often confused: one was based on naturally occurring forest acreage, and the other increased forest acreage with the aid of drip irrigation. At the conclusion of the pretest, participants were asked to comment on the relative appeal of preserving the North Santa Cruz River versus the South Santa Cruz River, based on a map of their respective locations and representative photos of the two areas (the same photos shown on page 7 of the proposed survey). Some participants preferred maintaining the South Santa Cruz location despite it being farther away from Tucson.


Based on these pretests it was decided to narrow the scope of attribute types, and to feature both the North and South so as to include two different effluent-dominated sections of the same river. In revising the survey, the principal investigator requested and received updated natural science modeling of the relationship between surface water and forest acreage for the North and South Santa Cruz River (personal communication, J. Stromberg, May 2013). The survey was reformatted into a booklet format adapted from a prior EPA survey (OMB # 2020-0283). In the spring of 2013, there were an additional 2 focus groups and 9 cognitive interviews. One focus group and 3 of the interviews were with people living in the Phoenix area; the other focus group and the remaining interviews were with persons living in the Tucson area. The draft survey instrument is attached as Appendix 1 (note that the page numbers are out of sequence in the electronic file; they are sequenced so that they will print correctly double-sided).


The final round of pretests verified the higher comprehension achieved by narrowing the attributes into the two categories of flow and forest, and safety of water contact. The current draft reflects several further edits that were made based on insights gained during the final round of pretests. An important change was revising the description of full contact vs. partial contact recreation. It was originally posed as “Safe” for contact recreation or “Unsafe” for contact recreation. In the early sessions it was found that people frequently felt compelled to vote for “Safe”, presuming there to be a significant public health hazard if the water was left “Unsafe”. The labeling in the choice experiment matrix was thus changed to the current “Yes” vs. “No” language, along with a fuller description of the attribute, which helped participants gain a more accurate grasp of the issue. Another change was inclusion of more background describing the “Expected Future” and the competing options. In particular, the revisions emphasized the difference between North and South forest, complementing the photo visual aids. In addition, color coding of the attributes that remain at “Expected Future” levels was used to make differences between choice questions clear at a glance; previously, many participants described confusion that the series of choice questions appeared to be the same.


The final round of pretests continued to show, as did previous pretests, strong public interest in preserving surface flow of rivers, even if the source is treated wastewater rather than natural flow. Different preferences for the various attributes were noted: some respondents looked for the greatest forest acreage change per dollar; others strongly preferred preserving river and forest resources closer to their residence. In addition, there seemed to be a split between those who preferred to have the water be safe enough to swim in and those who were indifferent to this attribute. Furthermore, some respondents indicated that it would be difficult for them to feel safe having contact with the water knowing that it was treated wastewater; for others this was not an issue.



Pilot Test


A pilot survey will be mailed to a subset of the Phoenix and Tucson samples. This will not represent an additional burden, but will instead be a fraction, approximately 10%, of the total mailing for both metro areas. This will allow for the possibility of adjusting the survey for any problems that may surface in this initial wave of survey returns before committing to the full mailing. Following is a list of attention items for the pilot test:

    • The beta parameter vector (see econometric specification section below) may be revised based on analysis of the results; in particular the cost levels may need to be adjusted to efficiently bracket values.

    • If respondents were to select randomly, each of the three options would be selected approximately 1/3 of the time. If selection of “Expected Future” drops below 15% to 20% overall, cost levels may need to be increased.

    • Some idea of non-response and associated bias will be gained, as well as of any differential non-response between the two urban areas. It may be possible to revise survey language or cover letter language to address this.

    • Comments in the margins or on the back cover may uncover confusing or poorly worded questions.


4. COLLECTION METHODS AND FOLLOW-UP.

(a) Collection Methods


A mail survey collection method is selected due to its frequent and successful use in the choice experiment literature. The mail survey mode also has a relatively low cost, does not have interviewer effects (although survey text may still bias respondents), and easily includes visual aids (Champ et al., 2003: chp 3).


(b) Survey Response And Follow-up


A five-contact method will be used (Dillman, 2000; Dillman et al., 2009, pg. 242). Multiple contacts, with specified timing and a varied look from one contact to the next, are a method of increasing response rates. The five contacts planned for this study are listed below:


    • A prenotice letter sent a few days ahead of the survey

    • A survey with a cover letter explaining why a response is important, along with a postage-paid return envelope

    • A thank you/reminder postcard sent a few days to one week after the prior contact.

    • A follow-up cover letter and replacement survey sent 2 to 4 weeks after the first survey mailing, urging a response.

    • A final reminder letter sent 2 to 4 weeks after the prior letter, listing a way to receive a replacement survey if it has been misplaced.


The text and layout of the five contacts are attached as Appendix 5. Those who have responded will be tracked in a spreadsheet to ensure follow-up mailings are only sent to non-respondents.


5. ANALYZING AND REPORTING SURVEY RESULTS

(a) Data Preparation


All data entry will be conducted by the Principal Investigator. After all data have been entered once, a sample of the surveys will be checked for data entry consistency. Debriefing question responses or other hand-written responses that indicate confusion regarding voting question responses will be flagged, and these data will not be used to estimate the choice model. Written comments that indicate “protest” responses will not be used to estimate the choice model. Responses from persons less than 18 years of age, as indicated by the ‘what year were you born’ question, will not be used. After all data have been entered they will be coded into the format needed for statistical analysis. The choice model software in particular may require a specific coding protocol.


(b) Analysis


The starting point for choice model analysis will be a standard multinomial logit model based in random utility theory, as described by Ben-Akiva and Lerman (1985). To summarize their exposition, let U = utility of household (well-being). Consider U to be a function of a vector zin of attributes for alternative i, as perceived by household respondent n. The variation of preferences between individuals is partially explained by a vector Sn of sociodemographic characteristics for person n.


Uin = V(zin, Sn) + ε(zin, Sn) = Vin + εin


The “V” term is known as indirect utility and “ε” is an error term treated as a random variable (McFadden 1974), making utility itself a random variable. An individual is assumed to choose the option that maximizes their utility. The choice probability of any particular option (Expected Future, Option A, or Option B) is the probability that the utility of that option is greatest across the choice set Cn:


P(i | Cn) = Pr[Vin + εin ≥ Vjn + εjn, for all j ∈ Cn, j ≠ i]


If error terms are assumed to be independently and identically distributed, and if this distribution can be assumed to be Gumbel, the above can be expressed in terms of the logistic distribution:

Pn(i) = exp(μVin) / Σj∈Cn exp(μVjn)

The summation occurs over all options j in the choice set Cn. The assumption of independent and identically distributed error terms implies independence of irrelevant alternatives, meaning the ratio of choice probabilities for any two alternatives is unchanged by the addition or removal of other unchosen alternatives (Blamey et al., 2000). The “μ” term is a scale parameter, a convenient value for which may be chosen without affecting valuation results if the marginal utility of income is assumed to be linear. The analyst must specify the deterministic portion of the utility equation “V,” with subvectors z and S. The vector z comes from choice experiment attributes, and the vector S comes from attitudinal, recreational, and sociodemographic questions in the survey. Econometric software will be used to estimate the regression coefficients for z and S, with a linear-in-parameters model specification. These coefficients are used to estimate the average household value for a change from one level of a particular attribute to another. The welfare measure of a change is given by (Holmes & Adamowicz, 2003):


Welfare ($) = (1/βc)[V0 − V1]


where βc is the coefficient on cost, V0 is an initial scenario, and V1 is a change scenario.


The standard multinomial logit model treats the multiple observations (choice experiment replications) from each household as independent. An alternative is to model these as correlated with a random parameters (mixed) logit model. A random parameters logit model will therefore also be tested using techniques described by Greene (2007).
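To make the estimation and welfare steps concrete, below is a minimal sketch of multinomial logit estimation by maximum likelihood on synthetic data. This is an illustration only, not the planned analysis code (dedicated econometric software such as NLOGIT is referenced above); the attribute coding, parameter values, and change scenario are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: 800 choice observations, 3 options each. Attribute columns:
# N flow/forest, N contact, S flow/forest, S contact, cost (coding hypothetical).
rng = np.random.default_rng(0)
n_obs, n_alt, n_attr = 800, 3, 5
X = rng.uniform(0, 1, size=(n_obs, n_alt, n_attr))
X[:, 2, :] = 0.0  # third option = Expected Future baseline (no change, no cost)
beta_true = np.array([1.0, 0.5, 0.8, 0.4, -0.05])
y = (X @ beta_true + rng.gumbel(size=(n_obs, n_alt))).argmax(axis=1)

def neg_log_likelihood(beta):
    v = X @ beta                          # deterministic utilities V_in
    v = v - v.max(axis=1, keepdims=True)  # numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_obs), y].sum()

beta_hat = minimize(neg_log_likelihood, np.zeros(n_attr), method="BFGS").x

# Welfare for a hypothetical change scenario x1 relative to baseline x0:
# Welfare ($) = (1/beta_cost) * (V0 - V1), per Holmes & Adamowicz (2003)
x0 = np.zeros(n_attr)             # Expected Future
x1 = np.array([1.0, 0, 0, 0, 0])  # e.g., a North flow/forest improvement
welfare = (x0 @ beta_hat - x1 @ beta_hat) / beta_hat[-1]
print(beta_hat.round(2), round(welfare, 2))
```

A random parameters specification would replace the fixed beta vector with draws from an assumed mixing distribution; in practice this is done with packages such as NLOGIT rather than hand-coded.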


Econometric Specification


A main effects utility function is hypothesized, and, following common practice, a linear-in-parameters model will be sought. A generic format of the indirect utility function to be modeled is:


V = β0 + β1(North Flow & Forest Change) + β2(North Full Contact Recreation Change) + β3(South Flow & Forest Change) + β4(South Full Contact Recreation Change) + β5(Cost)


Experimental Design


As described in the pretests section, the attributes were identified as important by focus group and interview participants prior to a draft survey being presented to them. The attributes appearing in the survey are a subset of the full list of attributes identified. Limiting the choice experiment to this subset was done to reduce cognitive burden for respondents while maintaining research objectives.

All possible choice profile tradeoffs could be presented to respondents, but this would be an inefficient way to gauge preferences. Instead, “fractional factorial” designs are a standard approach (Louviere et al., 2000). The statistical software package SAS was used to develop an efficient choice experiment design, given a total number of design choice sets to manipulate, as well as a provisional beta vector (Kuhfeld, 2010). Essentially, the software searches for the questions, given the constraint on the number of choice sets, likely to yield the most preference information. The computer-generated design was manually checked for any potentially dominating choices, or profile combinations that may seem unlikely to respondents. The total number of choice sets must be at least as large as the number of parameters to be estimated, and is typically much larger. A total number of choice sets that allows each level to occur an equal number of times is also desirable, for balance (Kuhfeld, 2010: pg. 78).


The proposed design is attached as Appendix 6. The choice experiment design must balance the number of factors to be estimated, the desired precision, and funding. To summarize relevant factors from Part B 1(b) above:


  • North Flow and Forest (3 levels)

  • North Full Body Contact (2 levels)

  • South Flow and Forest (3 levels)

  • South Full Body Contact (2 levels)

  • Cost (6 levels)


A choice experiment size of 72 choice profiles, plus one constant, no-cost, opt-out alternative (the Expected Future), was selected as the smallest number of profiles allowing both orthogonality and balance among the main effects to be estimated. These profiles were then optimally organized into choice sets (individual choice questions), and blocked into 9 survey versions of 4 questions each with SAS software (v 9.3). Louviere et al. (2000), citing Bunch and Batsell (1989), recommend at least 6 respondents per block to satisfy large-sample statistical properties; our expectation of an average of 250/9, or 27.8, responses per block is much higher. This allows a safety margin given that there will be some variation in the number of returned surveys for each block, even though an equal number for each block will be mailed out. Nonetheless, during follow-up mailings to achieve the target overall response rate, it will also be ensured that enough responses are obtained from each of the 9 survey versions. A sketch of the scale involved follows.
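For a sense of scale, the sketch below (Python, illustrative only; the actual design was generated and optimized with SAS per Kuhfeld, 2010) enumerates the full factorial implied by the attribute levels above and checks the level balance of one simple 72-profile fraction. The fraction shown demonstrates balance only; it is not the design used in the survey.

```python
from itertools import product

# Level counts from Part B 1(b): N flow/forest, N contact, S flow/forest,
# S contact, cost
levels = [3, 2, 3, 2, 6]

full_factorial = list(product(*(range(k) for k in levels)))
print(len(full_factorial))  # 216 possible profiles; the design keeps 72

def is_balanced(design):
    """True if each level of each attribute appears equally often."""
    n = len(design)
    return all(
        n % k == 0 and all(
            sum(1 for row in design if row[attr] == lvl) == n // k
            for lvl in range(k)
        )
        for attr, k in enumerate(levels)
    )

# One simple balanced one-third fraction: keep profiles where the two
# flow/forest codes and the cost code sum to 0 modulo 3.
candidate = [r for r in full_factorial if (r[0] + r[2] + r[4]) % 3 == 0]
print(len(candidate), is_balanced(candidate))  # 72 True
```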

The design was manually inspected, and dominating or potentially confusing alternatives within a question or block were adjusted. The choice experiment design is subject to update based on pilot test results.


(c) Reporting Results


The results will be written up and submitted to a peer-reviewed environmental journal. The results will contain summary statistics for the survey data, documentation of the choice experiment analysis, and presentation of final choice models used. The results will note the limitations of the study including the potential for non-response bias.




References


Arizona Department of Environmental Quality. 2010. Draft 2010 Status of Water Quality in Arizona 305(b) Assessment and 303(d) Listing Report. http://www.azdeq.gov/environ/water/assessment/assess.html. Retrieved March, 2013.


Arrow, K., Solow, R., Leamer, E., Portney, P., Radner, R., Schuman, H., 1993. Report of the NOAA panel on contingent valuation. Federal Register 58(10): 4602-14.


Banzhaf, S.; Burtraw, D.; Evans, D.; Krupnick, A. 2006. Valuation of natural resource improvements in the Adirondacks. Land Economics 82: 445–464.


Bateman, I.J., R.T. Carson, B. Day, M. Hanemann, N. Hanley, T. Hett, M. Jones-Lee, G. Loomes, S. Mourato, E. Ozdemiroglu, D.W. Pearce, R. Sugden, and J. Swanson. 2002. Economic Valuation with Stated Preference Techniques: A Manual. Northampton, MA: Edward Elgar.


Ben-Akiva, M., and S. R. Lerman. 1985. Discrete choice analysis. MIT Press, Cambridge, Massachusetts.


Berrens, R. P., A. K. Bohara, C. L. Silva, D. Brookshire, and M. McKee. 2000. Contingent values for New Mexico instream flows with test of scope, group-size reminder and temporal reliability. Journal of Environmental Management 58:73–90.


Blamey, R. K., J. W. Bennett, J. J. Louviere, M. D. Morrison, and J. Rolfe. 2000. A test of policy labels in environmental choice modelling studies. Ecological Economics 32:269–286.


Boyd, J., Banzhaf, S., 2007. What are ecosystem services? The need for standardized environmental accounting units. Ecological Economics 63 (2–3), 616–626.


Boyd, J., and A. Krupnick. 2013. Using Ecological Production Theory to Define and Select Environmental Commodities for Nonmarket Valuation. Agricultural and Resource Economics Review 42(1):1-32.


Brouwer, R. 2000. Environmental value transfer: state of the art and future prospects. Ecological Economics 32(1): 137-152.


Bunch, D.S., and Batsell, R.R. 1989. A Monte Carlo comparison of estimators for the multinomial logit model. Journal of Marketing Research 26: 56-68.


Bureau of Labor Statistics. 2012. http://www.bls.gov/oes/. Retrieved September, 2013.

Champ, P.A., K.J. Boyle, and T.C. Brown. 2003. A Primer on Nonmarket Valuation. Kluwer.


Desvousges, W.H., Naughton, M.C., and G.R. Parsons. 1992. Benefit transfer: conceptual problems in estimating water quality benefits using existing studies. Water Resources Research 28 (3), 675–683.


Dillman, D.A. 2000. Mail and Internet Surveys: The Tailored Design Method. Second Edition. John Wiley & Sons, Inc., N.Y., N.Y.


Dillman, D.A., J.D. Smyth, and L.M. Christian. 2009. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Third Edition. John Wiley & Sons, Inc., Hoboken, N.J.


Frisvold, G., and T.W. Sprouse. 2006. Willingness to Pay for Binational Effluent. Water Sustainability Program. http://wsp.arizona.edu/node/277. Retrieved May, 2012.


Freeman, A.M. III. 2003. The Measurement of Environmental and Resource Values. Second Edition. Resources for the Future, Washington, D.C.


Greene, W.H. 2007. NLOGIT Version 4.0 Reference Guide. Plainview, NY. Econometric Software, Inc.


Helm, E. (2012, June 05). Stated preference (sp) survey – survey methods and model results, memorandum to the section 316(b) existing facilities rule record. Retrieved from http://water.epa.gov/lawsregs/lawsguidance/cwa/316b/upload/316bmemo.pdf


Hoehn, J.P., Lupi, F., Kaplowitz, M.D., July, 2003. Untying a Lancastrian bundle: valuing ecosystems and ecosystem services for wetland mitigation. Journal of Environmental Management 68(3): 263-272.


Holmes, T. P., and W. L. Adamowicz. 2003. Attribute-based methods. Pages 171–220 in P. A. Champ, K. J. Boyle, and T. C. Brown, editors. A primer on nonmarket valuation. Chap. 6. Kluwer Academic Publishers, The Netherlands.


Johnston, R.J., Weaver, T.F., Smith, L.A., Swallow, S.K., April, 1995. Contingent Valuation Focus Groups: Insights from Ethnographic Interview Techniques. Agricultural and Resource Economics Review, 56-68.


Johnston, R.J., Schultz, E.T., Segerson, K., Besedin, E.Y., and Ramachandran, M. 2012. Enhancing the content validity of stated preference valuation: The structure and function of ecological indicators. Land Economics, 88(1), 102-120.


Kaplowitz, M.D., Hoehn, J.P., February, 2001. Do focus groups and individual interviews reveal the same information for natural resource valuation? Ecological Economics 36(2): 237-247.


Kuhfeld, W.F. 2010. Marketing Research Methods in SAS. SAS 9.2 Edition, MR-2010. Available for download at: http://support.sas.com/techsup/technote/mr2010.pdf.


Link, M.W., Battaglia, M.P., Frankel, M.R., Osborn, L., and Mokdad, A. H. 2008. A comparison of address-based sampling (ABS) versus random-digit dialing (RDD) for general population surveys. Public Opinion Quarterly, 72(1), 6-27.


Louviere, J.J., D.A. Hensher, and J.D. Swait. 2000. Stated Choice Methods: Analysis and Application. Cambridge University Press. 402 p.


Mansfield, C., Van Houtven, G., Hendershott, A., Chen, P., Porter, J., Nourani, V., and Kilambi, V. 2012. Klamath River Basin restoration nonuse value survey, Final report. Sacramento, CA: Prepared for the US Bureau of Reclamation. Retrieved from: http://klamathrestoration.gov/sites/klamathrestoration.gov/files/DDDDD.Printable.Klamath%20Nonuse%20Survey%20Final%20Report%202012%5B1%5D.pdf


McFadden, D. 1974. Conditional logit analysis of qualitative choice behavior. Pages 105–142 in P. Zarembka, editor. Frontiers in econometrics. Chap. 4. Academic Press, New York.


Mitchell, R.C., and R.T. Carson. 1989. Using surveys to value public goods: The contingent valuation method. Washington, D.C. Resources for the Future.


Morgan, D.L., and R.A. Krueger. 1998. Focus Group Kit (6 volumes). Sage Publications, Thousand Oaks, CA.


NOAA Office of Habitat Conservation and Office of Response and Restoration, and Stratus Consulting. 2012. Ecosystem Valuation Workshop (binder prepared for workshop participants). Dates: June 6-7, 2012. Location: Asheville, N.C.


Norman, L.M.; N. Tallent-Halsell, W. Labiosa, M. Weber, A. McCoy, K. Hirschboeck, J. Callegary, C. van Riper III, and F. Gray. 2010. Developing an Ecosystem Services Online Decision Support Tool to Assess the Impacts of Climate Change and Urban Growth in the Santa Cruz Watershed; Where We Live, Work, and Play. Sustainability 2(7):2044-2069.


NSF. 2010. Press Release 10-182: NSF Awards Grants for Study of Water Sustainability and Climate. http://www.nsf.gov/news/news_summ.jsp?cntn_id=117819. Retrieved April, 2013.


Orme, B. 1998. Sample Size Issues for Conjoint Analysis Studies. Sawtooth Software Research Paper Series, Sawtooth Software, Inc.


Ringold, P.L., Boyd, J.W., Landers, D., and Weber, M. 2009. Report from the Workshop on Indicators of Final Ecosystem Services for Streams, July 13–16, 2009. EPA/600/R-09/137. 56 p. http://www.epa.gov/nheerl/arm/streameco/index.html


Ringold, P.L., J. Boyd, D. Landers, and M. Weber. 2013. What data should we collect? A framework for identifying indicators of ecosystem contributions to human well-being. Frontiers in Ecology and the Environment 11: 98–105.


Rubin, H.J., Rubin, I.S., 2005. Qualitative Interviewing. 2nd Edition. Sage Publications, Thousand Oaks, CA.


USEPA. 2013a. Research Programs: Science for a Sustainable Future. http://www.epa.gov/ord/research-programs.htm. Retrieved April, 2013.


USEPA. 2013b. Sustainability. http://www.epa.gov/sustainability/. Retrieved April, 2013.


Weber, M., Stewart, S., 2009. Public Valuation of River Restoration Options on the Middle Rio Grande. Restoration Ecology 17(6):762-771.


White, M.S. 2011. Effluent-Dominated Waterways in the Southwestern United States: Advancing Water Policy through Ecological Analysis. Ph.D. Dissertation, Arizona State University. 244 p.

