

Supporting Statement B


Visibility Valuation: Pilot Study


OMB Control Number 1024-0225



Collections of Information Employing Statistical Methods


The agency should be prepared to justify its decision not to use statistical methods in any case where such methods might reduce burden or improve accuracy of results. When the question “Does this ICR contain surveys, censuses, or employ statistical methods?” is checked "Yes," the following documentation should be included in Supporting Statement B to the extent that it applies to the methods proposed:



1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The pilot survey will be administered by the Virginia Tech Center for Survey Research (CSR). Established in 1990, the CSR conducts research design consultation; telephone, mail, and web-based surveys; statistical analyses; and other essential data functions for virtually every area of social research, evaluation, and policy analysis. The CSR has conducted a wide variety of projects with local, state, and federal funding across a number of vital policy content areas.

Target Population: The target population for the survey is the household population in two multi-state regions: one consisting of Utah, Arizona, New Mexico, and Colorado ("Four Corners"), and a second consisting of Delaware, Virginia, West Virginia, Kentucky, Tennessee, North Carolina, South Carolina, Georgia, Alabama, Mississippi, and Florida ("Southeast"). According to the 2010 Census, these regions contained approximately 6 million and 28 million households, respectively. These regions were selected because they represent a range of baseline and improved visibility conditions and because previous visibility valuation research was conducted there, allowing for comparison (e.g., Chestnut and Rowe, 1990; Balson et al., 1990).


Sampling Unit: The sampling unit is residential mailing addresses in the Four Corners and Southeast regions.


Sample Frame: The USPS Computerized Delivery Sequence File (DSF) will be used as the sample frame of residential mailing addresses. The DSF contains all delivery point addresses serviced by the USPS (except general delivery). It therefore offers strong population coverage, although coverage varies by region and is generally lower in rural and lower-income areas. A random sample for the two multi-state regions will be purchased from Survey Sampling International.


A total of 1,600 households will be contacted in each region with an expected 25 percent response rate, yielding 400 complete responses. The estimated response rates (Table 1) are based on results from similar efforts undertaken by members of the study team (examples below) and have been approved as reasonable and conservative by Dr. Susan Willis-Walton, Director of the Virginia Tech CSR. Note that we apply conservative estimates because:


  1. Survey response rates have been declining over time

  2. Our focus group research indicates that the survey topic is not of particular interest to some respondents

  3. We will use the pilot results to calibrate our response rate for the final survey


The household populations of the two regions are sufficiently large that differences in size are not a significant factor in determining sample sizes or the estimated precision of survey responses. Rather, based on previous, similar stated-preference studies conducted by the study team (examples below), the estimated 400 complete responses per region are expected to be sufficient to:

  1. Verify the sign and significance of choice question attributes

  2. Verify the bid distribution

  3. Compare implicit prices across the two regions

  4. Examine behavior of key covariates in validity equations


Examples of previous studies relied upon to estimate response rate and sample size parameters:


  1. Ozdemir and Boyle (2009): Four survey versions, N = 500 for each version, response rates 39 to 44 percent

  2. Holmes et al. (2002) and Holmes and Boyle (2005): Three survey versions, N = ~700 for each version, response rates 42 to 48 percent


A subsample of nonrespondent addresses with matched telephone numbers will be contacted to complete a short follow-up survey. The matched numbers will be generated from reverse listings in phone directories and will also be provided by Survey Sampling International. It is anticipated that approximately 50 percent of nonrespondents can be associated with phone numbers (Dr. Susan Willis-Walton, personal communication); thus, the phone follow-up sample size is 600 per region. A subsample of nonrespondents without matched numbers will be contacted via mail with the same follow-up survey. Here the sample size is 240 per region; given our assumed response rates, this would provide equal numbers of telephone and mail follow-up responses (Table 1). The follow-up survey will consist of a subset of the demographic and benchmarking questions in Section G, as described in Part A of this Supporting Statement. The results of the telephone and mail follow-up surveys will be used to compare respondents and nonrespondents and to adjust for sample selection if necessary, as described further under question 3 below.
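
The expected-response arithmetic behind these sample sizes can be checked directly. A minimal sketch for one region, in Python, using the figures stated above and in Table 1:

    # Expected responses for one region (figures from the text and Table 1).
    contacted = 1600
    respondents = int(contacted * 0.25)               # 400 completed mail surveys
    nonrespondents = contacted - respondents          # 1,200 nonrespondent addresses

    phone_matched = int(nonrespondents * 0.50)        # ~600 addresses with matched phone numbers
    phone_followup_responses = phone_matched * 0.10   # 60 expected phone follow-up responses

    mail_followup_sample = 240
    mail_followup_responses = mail_followup_sample * 0.25   # 60 expected mail follow-up responses

    assert phone_followup_responses == mail_followup_responses == 60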


Table 1. Sample Sizes and Expected Response

Region         Survey                            Respondent Universe   Sample   Estimated       Estimated Final
                                                 (Households)          Size     Response Rate   Responses
--------------------------------------------------------------------------------------------------------------
Four Corners   Random Household Survey (mail)    ~6,000,000            1,600    25%             400
Four Corners   Nonrespondent Follow Up (phone)   1,200                 600      10%             60
Four Corners   Nonrespondent Follow Up (mail)    1,200                 240      25%             60
Southeast      Random Household Survey (mail)    ~28,000,000           1,600    25%             400
Southeast      Nonrespondent Follow Up (phone)   1,200                 600      10%             60
Southeast      Nonrespondent Follow Up (mail)    1,200                 240      25%             60
--------------------------------------------------------------------------------------------------------------
TOTAL Mail responses: 920                        TOTAL Phone Follow Up responses: 120


2. Describe the procedures for the collection of information including:

* Statistical methodology for stratification and sample selection,

* Estimation procedure,

* Degree of accuracy needed for the purpose described in the justification,

* Unusual problems requiring specialized sampling procedures, and

* Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


To estimate values for visibility improvements, we will use the random utility model (RUM; Haab and McConnell, 2002). Under this approach, individual i's utility for a particular visibility program j, which is defined by a set of K attributes, can be expressed as:


Uij = βy(yi − Cj) + Σk βk Xjk + εij ,


where yi is individual i's money income, Cj is the cost of visibility program j, and Xjk is the level of attribute k that is offered in visibility program j.

The βk's are the marginal utilities for each of the K visibility attributes and βy is the marginal utility of money income. Under the RUM specification, and given individuals' stated responses to binary choice questions comparing program j to no program, these parameters can be estimated using the conditional logit model. Once parameter estimates are available, the marginal value of any particular attribute k can be estimated as:

MVk = βk / βy
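
To make the estimation step concrete, the sketch below simulates binary program/no-program responses and fits the model with a logit (for binary choices against a zero-utility status quo, the conditional logit reduces to a standard logit on the offered program's attributes), then forms marginal values as ratios of coefficients. This is an illustrative Python sketch, not the project's estimation code; the attribute values and "true" coefficients are assumptions, and statsmodels is used only for convenience.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2400  # e.g., 400 respondents answering 6 binary choice questions each

    # Attribute levels of the offered program (the "no program" option has cost 0 and no attributes).
    X = np.column_stack([
        rng.choice([15, 35, 65, 115], n),   # cost of the offered program (Cj)
        rng.choice([10, 20], n),            # time for the program to take effect
        rng.integers(0, 2, n),              # health-benefit dummy
        rng.integers(0, 2, n),              # ecological-benefit dummy
    ])

    # Assumed "true" utility weights (cost, time, health, ecol), used only to generate data.
    beta_true = np.array([-0.025, -0.03, 0.7, 1.15])
    y = (X @ beta_true + rng.logistic(size=n) > 0).astype(int)   # 1 = chooses the program

    fit = sm.Logit(y, X).fit(disp=0)
    b_cost, b_time, b_health, b_ecol = fit.params

    # The estimated cost coefficient is -βy, so the marginal value of attribute k is -βk / b_cost.
    print("WTP for the health benefit:", -b_health / b_cost)
    print("WTP for the ecological benefit:", -b_ecol / b_cost)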


An important feature of the pilot study, for modeling and estimation purposes, is that the visibility attributes will be defined in two different ways. This will allow for a great deal of flexibility in ultimately identifying values for different visibility programs. The first approach is to define full visibility programs, which we designate as θ's. These θ's are defined by the percentages of days in a year that will occur at each of the five visibility levels shown in photos A, B, C, D, and E. Every unique set of percentages defined in the survey is represented by a different program dummy variable θ. This allows for direct estimation of the marginal values for each of these programs. A key result of this research will be the estimation of values for specific θ's that lie on the projected visibility improvement paths. The paths are defined (in accordance with the provisions of the Regional Haze Rule) as a linear improvement in the mean of the 20 percent worst visibility days in a year from current to natural conditions by 2064. Improvement paths for the Southeast Region (Great Smokies photographs) and Four Corners Region (Canyonlands photographs) are shown in Tables 2 and 3.
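
As a point of reference, the "Percent" column in Tables 2 and 3 is consistent with a linear path that begins at a 2004 baseline (the Regional Haze Rule baseline period) and reaches natural conditions in 2064. The 2004 baseline year is our reading of the table values rather than something stated explicitly above; a small check in Python:

    # Percent of the way along the linear improvement path, assuming a 2004 baseline year.
    def path_percent(year, baseline=2004, endpoint=2064):
        return round((year - baseline) / (endpoint - baseline), 2)

    for year in (2007, 2019, 2024, 2034, 2044, 2049, 2061, 2064):
        print(year, path_percent(year))   # reproduces the Percent column: 0.05, 0.25, 0.33, ...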


Table 2. Great Smokies Visibility Paths (Southeast Region) - Percent of Days in Year Allocated to Each Photograph

Year    Percent   Photo A   Photo B   Photo C   Photo D   Photo E
2007    0.05      0.19      0.24      0.21      0.22      0.14
2019    0.25      0.33      0.30      0.19      0.14      0.04
2024    0.33      0.43      0.30      0.16      0.09      0.02
2034    0.50      0.64      0.25      0.08      0.03      0.00
2044    0.67      0.84      0.14      0.02      0.00      0.00
2049    0.75      0.91      0.08      0.01      0.00      0.00
2061    0.95      0.99      0.01      0.00      0.00      0.00
2064    1.00      1.00      0.00      0.00      0.00      0.00







Table 3. Canyonlands Visibility Paths (Four Corners Region) - Percent of Days in Year Allocated to Each Photograph

Year    Percent   Photo A   Photo B   Photo C   Photo D   Photo E
2007    0.05      0.16      0.19      0.20      0.27      0.17
2019    0.25      0.23      0.23      0.21      0.23      0.10
2024    0.33      0.28      0.24      0.21      0.20      0.07
2034    0.50      0.37      0.26      0.19      0.15      0.04
2044    0.67      0.48      0.26      0.15      0.09      0.02
2049    0.75      0.54      0.25      0.13      0.07      0.01
2061    0.95      0.68      0.21      0.08      0.03      0.00
2064    1.00      0.72      0.19      0.07      0.02      0.00



The following attributes are included in this first model:


θ dummy variable for program, as defined by Photos A, B, C, D, E

health dummy variable for health benefits

ecol dummy variable for ecological benefits

time time for program to take effect

cost cost of the program


The second approach to defining visibility attributes is based on the individual photos. We can re-define the θ's as additive functions of the set of five visibility photos, A, B, C, D, and E:

βθj θj = βA photoAj + βB photoBj + βC photoCj + βD photoDj + βE photoEj ,

where the variables photoAj through photoEj are defined as the percentages of days realized at the visibility levels defined by those photos under program j.

The following attributes are included in the second model:


photo_A the percent of days in a year at the visibility level defined by Photo A

photo_E the percent of days in a year at the visibility level defined by Photo E

health dummy variable for health benefits

ecol dummy variable for ecological benefits

time time for program to take effect

cost cost of the program




To be able to estimate both of these models, an experimental design must be developed that is flexible enough to identify all parameters in both models. This requires sufficient variation in all of the attribute levels defined above; specifically, variation is needed across visibility programs (the θ's) and across individual photo levels A through E, as well as across the other attributes in the survey.
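
To make the two parameterizations concrete, the sketch below builds the regressors for one offered program under each approach: a program dummy (θ) plus the non-visibility attributes in the first model, and the photo percentages directly in the second. The dictionary layout, variable names, and the θ index assignment are illustrative assumptions (in Python), not the actual estimation data structure.

    # One offered program from the Canyonlands design (Table 4, choice set 2), described two ways.
    program = {"photoa": 16, "photob": 19, "photoc": 20, "photod": 27, "photoe": 17,
               "health": 0, "ecol": 0, "time": 10, "cost": 35}

    # Model 1: the visibility improvement enters as a single program dummy (theta),
    # alongside the non-visibility attributes.
    theta_index = 2                       # which of the seven programs this choice set offers (assumed)
    n_programs = 7
    model1_row = [1 if j == theta_index else 0 for j in range(1, n_programs + 1)]
    model1_row += [program["health"], program["ecol"], program["time"], program["cost"]]

    # Model 2: the visibility improvement is described by the photo percentages directly
    # (only Photos A and E enter the estimated model; see the attribute list above).
    model2_row = [program["photoa"], program["photoe"],
                  program["health"], program["ecol"], program["time"], program["cost"]]

    print(model1_row)
    print(model2_row)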


The Experimental Design


The experimental design challenge is to define a series of binary choice sets that will allow for the identification of all sets of parameters defined in the previous section. In this pilot survey, all choice sets will be binary choices offering a visibility program that can be provided at a cost compared to no program at no cost. This means that each binary choice set is fully defined by specifying the levels of the attributes that are being offered, as well as the cost.

To derive these choice sets, a 24-row, 2^11 × 4^1 × 6^1 orthogonal, main-effects design matrix was drawn from a well-regarded, on-line catalog of orthogonal arrays maintained by Warren Kuhfeld. The size of this design matrix allows for orthogonal placement of our three two-level attributes (a health benefit dummy variable, an ecological benefits dummy variable, and time, which will be 10 or 20 years), one four-level attribute (program cost, which will take values of $15, $35, $65, and $115), and one six-level attribute (the programs, θ, described in more detail below).

As described above, the goal of this analysis is to estimate the utility model in two separate ways: one that allows us to estimate marginal values for the visibility improvement programs that are predicted to occur over time (the θ's), and one that estimates marginal values for the occurrence of specific levels of visibility improvements, as defined by Photos A through E. This challenge is addressed by making sure that both approaches to measuring visibility -- the photo percentages and the definitions of the θ's -- vary sufficiently across and within choice sets. To do this, we first pull three visibility programs for each region directly from the visibility improvement paths in Tables 2 and 3; the programs pulled are at the 5%, 50%, and 100% points along those paths. Second, to get sufficient variation in the photo percentages, we create four additional programs by "perturbing" the 50% program in the following four ways: we increase and decrease the percentage occurrence of Photo A, and we increase and decrease the percentage occurrence of Photo E. In all cases, the increase or decrease is offset by an equal change in Photo C. This process results in a total of seven visibility programs.1,2
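
A minimal Python sketch of this perturbation step. The base program is the 50% Canyonlands program from Table 3; the ±10 and ±4 percentage-point shifts are illustrative assumptions, not the amounts used in the actual design (which also mixes in additional design-matrix columns, per footnote 2):

    # The 50% program on the Canyonlands path (Table 3, year 2034), in percent of days.
    base = {"A": 37, "B": 26, "C": 19, "D": 15, "E": 4}

    def perturb(program, photo, delta):
        """Shift one photo's percentage by delta and offset it in Photo C,
        leaving the total percentage of days unchanged."""
        p = dict(program)
        p[photo] += delta
        p["C"] -= delta
        assert sum(p.values()) == sum(program.values())
        return p

    # Four additional programs: Photo A shifted up/down, Photo E shifted up/down.
    extra_programs = [perturb(base, "A", +10), perturb(base, "A", -10),
                      perturb(base, "E", +4), perturb(base, "E", -4)]
    for p in extra_programs:
        print(p)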




The experimental designs for the Canyonlands and Great Smokies regions are presented in Tables 4 and 5, respectively.3 Graphs showing the range of values for Photos A and E follow these tables.4 Each design has 24 choice sets, which will be randomly assigned to four different survey versions with six questions per version.



Table 4. Canyonlands Design -- 4 survey versions with 6 questions each


+-------------------------------------------------------------------------------------+

| version health ecol time photoa photob photoc photod photoe cost |

|-------------------------------------------------------------------------------------|

1. | 1 0 1 20 30 26 27 15 3 15 |

2. | 1 0 0 10 16 19 20 27 17 35 |

3. | 1 0 0 20 30 26 27 15 3 65 |

4. | 1 0 0 10 72 19 7 2 0 115 |

5. | 1 1 0 10 49 26 3 15 8 65 |

6. | 1 1 0 20 49 26 3 15 8 115 |

|-------------------------------------------------------------------------------------|

7. | 2 1 1 20 30 26 22 15 8 15 |

8. | 2 1 1 20 16 19 20 27 17 15 |

9. | 2 0 1 20 49 26 3 15 8 35 |

10. | 2 0 1 10 37 26 19 15 4 35 |

11. | 2 1 1 10 49 26 8 15 3 35 |

12. | 2 1 1 10 37 26 19 15 4 65 |

|-------------------------------------------------------------------------------------|

13. | 3 0 1 20 16 19 20 27 17 65 |

14. | 3 0 0 20 37 26 19 15 4 115 |

15. | 3 1 0 10 30 26 27 15 3 115 |

16. | 3 1 0 20 30 26 22 15 8 35 |

17. | 3 1 1 10 49 26 3 15 8 15 |

18. | 3 0 0 10 49 26 8 15 3 65 |

|-------------------------------------------------------------------------------------|

19. | 4 1 0 10 16 19 20 27 17 15 |

20. | 4 1 0 20 72 19 7 2 0 35 |

21. | 4 0 0 20 49 26 8 15 3 15 |

22. | 4 0 1 10 49 26 8 15 3 115 |

23. | 4 0 1 10 72 19 7 2 0 115 |

24. | 4 1 1 20 72 19 7 2 0 65 |

+-------------------------------------------------------------------------------------+




Table 5. Great Smokies Design -- 4 survey versions with 6 questions each

+-------------------------------------------------------------------------------------+

| version health ecol time photoa photob photoc photod photoe cost |

|-------------------------------------------------------------------------------------|

1. | 1 0 0 10 49 25 23 3 0 65 |

2. | 1 0 0 20 100 0 0 0 0 115 |

3. | 1 1 0 10 49 25 18 3 5 65 |

4. | 1 0 0 20 64 25 3 3 5 65 |

5. | 1 1 0 20 100 0 0 0 0 115 |

6. | 1 0 0 20 64 25 3 3 5 115 |

|-------------------------------------------------------------------------------------|

7. | 2 1 0 20 19 24 21 22 14 35 |

8. | 2 1 1 10 100 0 0 0 0 65 |

9. | 2 0 0 10 19 24 21 22 14 15 |

10. | 2 0 1 20 49 25 18 3 5 15 |

11. | 2 0 1 10 49 25 23 3 0 115 |

12. | 2 0 1 20 64 25 8 3 0 65 |

|-------------------------------------------------------------------------------------|

13. | 3 1 1 20 64 25 8 3 0 115 |

14. | 3 0 1 10 100 0 0 0 0 35 |

15. | 3 1 0 10 64 25 8 3 0 15 |

16. | 3 0 0 10 64 25 8 3 0 35 |

17. | 3 1 1 20 49 25 23 3 0 15 |

18. | 3 1 1 20 19 24 21 22 14 65 |

|-------------------------------------------------------------------------------------|

19. | 4 0 1 10 19 24 21 22 14 15 |

20. | 4 0 1 20 49 25 18 3 5 35 |

21. | 4 1 1 10 64 25 3 3 5 35 |

22. | 4 1 1 10 64 25 3 3 5 15 |

23. | 4 1 0 10 49 25 18 3 5 115 |

24. | 4 1 0 20 49 25 23 3 0 35 |

+-------------------------------------------------------------------------------------+



Testing the Experimental Design


To verify that the experimental design will identify all parameters, a simulation was run on the Canyonlands experimental design with 1,000 replications. Each replication assumed a sample size of 400 respondents: 100 responses to each of the four survey versions. With each survey version containing six questions, the total number of choice observations in each replication was 2,400.

The simulation assumed the following specification for utility:


U = .04*PhotoA - .05*PhotoE + .7*Health + 1.15*Ecol -.03*Time -.025*Cost + ε
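
A minimal Python sketch of how one replication can generate responses from this assumed utility function. The design rows shown are a small excerpt from Table 4; the full simulation would use all 24 choice sets and 100 respondents per version:

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed utility weights from the specification above.
    b = {"photoa": 0.04, "photoe": -0.05, "health": 0.7, "ecol": 1.15, "time": -0.03, "cost": -0.025}

    # A few choice sets excerpted from the Canyonlands design (Table 4):
    # (health, ecol, time, photoa, photoe, cost)
    design = [(0, 1, 20, 30, 3, 15),
              (0, 0, 10, 16, 17, 35),
              (1, 0, 10, 49, 8, 65)]

    def simulate_choice(row):
        health, ecol, time, photoa, photoe, cost = row
        v = (b["photoa"] * photoa + b["photoe"] * photoe + b["health"] * health
             + b["ecol"] * ecol + b["time"] * time + b["cost"] * cost)
        # The program is chosen over "no program" (utility normalized to zero)
        # with the binary logit probability implied by extreme-value errors.
        p_yes = 1.0 / (1.0 + np.exp(-v))
        return int(rng.random() < p_yes)

    responses = [[simulate_choice(row) for row in design] for _ in range(100)]  # 100 respondents
    print(np.mean(responses, axis=0))  # simulated acceptance rate for each excerpted choice set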


Simulation results for both types of models to be estimated are provided in Table 6. All parameters appear to be well estimated, given the sample size.






Table 6. Canyonlands Simulation Results



Mean estimation (Model 1: program dummies)          Number of obs = 1000


---------------------------------------------------------------------

| Mean Std. Err. [95% Conf. Interval]

--------------------+------------------------------------------------

response_b_health | .6526956 .0025738 .6476449 .6577464

response_b_ecol | 1.151795 .0023456 1.147192 1.156398

response_b_time | -.0386896 .0001654 -.0390141 -.038365

response_b_price | -.0258266 .0000354 -.0258961 -.0257572

response_b_Itheta_2 | 1.048048 .0046263 1.038969 1.057126

response_b_Itheta_3 | 1.257548 .0044654 1.248785 1.266311

response_b_Itheta_4 | 1.44215 .0045162 1.433288 1.451012

response_b_Itheta_5 | 1.783491 .0040151 1.775612 1.791369

response_b_Itheta_6 | 1.979975 .0038199 1.972479 1.987471

response_b_Itheta_7 | 3.073499 .0047036 3.064269 3.082729

---------------------------------------------------------------------


Mean estimation (Model 2: photo percentages)        Number of obs = 1000


-------------------------------------------------------------------

| Mean Std. Err. [95% Conf. Interval]

------------------+------------------------------------------------

response_b_health | .7048772 .0024329 .700103 .7096515

response_b_ecol | 1.155364 .0022502 1.150949 1.15978

response_b_time | -.0300786 .0001951 -.0304615 -.0296958

response_b_price | -.0250515 .0000352 -.0251206 -.0249823

response_b_photoa | .0400566 .0000779 .0399038 .0402094

response_b_photoe | -.0505181 .0002356 -.0509805 -.0500557

-------------------------------------------------------------------




Analysis of Collected Data


As described above, choice data will be analyzed using standard discrete choice models in the RUM framework and illustrative values for various visibility improvement scenarios will be calculated. In addition, standard errors and confidence intervals will be calculated using the Krinsky and Robb (1986) simulation method. We will then perform several tests to evaluate the robustness of our results.


  1. We will test whether the individual coefficients in the choice model are statistically significant at conventional levels and whether the signs are as anticipated. Specifically, we expect the coefficients on the visibility improvement and on the health and ecosystem impacts (specified either as 'no change' or 'a small improvement') to be positive. Conversely, we expect the coefficients on cost and on timing to be negative (assuming that respondents would prefer specified improvements to occur sooner rather than later).


  2. We will test whether choice model coefficients and estimated WTP values are significantly different between the Four Corners and Southeast regions. For example, Swait and Louviere (1993) and Paterson et al. (2008) describe hypothesis testing methods for discrete choice models.


  3. We will compare our WTP estimates to those in previous research and published literature. While this comparison is influenced by differences in study design and the passage of time, it nonetheless provides a check on the reasonableness of estimated values.
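
As an illustration of the Krinsky and Robb (1986) procedure referenced above: it draws parameter vectors from the estimated asymptotic distribution of the coefficients, recomputes WTP for each draw, and takes percentiles of the simulated WTP values as the confidence interval. A minimal Python sketch with placeholder estimates (not study results):

    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder point estimates and covariance for the (health, cost) coefficients.
    beta_hat = np.array([0.70, -0.025])
    vcov = np.array([[0.01, 0.0],
                     [0.0, 0.00001]])

    # Krinsky-Robb: draw from the asymptotic normal distribution of the estimates,
    # compute WTP for each draw, and take percentiles as the confidence interval.
    draws = rng.multivariate_normal(beta_hat, vcov, size=5000)
    wtp_draws = -draws[:, 0] / draws[:, 1]          # WTP for the health attribute
    ci_low, ci_high = np.percentile(wtp_draws, [2.5, 97.5])
    print(f"WTP point estimate: {-beta_hat[0] / beta_hat[1]:.1f}")
    print(f"95% simulated confidence interval: ({ci_low:.1f}, {ci_high:.1f})")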



3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


A number of methods will be used to maximize survey response rates, as summarized below:


  • Use of USPS Delivery Sequence File as Sample Frame- By drawing the sample from a comprehensive list of residential mailing addresses, we avoid the incomplete coverage of the target population that can be associated with other sampling frames.


  • Careful Survey Design and Focus Group Pre-Testing- The survey was developed and rigorously tested in 20 two-hour focus group sessions (four groups in each of five different states). The questions are worded in a manner that is easy to understand and organized in a logical order. In addition, we have consulted a graphic design expert to assist with survey graphics, layout and presentation.



  • Administration by a University Survey Research Center- Surveys that are government/university sponsored tend to receive higher response rates (Heberlein and Baumgartner, 1978). Our survey will be administered by the Virginia Tech Center for Survey Research.



  • Best-Practice Implementation Sequence- Following Dillman (2000), households selected to participate in the survey will receive:



    • A pre-survey notification (initial contact) letter on NPS letterhead and signed by the Director of the Air Resources Division explaining the purpose and significance of the survey.


    • One week later, respondents will be sent a copy of the survey with a cover letter (including a toll-free number for respondents to call with any questions) and an incentive in the form of a $2 bill. The use of modest monetary incentives has been shown to significantly increase survey response rates (Rathbun and Baumgartner, 1996; Warriner et al., 1996). Furthermore, incentives have been shown to reduce nonresponse bias by increasing cooperation, particularly among those who are not interested or involved in the survey topic (Groves, Singer, and Corning, 2000; Groves, Presser, and Dipko, 2004; Groves et al., 2006).


    • Within five days of the initial survey mailing a reminder postcard will be sent.


    • Within three weeks of the initial survey mailing a second copy of the survey will be sent. Incoming responses will be tracked and the second mailing may be sent earlier if returns are tapering significantly.


    • Three weeks after the second survey mailing, the data collection period will conclude and the nonrespondent follow-up surveys will be implemented.



Identifying Possible Nonresponse Bias


Nonresponse bias refers to the expected difference between an estimate from respondents in the sample and an estimate from the target population; it may arise from both unit nonresponse (a household does not return the survey) and item nonresponse (a returned survey is incomplete). Of particular concern in this context is whether nonresponse results in biased measures of WTP for visibility improvements.


We propose three specific procedures for investigating potential nonresponse bias in our collected survey data:


  1. Benchmarking- Responses to demographic questions (e.g., age, income, gender, race, education) will be compared to data from the 2010 Census. In addition, the survey includes several questions regarding opinions on environmental issues and government programs drawn from the National Opinion Research Center General Social Survey (collectively, questions 26 to 36 as described in Part A); responses to these questions will also be compared to the corresponding General Social Survey results.

  2. Late Responders- We will compare survey responses, respondent characteristics and estimated WTP values across individuals who returned their surveys at different times during the data collection period. For example, we can compare individuals who returned their surveys after the first mailing versus the second mailing. Although all of these people are responders, those who respond later may share important characteristics with non-responders.

  3. Nonrespondent Telephone and Mail Follow-Up Surveys- A sample of approximately one-half of survey nonrespondents with matched telephone numbers will be drawn and contacted to complete a short follow-up survey consisting of a subset of five of the benchmarking questions described in (1) above. Up to six call-backs will be attempted to complete the survey. In addition, a sample of nonrespondents without matched numbers will be contacted to complete the same brief questionnaire via mail; to encourage response, these follow-up questionnaires will be sent via Priority Mail. We will then compare the phone responses to the mail responses.



Statistically significant differences in the means and/or distributions of the variables described in (1), (2), and/or (3) above would provide evidence of likely nonresponse bias.
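
A minimal Python sketch of one such comparison, applying a two-sample test to a demographic variable collected from respondents and from the nonrespondent follow-up. The data here are simulated placeholders; the actual analysis would cover the full set of benchmarking variables:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Placeholder data: ages reported by mail-survey respondents and by nonrespondent follow-ups.
    respondent_age = rng.normal(52, 15, size=400)
    nonrespondent_age = rng.normal(47, 15, size=120)

    # Welch's two-sample t-test on the means; a significant difference on a characteristic
    # related to WTP would be one piece of evidence of potential nonresponse bias.
    t_stat, p_value = stats.ttest_ind(respondent_age, nonrespondent_age, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")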



Adjusting for Nonresponse Bias


In making adjustments for potential nonresponse bias, we are concerned with factors that are related both to response rates and to individuals' WTP for visibility improvements. The most common approach for testing and correcting for sample selection is the Heckman two-stage model (Heckman, 1979). The first stage entails modeling the likelihood of responding as a function of individual characteristics; here we will rely upon the data collected in the mail and telephone nonrespondent follow-up surveys regarding demographic characteristics and responses to attitudinal questions. The estimated parameters from the first stage are used to calculate the inverse Mills ratio, which is included in the second stage to correct for selection under certain assumptions. In our case, the second stage consists of the models explaining responses to the valuation questions.
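
A minimal Python sketch of the two-stage logic, using a probit response equation and an inverse Mills ratio term. The simulated data and variable names are illustrative assumptions; for simplicity the second stage here is a linear outcome equation, whereas in the study the second stage would be the choice models described earlier:

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    n = 2000

    # Placeholder data: education affects both the decision to respond and the outcome (WTP),
    # and a shared error term induces selection bias.
    educ = rng.normal(0, 1, n)
    u = rng.normal(0, 1, n)
    responded = (0.3 + 0.8 * educ + u > 0).astype(int)
    wtp = 30 + 5 * educ + 6 * u + rng.normal(0, 8, n)    # observed only for respondents

    # Stage 1: probit model of the probability of responding.
    Z = sm.add_constant(educ)
    probit = sm.Probit(responded, Z).fit(disp=0)
    xb = probit.fittedvalues                             # linear index Z'gamma
    mills = norm.pdf(xb) / norm.cdf(xb)                  # inverse Mills ratio

    # Stage 2: outcome equation for respondents only, with the Mills ratio added as a regressor.
    sel = responded == 1
    X = sm.add_constant(np.column_stack([educ[sel], mills[sel]]))
    second_stage = sm.OLS(wtp[sel], X).fit()
    print(second_stage.params)   # a significant Mills-ratio coefficient indicates selection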


Finally, we will test for significant differences in WTP estimates from the standard and selection models.



4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


The purpose of the pilot study is to determine whether survey, valuation scenario and experimental design parameters are functioning properly prior to implementation of the full survey. The pilot survey materials were developed and tested extensively through a series of focus groups and informed by an exhaustive review of past visibility valuation literature. The focus groups were conducted in five states in 2008 and 2009. Four groups were held in each state (two groups per evening on consecutive evenings) at professional focus group facilities. Respondents were randomly recruited from samples of local telephone numbers.


  • Atlanta, GA: The first set of groups focused on investigating respondents' understanding of "National Parks and Wilderness Areas"; evaluating the degree to which respondents focus on visibility improvements versus any health and/or ecological benefits resulting from reduced haze; evaluating the degree to which respondents believe that visibility improvements will only occur within a designated "visibility improvement region"; investigating respondents' reactions to images selected to depict five levels of visibility due to differing levels of haze; determining the best approach for presenting numerical and graphical information about the distribution of visibility levels throughout the year; and exploring respondents' reactions to different payment vehicles for eliciting willingness to pay for visibility improvements.


  • Chicago, IL: Key objectives of the second set of groups included evaluating respondents’ understanding of how particles that form haze move to National Parks and National Wilderness Areas; evaluating whether participants were able to understand how the Regional Haze Rule will result in improved air quality in National Parks and National Wilderness Areas; evaluating respondents' reactions to digitally manipulated photographs that depict visibility at five different levels of haze; evaluating respondents’ reactions to numerical and graphical presentations of information about the distribution of visibility levels throughout the year, under baseline (current) conditions and improved (reduced haze) conditions; and, evaluating respondents’ reactions to the use of electricity bills as the payment vehicle used for eliciting willingness to pay for visibility improvements.

  • Sacramento, CA: Key objectives of the third set of groups included evaluating respondents’ ability to understand bar charts depicting information about the distribution of visibility levels throughout the year under baseline conditions (no implementation of haze-reduction program), natural conditions (all human-caused haze eliminated), and conditions under a haze-reduction program; evaluating respondents’ reactions to the introduction of visibility improvement program attributes and levels; and, evaluating respondents’ responses to draft attribute-based choice questions.

  • Denver, CO: The fourth set of groups focused on further refining the description and presentation of choice question attributes and levels. In addition, two variants of the survey were tested: a regional section, which focuses only on improvements within the one visibility improvement region closest to where the participants live, and a national section, which considers visibility improvements within all seven improvement regions across the United States.

  • Boston, MA: The fifth and final set of groups in Boston focused on final revisions to the choice questions. Specifically, the attribute table was divided into two columns, with-program and without-program to explicitly define the status quo conditions; the visibility improvement scenario represented by bar charts was moved to the top of the table to encourage respondents to explicitly consider this attribute when answering each choice question; and, two bar chart formats were investigated, one with current and improved conditions on the same chart (as in previous groups) and one with separate charts for each state.


Upon completion of the Boston focus groups the study team was confident that the choice question format with separate charts was superior and that the remainder of the information and questions in the survey were functioning properly. All survey materials were then provided to experts in the field of stated-preference and visibility valuation for peer review (Dr. Vic Adamowicz and Dr. William Schulze). Comments from these experts were incorporated and final materials for the pilot survey were developed. Full reports describing the focus group proceedings and the peer review reports are available for review upon request.


Dr. Vic Adamowicz, Distinguished University Professor, Department of Rural Economy, University of Alberta, (780) 492-4603

Dr. William Schulze, Professor, Department of Applied Economics and Management, Cornell University, (607) 255-9611



5. Provide the names and telephone numbers of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


  • Dr. Kevin Boyle, Professor and Department Head, Agricultural and Applied Economics, Virginia Tech University, (540) 231-2907. Dr. Boyle is the co-Principal Investigator; he will direct survey development, design, implementation and data analysis.


  • Dr. Richard Carson, Professor, Department of Economics University of California, San Diego, (858) 534-3384. Dr. Carson is the co-Principal Investigator; he will direct survey development, design, implementation and data analysis.


  • Dr. Susan Willis-Walton, Director, Virginia Tech Center for Survey Research, Virginia Tech University, (540) 231-3695. Dr. Willis-Walton will oversee mail and phone survey administration.


  • Mr. Robert Paterson, Principal, Industrial Economics, Incorporated, (617) 354-0074. Mr. Paterson will provide technical support for the pilot study and lead the data analysis.







REFERENCES

Dillman, D.A. 2000. Mail and Internet surveys: The Tailored Design Method. New York, NY: John Wiley & Sons.


Groves, R. M. and M. P. Couper. 1998. Nonresponse in Household Interview Surveys. New York, Wiley.

Groves, R. M., M. P. Couper, S. Presser, E. Singer, R. Tourangeau, G. P. Acosta and L. Nelson. 2006. "Experiments in Producing Nonresponse Bias." Public Opinion Quarterly 70(5): 720-736.

Groves, R. M., S. Presser and S. Dipko. 2004. "The Role of Topic Interest in Survey Participation Decisions." Public Opinion Quarterly 68(1): 2-31.

Haab, T.C. and K.E. McConnell. 2002. Valuing Environmental and Natural Resources: The Econometrics of Non-Market Valuation. Edward Elgar, Northampton, MA.


Heckman, J. 1979. "Sample selection bias as a specification error". Econometrica 47 (1): 153–61


Krinsky, I and A. L. Robb. 1986. “On Approximating the Statistical Properties of Elasticities.” Review of Economic and Statistics 68: 715-719.


Kuhfeld, W.F. Orthogonal Arrays. Advanced Analytics Division, SAS Institute. http://support.sas.com/techsup/technote/ts723_Designs.txt


Heberlein, T. A. and R. Baumgartner. 1978. "Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature." American Sociological Review 43(4): 447-462.

Holmes, T.D. and K.J. Boyle. 2005. “Dynamic Learning and Context-Dependence in Sequential Attributed-Based Stated-Preference Valuation Questions,” Land Economics, 81(1): 114-126.

Holmes, T.D., K.J. Boyle, M.F. Teisl and B. Roe. 2002. "A Comparison of Conjoint Analysis Response Formats: Reply," American Journal of Agricultural Economics, 84(4): 1172-1175.

Ozdemir, Semra and K.J. Boyle. 2009. “Convergent Validity of Attribute-Based Choice Questions in Stated-Preference Studies,” Environmental and Resource Economics, 42(2): 247-264

Paterson, Robert W., Kevin J. Boyle, Christopher F. Parmeter, James E. Neumann and Paul De Civita. 2008. “Heterogeneity in Preferences for Smoking Cessation,” Health Economics, 17(12)

Rathbun, P.R. and R.M. Baumgartner. 1996. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the 1996 Joint Statistical Meetings. Chicago, Illinois. June.

Swait, J. and J. Louviere. 1993. The Role of the Scale Parameter in the Estimation and Comparison of Multinomial Logit Models. Journal of Marketing Research. 30: 305-314.

Warriner, K., J. Goyder, H. Gjertsen, P. Hohner, and K. McSpurren. 1996. "Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment." Public Opinion Quarterly 60 (4): 542-562.



1 Because two programs turned out to be very close for the Great Smokies region, only six programs are used in the final experimental design for that region.

2 Since the design matrix only accommodates a six-level attribute, variation over the seven programs is manufactured by mixing information from two additional two-level columns from the design matrix into the perturbation routine.

3 A price adjustment was made on a small number of choice sets to decrease the probability of complete dominance -- choice sets where all respondents would choose the same alternative. When a generated choice set paired high visibility (the 100% point on the visibility path) with the lowest cost ($15), or low visibility (the 5% point) with the highest cost ($115), the cost was replaced with the more consistent value -- $115 for the high visibility program and $15 for the low visibility program.

4 Photos A and E are the primary focus of the visibility analysis, so variation in these levels is most important.


