
Supporting Statement B

Glen Canyon Survey

OMB Control Number 1024-0270



1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


Mail Survey: We will purchase a randomly generated list of names, addresses, and telephone numbers from Survey Sampling International (SSI). The respondent universe for this collection will be adults (18 years of age or older) living in households within the Glen Canyon Dam region and the rest of the United States (see Table 2 below). We will mail surveys to a total of 4,626 residential mailing addresses, with an expected response rate of 30% (n = 1,041) for the national survey and 40% (n = 462) for the regional survey.

Table 2: Sample Size and Response Rates

Sample Frame | Total Number of Households | Number of Sampled Households | Estimated Response Rate | Estimated Number of Responses
Glen Canyon Dam Regional Sample (UTAH: Washington, Kane, and San Juan Counties; NEVADA: Clark County; ARIZONA: Mohave, Coconino, Navajo, and Apache Counties) | 939,900* | 1,156 | 40% | 462
National Household Sample | 115,226,000* | 3,470 | 30% | 1,041
Total | | 4,626 | | 1,503

* U.S. Census QuickFacts (2014)
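
As an arithmetic check on the expected responses in Table 2 (a direct application of the assumed response rates):

$1{,}156 \times 0.40 \approx 462 \quad\text{and}\quad 3{,}470 \times 0.30 = 1{,}041,$

for a combined total of $462 + 1{,}041 = 1{,}503$ expected completed surveys.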



Non-response survey: From the list of all non-respondents, we will randomly select 200 households with associated telephone numbers and call them. If we do not reach our goal of 70 responses, we will return to the list of remaining non-respondents and randomly select additional households until we reach 70 responses.


Table 3: Non-response Survey

 | Total Number of Phone Numbers | Randomly Selected Telephone Numbers | Estimated Number of Responses
Non-response Survey | 3,123 | 200 | 70


2. Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Using the mailing addresses purchased from SSI, we will follow Dillman's (2007) repeat-contact mail protocol to initiate the mail-out process. A cover letter will explain the purpose of the survey and request that the adult in the household with the most recent birthday complete the questionnaire; however, if that individual is not available, any adult living in the household will be eligible to respond to the questionnaire. The specific Dillman mailing protocol includes: (1) an initial contact letter introducing the survey; (2) a survey package one week later; (3) a reminder postcard one week after the survey package is mailed; (4) a second full survey package sent to non-respondents 2-3 weeks after the reminder postcard; and (5) a follow-up telephone call to a random sample (n = 200) of all non-respondents to conduct the non-response survey.
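
For illustration only, a minimal sketch of the contact schedule implied by this protocol, written in Python with a hypothetical start date (the actual mailing dates will be set by the project team, and step 4 may fall 2-3 weeks after the postcard):

```python
from datetime import date, timedelta

# Hypothetical kickoff date, for illustration only.
start = date(2016, 1, 4)

# Contact steps and offsets implied by the Dillman repeat-contact protocol above.
schedule = [
    ("1. Initial contact letter", start),
    ("2. Survey package", start + timedelta(weeks=1)),
    ("3. Reminder postcard", start + timedelta(weeks=2)),
    ("4. Second survey package (non-respondents)", start + timedelta(weeks=4)),
    ("5. Non-response phone follow-up (n = 200)", start + timedelta(weeks=7)),
]

for step, when in schedule:
    print(f"{when:%Y-%m-%d}  {step}")
```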


Estimation Procedure: Once the collection is completed, the data will be cleaned, coded, and edited. All data will be analyzed using SAS statistical software. The software will also be used to perform statistical tests on responses to key survey measures between the two primary subpopulations (national and regional).


We will generate statistics to summarize and compare responses, response rates, and individual characteristics across the groups defined in the sampling plan. A post-stratification adjustment will also be generated to correct for any detected non-response bias.
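
A minimal sketch of the kind of post-stratification adjustment described above, in Python with hypothetical age strata and illustrative counts; the production adjustment will be computed in SAS using the strata defined in the sampling plan:

```python
# Post-stratification: weight each respondent so the weighted sample matches
# known population shares within each stratum (strata here are hypothetical).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g., Census data
sample_count = {"18-34": 250, "35-54": 500, "55+": 750}         # respondents per stratum

n_total = sum(sample_count.values())
weights = {
    stratum: population_share[stratum] / (sample_count[stratum] / n_total)
    for stratum in population_share
}

for stratum, w in weights.items():
    print(f"{stratum}: weight = {w:.3f}")
# Under-represented strata (here, 18-34) receive weights > 1;
# over-represented strata receive weights < 1.
```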



Estimating Household’s Total Willingness-to-Pay (WTP)

To analyze the data from the conjoint/discrete choice experiment questions, we will apply a random utility modeling (RUM) framework, which assumes that survey respondents implicitly assign utility to each choice option presented to them. This utility can be expressed as

$U_{ij} = V(X_{ij}, Z_i; \beta_i) + e_{ij}$,

where

  • $U_{ij}$ is individual $i$'s utility for a choice option (i.e., restoration option) $j$,

  • $V(\cdot)$ is the nonstochastic part of utility, a function of $X_{ij}$ and $Z_i$,

  • $X_{ij}$ represents a vector of attribute levels for the option $j$ (including its cost) presented to the respondent,

  • $Z_i$ is a vector of personal characteristics,

  • $\beta_i$ is a vector of attribute-specific preference parameters, and

  • $e_{ij}$ is a stochastic term, which captures elements of the choice option that affect individuals' utility but are not observable to the analyst.

On each choice occasion, respondents are assumed to select the option that provides the highest level of utility. By presenting respondents with a series of choice tasks and options with different values of $X_{ij}$, the resulting choices reveal information about the preference parameter vector $\beta_i$.


Conditional Logit Estimation


To estimate the parameters of the conjoint model, we will use a standard conditional logit (CL) model (McFadden 1986), which assumes the disturbance term follows a Type I extreme-value error structure and uses maximum-likelihood methods to estimate the attribute parameters. The conditional logit is a computationally straightforward estimation approach that can provide useful insights into the general pattern of respondents’ preferences, trade-offs, and values.
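
For reference, under the Type I extreme-value assumption the probability that respondent $i$ selects option $j$ from choice set $C$ takes the standard conditional logit form:

$P_{ij} = \dfrac{\exp(V(X_{ij}))}{\sum_{k \in C} \exp(V(X_{ik}))}.$

A minimal sketch of maximum-likelihood estimation of this model, using simulated data and generic Python optimization rather than the SAS routines that will be used in production (all names and values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate a small conjoint dataset (illustrative only): each respondent
# chooses among J options described by one non-cost attribute and a cost.
n, J = 500, 3
X = rng.normal(size=(n, J, 2))              # columns: [attribute, cost]
beta_true = np.array([1.0, -0.5])

# Type I extreme-value (Gumbel) errors imply conditional logit choice probabilities.
U = X @ beta_true + rng.gumbel(size=(n, J))
choice = U.argmax(axis=1)                   # index of each respondent's chosen option

def neg_log_likelihood(beta):
    V = X @ beta                            # systematic utilities, shape (n, J)
    V = V - V.max(axis=1, keepdims=True)    # stabilize the exponentials
    log_p = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n), choice].sum()

result = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print("Estimated beta:", result.x)          # should be close to beta_true
```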

The parameter estimates from the CL model will then be used to estimate the average marginal value of each non-cost attribute. They will also be used to estimate the average WTP for acquiring the combination of attributes associated with one management scenario ($X^1$) compared to the attributes of another scenario, e.g., the no action alternative ($X^0$):

$WTP = -\dfrac{1}{\beta_{cost}}\left[V(X^1) - V(X^0)\right]$,

where $\beta_{cost}$ is the estimated coefficient on the cost attribute.


The standard errors and confidence intervals for these value estimates will be estimated using the delta method (Greene 2003) or the Krinsky and Robb (1986) simulation method.
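
A minimal sketch of the Krinsky and Robb (1986) simulation approach, in Python with a hypothetical coefficient vector and covariance matrix standing in for the estimated CL model output (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CL estimates: one non-cost attribute coefficient and the cost
# coefficient, with their estimated variance-covariance matrix.
beta_hat = np.array([0.8, -0.02])            # [attribute, cost]
vcov = np.array([[1.0e-2, 1.0e-4],
                 [1.0e-4, 1.0e-5]])

# Krinsky-Robb: draw parameter vectors from the asymptotic normal distribution
# of the estimator, compute WTP for each draw, and take percentile bounds.
draws = rng.multivariate_normal(beta_hat, vcov, size=10_000)
wtp = -draws[:, 0] / draws[:, 1]             # marginal WTP = -beta_attr / beta_cost

lo, hi = np.percentile(wtp, [2.5, 97.5])
print(f"Mean WTP: {wtp.mean():.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
```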


Degree of accuracy for the purpose described in the justification


Based on the results of the pilot study, we expect a response rate of 30% to 40%, resulting in more than 1,500 completed surveys for the two populations combined. The expected sampling errors associated with the key survey parameters (proportions) would be within +/- 0.05 at the 95% confidence level (based on an estimated proportion of 0.5).
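
As a worked check under the stated assumption of a proportion of 0.5, using the smaller (regional) expected sample of $n = 462$, the half-width of a 95% confidence interval is

$1.96 \times \sqrt{\dfrac{0.5 \times 0.5}{462}} \approx 0.046,$

which is consistent with the +/- 0.05 figure above; the larger national sample yields a tighter interval.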


The accuracy of the WTP values to be estimated from the conjoint models cannot be specified a priori through an analytical formula. Work by Orme (2006) indicates that, based on standard rules of thumb, the sample sizes planned for the conjoint model estimations are more than adequate to estimate direct-effect values for all included attributes. Results from the Klamath passive-use study (Mansfield et al. 2012), which employed a very similar conjoint question, estimated mean household WTP for its national sample with a 95% confidence interval of +/- 25% of the mean WTP value. We would expect our full-sample results to return estimates of WTP with similar precision.


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


Non-response bias is the expected difference between an estimate from the sample members who respond to the survey and an estimate from the target population. The overall population mean can be written as:

$\bar{y} = p_r \bar{y}_r + p_{nr} \bar{y}_{nr}$ (1)

Equation 1 shows that the overall population estimate $\bar{y}$ depends on the proportions of respondents and non-respondents (denoted $p_r$ and $p_{nr}$, respectively, where $p_r + p_{nr} = 1$) and the mean responses of respondents and non-respondents (denoted $\bar{y}_r$ and $\bar{y}_{nr}$, respectively). The bias in an estimate due to non-response is given by the following equation:

$\mathrm{Bias}(\bar{y}_r) = \bar{y}_r - \bar{y}$ (2)


The extent to which non-response bias occurs ultimately depends on (1) the extent of missing data and (2) the difference in an estimate between respondents and non-respondents. Substituting Equation 1 into Equation 2, the bias can be expressed in the following equation:

$\mathrm{Bias}(\bar{y}_r) = p_{nr}(\bar{y}_r - \bar{y}_{nr})$ (3)

This reveals that the non-response bias depends on two components: the non-response rate ($p_{nr}$) and the difference between the mean responses of respondents and non-respondents ($\bar{y}_r - \bar{y}_{nr}$). If both components are small, then the bias should be negligible. For the bias to be significant, there must be a large non-response rate ($p_{nr}$), a large difference between the mean responses, or both.


The likelihood (propensity, probability) of responding to the survey may be related to sampling-unit characteristics, such as age; for example, young people may be less likely to respond than older people. In that case, both components of non-response bias (namely $p_{nr}$ and $\bar{y}_r - \bar{y}_{nr}$) would be magnified, because young people would be under-represented and, if they do respond, they may respond differently from older people. The non-response bias can then be expressed in another way, as a function of how correlated the response propensity is with a survey outcome variable:

$\mathrm{Bias}(\bar{y}_r) \approx \dfrac{\sigma_{yp}}{\bar{p}}$ (4)

where $p$ is the response propensity, $\bar{p}$ is the mean propensity in the target population over sample realizations (given the sample design and recruitment realizations), and $\sigma_{yp}$ is the covariance between the survey outcome variable $y$ and the response propensity $p$. The stronger the relationship ($\sigma_{yp}$) between the survey outcome variable and response behavior, the larger the bias. Equation 4 also reveals that, within the same survey, different survey outcomes can be subject to different non-response biases. Some outcomes, unrelated to the propensity to respond, can be immune from the biasing effects of non-response, while others can be exposed to large biases.

Both Equations 3 and 4 also imply that higher response rates do not necessarily mean low non-response bias for any given survey or estimate. For example, if response rates are increased using devices that are not disproportionately attractive to the low-propensity groups, non-response bias may increase despite high response rates. It should be noted that the two viewpoints on non-response bias in Equations 3 and 4 implicitly assume that all other sources of bias, especially measurement error, are absent.
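
As a numerical illustration of Equation 3, under purely hypothetical values: with a non-response rate of $p_{nr} = 0.6$ and respondent and non-respondent means of $\bar{y}_r = 0.50$ and $\bar{y}_{nr} = 0.40$,

$\mathrm{Bias}(\bar{y}_r) = 0.6 \times (0.50 - 0.40) = 0.06,$

so the respondent mean would overstate the population mean by 0.06 even with a large number of completed surveys.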


The study design aims to address the major sources of survey error that pose a substantial threat to the accuracy of survey estimates. Our approach to non-response bias includes three goals: increasing response rates, identifying non-response bias, and correcting for non-response bias post-data collection. We discuss each element in more detail below.

Increasing Response Rates

Coverage error: We will address potential coverage error by using address-based sampling and selecting the survey samples from the USPS's Delivery Sequence File (DSF). The DSF contains all postal mailing addresses in the United States. Using mail as the primary contact method avoids the under-coverage problems found with other methods by including not only residential addresses but also other address types, such as post office boxes and general delivery. Unlike other sampling frames, the DSF also provides the ability to accurately stratify the sample geographically.

Encourage response to mail survey: We acknowledge that response rates in household surveys have been declining (Groves and Couper 1998; Stussman, Dahlhamer, and Simile 2005), and with this trend, non-response poses a substantial threat to survey inference. We will use Dillman's (2007) Tailored Design Method (TDM) to optimize our survey response rate.

Identifying Possible Non-response Bias

Our sample design and data collection methods allow for a thorough non-response bias analysis after data collection is completed to assess whether non-response bias exists. This will be done in two ways:

  • Respondents to the later mailings will be viewed as less cooperative than respondents to the first mailing. In our experience, respondents to later mailings tend to have low response propensity and tend to have characteristics similar to non-respondents. By comparing the WTP estimates between the first and second mailings, we will be able to indirectly measure the correlation between response propensity and responses to survey outcome variables ($\sigma_{yp}$ in Equation 4); a sketch of this wave comparison appears after this list.

  • After the third mailing, we will contact a random sample of 200 non-respondents by phone for a short non-response survey. The phone survey will only provide information about non-respondents with phone numbers. Information about non-respondents without phone numbers will come primarily from comparisons with non-respondents who are converted by the third mailing or who are part of the non-response sample.
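
A minimal sketch of the early-versus-late wave comparison referenced above, in Python with hypothetical WTP responses (the production analysis will use SAS on the actual survey data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stated-WTP responses (illustrative values only):
# wave 1 = first-mailing respondents, wave 2 = second-mailing respondents.
wave1 = rng.normal(loc=40.0, scale=15.0, size=300)
wave2 = rng.normal(loc=36.0, scale=15.0, size=150)

# A significant difference between waves would suggest that response
# propensity is correlated with the outcome (sigma_yp != 0 in Equation 4),
# i.e., potential non-response bias.
t_stat, p_value = stats.ttest_ind(wave1, wave2, equal_var=False)  # Welch's t-test
print(f"Wave 1 mean: {wave1.mean():.2f}, Wave 2 mean: {wave2.mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```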


Using the address of the respondent, we can also learn something about non-respondents from other data available at the county, city, or state level, such as the American Community Survey and the 2010 Census. Again, we must assume that non-respondents' characteristics, attitudes, and habits are similar to those of the people who live near them.


Adjusting for Non-response Bias

In the analysis, we will examine the impact of the variables collected in the survey on WTP estimates. We will assess, to the extent possible, whether factors that are significantly related to WTP also appear to be related to response rates.


To address the potential for non-response bias in our estimation sample, we will conduct additional tests and analyses for statistically significant differences between our samples' sociodemographic characteristics (e.g., age, income, gender, race, education) and those of the general populations from which they are drawn. A close correspondence in these characteristics will not necessarily ensure an unbiased sample for WTP estimation (nor would a lack of correspondence necessarily imply bias); however, it can at least be interpreted as lowering the potential for significant bias.
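
A minimal sketch of one such comparison, in Python with hypothetical sample counts and population shares for a single characteristic (education); the actual tests will cover each characteristic listed above:

```python
import numpy as np
from scipy import stats

# Hypothetical education distribution (illustrative values only).
observed = np.array([420, 610, 470])          # respondent counts by category
census_share = np.array([0.35, 0.40, 0.25])   # population shares, e.g., from ACS

# Goodness-of-fit test of the sample distribution against the population.
expected = census_share * observed.sum()
chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value indicates the sample's distribution differs from the population's.
```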


We will also compare the characteristics of respondents who returned their surveys at different times during the data collection period: those who returned surveys after the first mailing, those who returned surveys after the second mailing, and those who completed the non-response survey by phone. Although all of these people will be counted as respondents, those who respond later may share characteristics with non-respondents.





4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


In August 2014, OMB authorized a limited pretest of the survey instrument. The results of the pretest are described in Supporting Statement A and in the Glen Canyon Survey Pretest Report (added in ROCIS as a supplementary document). The surveys returned in the pretest showed the instrument to be understandable and the range used for the key price attribute to be appropriate for model estimation.

The survey components and sampling design for this collection are largely based upon the methods used by Welsh et al. (1995) in the Glen Canyon Dam, Colorado River Storage Project, Arizona Nonuse Value Study (OMB Control Number 1006-0016) and the Klamath Nonuse Valuation Survey (OMB Control Number 1090-0010) as implemented.


In order to further refine the draft survey instruments and procedures, we solicited feedback from three professionals with expertise in economic valuation, natural resource management and planning as well as survey design and methodology. These peer reviewers (Drs. John Loomis, Michael Welsh, and Lynne Koontz) were asked to provide comments concerning the structure of the survey and to provide feedback about the validity of each question and the clarity of instructions. Methodological and editorial suggestions by the reviewers were incorporated into the final versions of the surveys.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Person consulted on the statistical aspects of survey and sampling design

Dr. David Patterson

Department of Mathematical Sciences

University of Montana

Missoula, MT 59812

(406) 243-6748


Principal Investigator for data collection and analysis

Dr. John Duffield

Department of Mathematical Sciences

University of Montana

Missoula, MT 59812

(406) 243-5569





References Cited

Dillman, D.A. (2007). Mail and Internet Surveys: The Tailored Design Method, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc.


Greene, W.H. (2003). Econometric Analysis, 5th ed. India: Pearson Education.


Groves, R. M. and M. P. Couper (1998). Non-response in Household Interview Surveys. New York, Wiley.


Krinsky, I. and A.L. Robb. (1986). On approximating the statistical properties of elasticities. The Review of Economics and Statistics, 68(4): 715-719.


Mansfield, C., G. Van Houtven, A. Hendershott, P. Chen, J. Porter, V. Nourani, and V. Kilambi. (2012). "Klamath River Basin Restoration Nonuse Value Survey." RTI International.


McFadden, D. (1986). The choice theory approach to market research. Marketing Science, 5(4): 275-297.


Orme, B. (2006). Getting Started with Conjoint Analysis: Strategies for Product Design and Pricing Research. Madison, WI: Research Publishers LLC.


Stussman, B., J. Dahlhamer and C. Simile. (2005). The Effect of Interviewer Strategies on Contact and Cooperation Rates in the National Health Interview Survey. Paper presented at the Federal Committee on Statistical Methodology, Washington, DC.


United States Census Bureau. (2013). “State & County QuickFacts.” Available at http://quickfacts.census.gov/qfd/index.html


Vossler, Christian A. and Mary F. Evans. (2009). “Bridging the Gap between the Field and the Lab: Environmental Goods, Policy Maker Input, and Consequentiality.” Journal of Environmental Economics and Management, 58(3): 338-345.


Vossler, Christian A., Maurice Doyon and Daniel Rondeau. (2012). “Truth in Consequentiality: Theory and Field Evidence on Discrete Choice Experiments.” American Economic Journal: Microeconomics.

