
Supporting Statement for

Paperwork Reduction Act Information Collection Submission

OMB Control Number 1090-0010


Klamath Nonuse Valuation Survey


Terms of Clearance: None


B. Collections of Information Employing Statistical Methods


The agency should be prepared to justify its decision not to use statistical methods in any case where such methods might reduce burden or improve accuracy of results.


B1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The primary objective of the sample design is to obtain nationally representative samples of households both in the area near the Klamath River and in the rest of the United States. We propose a stratified sampling design in which households are selected from each stratum and surveyed by mail.


Target Population (respondent universe): The target population for the stated-preference survey is the household population of the 50 states and the District of Columbia (DC). There are about 105,480,000 households in the 50 states and DC according to the 2000 Census.


Sampling Unit: Residential mailing addresses in the 50 states and DC in the United States are the sampling units for the stated-preference survey.


Sample Frame: We will use the U.S. Postal Service (USPS) residential mailing address list as the sample frame of residential mailing addresses. This offers compelling time and cost savings compared with field enumeration and much better coverage than a phone book listing of households. We will purchase augmented residential mailing addresses from Marketing Systems Group (MSG), a private company with a nonexclusive license agreement with USPS. About 65% of MSG-augmented residential mailing addresses have associated phone number(s). Phone numbers can be used to prompt respondents to complete the mail survey and/or to collect information from nonrespondents.


Stratification: Households near the Klamath River and households farther away may respond to survey questions differently and may hold different opinions on dam removal. To capture these differences within the target population, we will place residential mailing addresses into three geographic strata. The three first-stage strata are defined as follows:


  • Stratum 1—Klamath River Area. This area includes 12 counties adjacent to the Klamath River: 5 in southern Oregon (Lake, Klamath, Douglas, Jackson, and Josephine counties) and 7 in northern California (Modoc, Siskiyou, Del Norte, Humboldt, Trinity, Shasta, and Tehama counties).

  • Stratum 2—Rest of Oregon and California, excluding the 12 counties in the Klamath River area. According to the current Settlement Agreement, the taxpayers of Oregon and California will bear the cost of removing the dams, while the taxpayers in the United States as a whole will fund much of the post-dam removal restoration activities. In addition, studies have found that people are much more willing to pay for projects in their state than outside their state.

  • Stratum 3—Rest of the United States excluding Oregon and California.


Within each first-stage stratum, we can further stratify counties into three substrata based on county population density, using the core-based statistical area (CBSA) classification, to balance the household sample between urban and rural areas. We discuss further stratification in detail in Section B2.


Sample Size and Expected Response Rate: Table B1 displays the number of selected residential mailing addresses, target respondents, and expected response rates in the three strata. We will select 13,000 residential mailing addresses; assuming 20% of the addresses are not valid, the sample will contain 10,400 valid addresses. The final sample (the 90% sample from Table A2) will consist of 9,360 valid mailing addresses, from which we expect to obtain approximately 2,700 responding households.


Table B1. Sample Size and Response Rates for Final Data Collection

Stratum | Stratum Description | Target Respondents | Estimated Response Rate | Final Sample (90% sample) | Useable Addresses | Safety Margin (a) | Sampled Units
Klamath River Area | 12 counties within a 100-mile radius of the Klamath River | 819 | 35.00% | 2,340 | 2,600 | 20.00% | 3,250
Rest of Oregon and California | Excluding the 12 counties in the Klamath River area | 948 | 27.00% | 3,510 | 3,900 | 20.00% | 4,875
Rest of the U.S. | Rest of the U.S., excluding Oregon and California | 948 | 27.00% | 3,510 | 3,900 | 20.00% | 4,875
Total | 50 states and DC | 2,714 | | 9,360 | 10,400 | | 13,000

a We added a 20% margin to the selected residential mailing addresses to account for bad addresses, mail delivery failures, etc.

B2. Describe the procedures for the collection of information including:

a. Statistical methodology for stratification and sample selection,

b. Estimation procedure,

c. Degree of accuracy needed for the purpose described in the justification,

d. Unusual problems requiring specialized sampling procedures, and

e. Any use of periodic (less frequent than annual) data collection cycles to reduce burden.



  a. Statistical Methodology for Stratification and Sample Selection: As discussed in Section B1, we stratify the residential mailing addresses into three first-stage strata. Within each first-stage stratum, we will further stratify residential mailing addresses by either explicit or implicit stratification to balance the residential mailing address samples geographically and to cover urban and rural areas well. Sorting the sample frame by frame attributes serves as implicit stratification.

  • Stratum 1—12-County Klamath River Area

Explicit Stratification: Each of the 12 counties will get one-twelfth of the sample.

Implicit Stratification: County, CBSA classification (metropolitan statistical area [MSA], micropolitan statistical area [µSA], and non-MSA/µSA), and ZIP code

  • Stratum 2—Rest of Oregon and California

Implicit Stratification: County, CBSA classification (MSA, µSA, and non-MSA/µSA), and ZIP code

  • Stratum 3—Rest of the United States

Explicit Stratification: Four substrata corresponding to the four Census regions (Northeast, South, Midwest, and West)

Implicit Stratification: State, county, CBSA classification (MSA, µSA, and non-MSA/µSA), and ZIP code



Sample Selection: We will select residential mailing addresses using a systematic random sampling method. Within each stratum or substratum (for Stratum 3), the sample frame of residential mailing addresses will be sorted by the implicit stratification variables. We will identify the universe (the total number of residential mailing addresses), divide it into evenly sized intervals, and select one residential mailing address at random within each interval. For example, Stratum 1 contains about 345,000 residential mailing addresses in the sample frame. We divide the frame into approximately 3,000 intervals, corresponding to the approximately 3,000 addresses to be selected, with about 115 residential mailing addresses in each interval. We then randomly select one residential mailing address from the roughly 115 addresses in each interval. A questionnaire will be sent to each selected residential mailing address. In the cover letter, we will ask the adult in the household with the most recent birthday to complete the survey questionnaire; if that individual is not available, any adult living in the household is eligible to respond.
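To make the interval-based selection concrete, the sketch below (in Python) implements the selection step described above on a hypothetical, already sorted frame; the frame contents and counts are placeholder stand-ins for the sorted MSG/USPS address list, not production code.

```python
# Illustrative sketch of the systematic selection described above: divide a
# sorted frame into equal intervals and draw one address at random from each.
import random

def systematic_select(frame, n_to_select, seed=None):
    rng = random.Random(seed)
    interval = len(frame) / n_to_select              # e.g., 345,000 / 3,000 = 115
    selected = []
    for k in range(n_to_select):
        start = int(round(k * interval))
        stop = int(round((k + 1) * interval))
        selected.append(frame[rng.randrange(start, stop)])
    return selected

# Toy frame of placeholder address IDs, assumed already sorted by county,
# CBSA class, and ZIP code (the implicit stratification).
frame = list(range(345_000))
sample = systematic_select(frame, 3_000, seed=1)
```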

  b. Estimation Procedure

After the survey data collection is completed, the data will be cleaned, coded, and edited. RTI will also calculate sampling weights. Point estimates and their estimated variances for the stated-preference survey data will be calculated using SUDAAN®, a software package designed by RTI statisticians to calculate variances appropriately for complex survey data; the calculations will account for the sampling weights so that inferences can be made about the target populations. The software will also perform statistical tests on responses to key survey measures among subpopulations of interest, for example, the three primary design strata (Klamath River Area, rest of Oregon and California beyond the Klamath River Area, and rest of the United States).


We will generate statistics to summarize and compare responses, response rates, and individual characteristics across groups (in particular, across the strata defined in the sampling plan). To make inferences about the target populations or subpopulations, we will apply sampling weights that account for the probability of inclusion in the survey and for nonresponse. A poststratification adjustment will also be generated to correct for coverage bias and nonresponse bias.
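As a rough illustration of the weighting logic (the production weights will be computed in SUDAAN following the project weighting specifications), the sketch below shows a base design weight, a nonresponse adjustment, and a poststratification ratio adjustment. The Stratum 1 figures reuse the numbers in Section B1 and Table B1; the poststratification control total would come from an external source such as the American Community Survey.

```python
# Sketch of the weighting steps described above; not the production
# weighting specification.

def base_weight(frame_size, n_selected):
    # Design weight: inverse of the within-stratum selection probability.
    return frame_size / n_selected

def nonresponse_adjusted(base_w, n_eligible, n_respondents):
    # Inflate respondents' weights so they also represent nonrespondents.
    return base_w * (n_eligible / n_respondents)

def poststratified(adj_w, control_total, weighted_total):
    # Ratio-adjust to an external household count for the stratum.
    return adj_w * (control_total / weighted_total)

w0 = base_weight(345_000, 3_250)           # ~106 households per selected address
w1 = nonresponse_adjusted(w0, 2_600, 819)  # usable addresses vs. expected respondents
```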



Estimating Households' Total Willingness-to-Pay (WTP)

We will use the SP data to estimate households' total value for changes associated with removing the Klamath dams and related river ecosystem restoration measures. To analyze the data from the conjoint/discrete choice experiment questions, we will apply a random utility modeling (RUM) framework, which is commonly used to model discrete choice decisions in SP studies. The RUM framework assumes that survey respondents implicitly assign utility to each choice option presented to them. This utility can be expressed as

Uij = V(Xij, Zi; βi) + eij,

where Uij is individual i's utility for choice option (i.e., restoration option) j. V() is the nonstochastic part of utility, a function of Xij, which represents a vector of attribute levels for option j (including its cost) presented to the respondent; Zi, a vector of personal characteristics; and βi, a vector of attribute-specific preference parameters. eij is a stochastic term, which captures elements of the choice option that affect individuals' utility but are not observable to the analyst. On each choice occasion, respondents are assumed to select the option that provides the highest level of utility. By presenting respondents with a series of choice tasks and options with different values of Xij, the resulting choices reveal information about the preference parameter vector.

For the initial and most basic analysis, we assume the following form for utility:

Uij = β1'Xij + β2(yi − Cij) + eij,

where yi is a measure of the respondent's household income, and Cij is the cost of option j to respondent i (in this formulation, the cost attribute is separated from the other attributes in Xij). The parameter vector, β, is assumed to be the same for all respondents and includes two main components: β1, the vector of marginal utilities associated with each attribute in Xij, and β2, the marginal utility of income.

Conditional Logit Estimation

To estimate the parameters of this simple model, we will use a standard conditional logit (CL) model (McFadden, 1984), which assumes the disturbance term follows a Type I extreme-value error structure and uses maximum-likelihood methods to estimate β1 and β2. One of the well-recognized limitations of the CL model is the assumed property of Independence of Irrelevant Alternatives (IIA), which often implies unrealistic substitution patterns between options, particularly those that are relatively similar (McFadden, 1984); nevertheless, it is a computationally straightforward estimation approach that can provide useful insights into the general pattern of respondents' preferences, trade-offs, and values.

The parameter estimates from the CL model will then be used to estimate the average marginal value of each noncost attribute:

MWTPk = β1k/β2,

where k refers to the kth element of the X and β1 vectors. They will also be used to estimate the average WTP for acquiring the combination of attributes associated with one program (X1) compared to the attributes of another program (e.g., the no action alternative) (X0):

WTP = β1'(X1 − X0)/β2.

The standard errors and confidence intervals for these value estimates will be estimated using the delta method (Greene, 2003) or the Krinsky and Robb (1986) simulation method.
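As an illustration of the Krinsky and Robb (1986) procedure, the sketch below draws parameter vectors from the estimated sampling distribution of the CL coefficients and builds an empirical distribution of marginal WTP. The coefficient and covariance values shown are placeholders, not study estimates.

```python
# Sketch of the Krinsky and Robb (1986) simulation for a marginal WTP
# confidence interval, assuming the CL coefficient vector and its covariance
# matrix are already estimated. All numbers below are illustrative only.
import numpy as np

def krinsky_robb_mwtp(beta_hat, vcov, attr_index, cost_index,
                      n_draws=10_000, seed=0):
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(beta_hat, vcov, size=n_draws)
    mwtp = draws[:, attr_index] / draws[:, cost_index]   # beta_1k / beta_2
    lo, hi = np.percentile(mwtp, [2.5, 97.5])
    return mwtp.mean(), (lo, hi)

beta_hat = np.array([0.40, 0.25, 0.015])      # two attribute terms + cost term
vcov = np.diag([0.0100, 0.0080, 0.000004])    # placeholder covariance matrix
mean_mwtp, ci = krinsky_robb_mwtp(beta_hat, vcov, attr_index=0, cost_index=2)
```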

The welfare analysis will also need to account for the timing of the cost variable. The current version of the survey expresses the payment vehicle as an annual payment lasting 20 years. For welfare estimation, we will want to convert this variable (or the WTP estimates directly) into a permanent annual payment. This annualization procedure requires an assumption about the appropriate discount rate. For example, Banzhaf et al. (2004) address this same issue in their CV study of Adirondack lake water quality improvements by varying the assumed discount rate between 3% and 5%. We will use a similar approach but consider a wider range of discount rates, 2% to 14%, based on advice from our expert reviewers. A question at the end of the survey asking respondents their up-front WTP for a device that would lower their electric bills in the future will provide some guidance on time preferences.
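One simple way to implement this annualization, under the assumption of a constant discount rate r, is to note that an annual payment A lasting 20 years has the same present value as a permanent annual payment of A*(1 − (1 + r)^-20). The sketch below applies this conversion across discount rates in the range discussed above; the $100 payment is an arbitrary example.

```python
# Sketch of one possible annualization: convert a 20-year annual payment into
# the permanent annual payment with the same present value, assuming a
# constant discount rate r.
def permanent_equivalent(annual_payment_20yr, r, years=20):
    return annual_payment_20yr * (1 - (1 + r) ** -years)

for r in (0.02, 0.05, 0.14):      # within the 2%-14% range discussed above
    print(r, round(permanent_equivalent(100.0, r), 2))
```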

The analysis will also need to account for the direct preference effect associated with the NO ACTION (i.e., opt-out) alternative. This alternative is explicitly characterized in terms of the fish population (30% decline), extinction risk (suckers—high risk; coho salmon—moderately high risk), and cost ($0) attributes; however, it also differs systematically from all of the ACTION PLAN alternatives. Dam removal, water assurances, and fish restoration are never part of the NO ACTION alternative and are always included in the ACTION PLAN alternatives. Therefore, to account for the preference effect of action versus no action, the analysis will include an alternative-specific constant for the NO ACTION alternative.

To examine, test for, and estimate differences in preferences across our sample, we will estimate both separate and pooled models for subsamples of the data and test the restrictions of the pooled models using log-likelihood ratio tests. We will also estimate varying parameter models by interacting the attribute vector (Xij) with elements of the respondent characteristics vector (Zi). The parameter estimates from the interaction terms will allow us to examine whether and how the marginal values associated with program attributes vary systematically with respect to respondent characteristics.

We will specifically use a varying parameter approach to investigate the extent of the market for various program attributes. That is, following the example of Concu (2007) and Concu (2009), we will interact the attribute vector with measures (and functions) of respondents’ geographic distance from the Klamath River. The expectation is that these interactions will, if anything, reveal a decline (i.e., decay) in the absolute value of the estimated marginal utilities as distance increases; however, this will be a testable hypothesis using these models.

Based on the findings of these pooling tests and varying parameter models, we will determine whether and how households' WTP for Klamath Basin restoration varies across the population according to location (strata and distance) and other observable characteristics. We will use the model results to predict the average household's total WTP for different subgroups and to demonstrate how the benefits of Klamath Basin restoration are distributed across different subsectors of the population. To estimate aggregate benefits, we will segment the U.S. population according to our sampling strata and multiply the size of these populations (number of households) by the predicted average WTP for these subgroups.

Mixed Logit Estimation

In addition to analyses using CL, we will estimate mixed logit (ML) models (Revelt and Train, 1998). Although these models are somewhat more complex, they offer several advantages. First, in contrast to the CL, the mixed logit is not subject to the restrictive IIA assumption. Second, ML specifically accounts for unobserved heterogeneity in tastes across subjects. It introduces subject-specific stochastic components for each element of β, as follows:

βi = β + ηi,

where ηi is a stochastic component of preferences that varies across respondents according to an assumed probability distribution. Third, it can be used to capture within-subject correlation in responses (i.e., panel structured data), which is important for conjoint/discrete choice experiments that involve multiple choice tasks per respondent (as in this study).

The main difference in the output of ML models compared to CL models is that ML provides the ability to characterize the unobserved heterogeneity in respondents' preferences. This can be especially important if we believe there are differences in how people trade off attributes of the plans being evaluated. The statistical model allows the model parameters (elements of the β vector) to have a stochastic component. The standard deviation estimates can be interpreted as measures of attribute-specific preference heterogeneity. As a result, the Revelt and Train methodology allows for the development of estimates of both the mean and the standard deviation of each parameter treated as random.
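The sketch below illustrates the simulation step at the core of classical ML estimation: random coefficient vectors βi = β + ηi are drawn, conditional logit probabilities are computed for each draw, and the draws are averaged. The dimensions, normal mixing distribution, and parameter values are illustrative assumptions only.

```python
# Sketch of simulated mixed logit choice probabilities for one choice task.
# The attribute matrix, means, and standard deviations are placeholders.
import numpy as np

def simulated_choice_probs(X, beta_mean, beta_sd, n_draws=1_000, seed=0):
    """X: (n_alternatives, n_attributes) attribute matrix for one task."""
    rng = np.random.default_rng(seed)
    probs = np.zeros(X.shape[0])
    for _ in range(n_draws):
        beta_i = rng.normal(beta_mean, beta_sd)   # one draw of beta_i = beta + eta_i
        v = X @ beta_i
        expv = np.exp(v - v.max())                # stabilize the exponentials
        probs += expv / expv.sum()
    return probs / n_draws

X = np.array([[1.0, 0.0, 0.0],                    # e.g., a NO ACTION row
              [1.3, 1.0, -120.0]])                # e.g., an ACTION PLAN row with cost
p = simulated_choice_probs(X, beta_mean=[0.4, 0.8, 0.01], beta_sd=[0.2, 0.3, 0.0])
```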

When applying ML models to estimate WTP, one must make additional judgments regarding model specification (Balcombe, Chalak, and Fraser, 2009), including the following:

  • Which coefficients should be assumed to be fixed or randomly distributed?

  • What statistical distribution(s) should be used for the random parameters?

  • Should the model be estimated in “utility space” or “WTP space”?

In addition, there are two main approaches to estimating ML models: simulation-based maximum likelihood estimation and Bayesian (i.e., hierarchical Bayes [HB]) estimation. In general, the two methods have equivalent asymptotic properties, but they use different estimation procedures that offer advantages and disadvantages for addressing the specification issues described above. The two estimation procedures are discussed in depth in Train (2001) and Huber and Train (2001).

The analytical expressions for WTP involve ratios of coefficients, which can be problematic, leading to unstable or implausible WTP distributions when both the numerator and denominator are assumed to be randomly distributed. One approach often used to address this issue is to assume that the income/cost parameter (β2) is fixed. An alternative approach is to estimate the model in "WTP space":

Uij = λi(ωi'Xij − Cij) + εij,

where λi = β2i/μi, ωi = β1i/β2i, and εij = eij/μi, such that μi is the scale parameter and ωi is the vector of marginal WTP values for the attributes in X (Scarpa, Thiene, and Train, 2008). In this framework, one can begin by directly specifying the distributions of marginal WTP (ωi) and λi; however, the result is a model that is nonlinear in the utility parameters. One advantage of HB estimation is that this type of nonlinearity is much easier to accommodate than in classical ML estimation. For this project, we will evaluate the ML approaches and use one or more in the analysis.

Use and Nonuse Values

By itself, the conjoint analysis will provide estimates of household-level total value (WTP) for different Klamath River changes and of the distribution of WTP across subpopulations of interest (including users and nonusers). For users of Klamath River resources, these value estimates will include both use and nonuse values. To investigate the impact of use on nonuse values, we propose to include additional questions in the survey regarding (1) actual use-related behaviors (e.g., recreational fishing) for the Klamath River and nearby water resources and (2) stated future behaviors under alternative river condition scenarios. The information on use of the Klamath River Basin will be used to create interaction terms for the WTP regressions. These interaction terms will provide information on how use values affect nonuse values, but we will not attempt to estimate nonuse and use values separately.

  c. Degree of accuracy needed for the purpose described in the justification

We start with power calculations to detect differences in proportions between groups. The calculation is relevant for looking at differences in attitudes, opinions, uses of the river, and demographics when these variables are framed as a yes/no response. In addition, as discussed above, the first conjoint question will be the same on all the surveys, and we will want to examine the percentage of respondents who select the plan versus no action and to compare support for the plan across different groups.

To evaluate the efficiency of a sample design, a design effect (deff) is used. Deff measures the precision gained or lost by using a more complex design instead of a simple random sample. It is a function of the clustering effect and the unequal weighting effect (UWE) and can be defined as deff = UWE*(1 + (m − 1)*ICC), where m is the number of interviews within a cluster, ICC is the intracluster correlation coefficient, which measures the degree of similarity among elements within a cluster, and UWE measures the variation in the sample weights.

With our stratified random sample design, the UWE is the only source of the deff. We conservatively assume the design effect would be about 1.2. The statistical power to detect a 10 percentage point difference between groups of interest, along with the precision of the estimates, is illustrated in Table B2. In calculating statistical power, we assumed a 95% confidence level and 600 responding households in the groups being compared (i.e., residents of the three primary strata in Table B2).

Table B2. Statistical Power and Precision of Estimates (a)

True Proportion | Users of Klamath River Resource: n | Users: Estimated Deff | General Public from Rest of the U.S.: n | General Public: Estimated Deff | SE | RSE | Statistical Power of Detecting a 10-Percentage-Point Difference
20% | 600 | 1.2 | 600 | 1.2 | 0.0179 | 8.95% | 95.7%
30% | 600 | 1.2 | 600 | 1.2 | 0.0205 | 6.83% | 91.5%
40% | 600 | 1.2 | 600 | 1.2 | 0.0219 | 5.48% | 89.2%
50% | 600 | 1.2 | 600 | 1.2 | 0.0224 | 4.48% | 89.2%

a The proposed sample design and sample size allow DOI to detect a 10 percentage point difference between groups of interest with at least approximately 90% statistical power for various survey outcomes.
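The sketch below reproduces the Table B2 calculations under the stated assumptions: n = 600 per group with deff = 1.2 (an effective n of 500), a two-sided test at the 5% level, and a normal approximation for the difference in proportions when detecting a 10-percentage-point shift.

```python
# Sketch reproducing the standard error, RSE, and power figures in Table B2.
from math import sqrt
from statistics import NormalDist

N_PER_GROUP, DEFF, ALPHA = 600, 1.2, 0.05
n_eff = N_PER_GROUP / DEFF                      # effective sample size = 500
z_crit = NormalDist().inv_cdf(1 - ALPHA / 2)    # about 1.96

for p in (0.20, 0.30, 0.40, 0.50):
    se_single = sqrt(p * (1 - p) / n_eff)                        # SE column
    se_diff = sqrt(p * (1 - p) / n_eff + (p + 0.1) * (0.9 - p) / n_eff)
    power = NormalDist().cdf(0.1 / se_diff - z_crit)              # power column
    print(p, round(se_single, 4), round(se_single / p, 4), round(power, 3))
```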

Power calculations for estimating the parameters from the conjoint data are more difficult to develop. Orme (1998) provides an equation to calculate sample size based on the number of tasks each respondent completes, the number of alternatives in each task (one in our case), and the maximum number of levels for any one attribute.1 Based on this equation, if we include two tasks per respondent, with one choice and a maximum of four levels for any one attribute, we need a sample size of approximately 500 respondents.

We also used an approach from Hensher, Rose, and Greene (2005). If we want to estimate the full set of main effects and the interaction effects, we will need 28 degrees of freedom based on the calculation (L1 − 1)*(L2 − 1)*(L3 − 1)*1*A + 1 = 3*3*3*1 + 1 = 28, where L1 is the number of attribute levels for the salmon and steelhead population, L2 is the number of levels for suckers, and L3 is the number of levels for coho salmon; cost will be modeled as a continuous variable, and A is the number of attributes.

Table B3 provides the sample size estimates for both the total number of choice questions (N) and the number of respondents if each individual answers two questions (N/R). The true proportion of respondents who will select a plan over no action or over another plan is unknown, so the table provides three different proportions. The absolute difference is the allowable deviation between the estimated proportion and the true proportion; in this case, it is 0.05. The variable q = 1 − true proportion, and Z2 is the square of the inverse standard normal distribution function for a 95% confidence level (1.96² ≈ 3.84).
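The sketch below shows one reading of the Hensher, Rose, and Greene expression that reproduces the values in Table B3, treating the allowable deviation a as a deviation relative to the true proportion p: N ≥ Z²(1 − p)/(p·a²), with respondents = N/R. This interpretation is an assumption on our part, shown only to make the table's arithmetic explicit.

```python
# Sketch of the sample-size arithmetic behind Table B3, under the assumption
# that a = 0.05 is the allowable deviation relative to the true proportion p.
def hensher_sample_size(p, a=0.05, z2=3.84, questions_per_person=2):
    n_questions = z2 * (1 - p) / (p * a * a)
    return round(n_questions), round(n_questions / questions_per_person)

for p in (0.5, 0.3, 0.6):
    print(p, hensher_sample_size(p))   # 1536/768, 3584/1792, 1024/512 as in Table B3
```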

The Hensher et al. formula yields much larger required sample sizes than the Orme formula under some assumptions. An unpublished meta-analysis of conjoint surveys by RTI found that sample sizes of 250 to 300 were sufficient for a survey with 10 conjoint questions per respondent.2 Our survey will have two conjoint questions, which would imply that a sample size of 600 would be sufficient (without accounting for the additional information one gains from having more respondents rather than more questions per respondent).

Table B3. Sample Sizes for Different True Proportions

Alternative | True Proportion | Absolute Difference (a) | q | Choice Questions per Person (R) | Z2 | Total Choice Questions (N) | Total Number of Respondents (N/R)
Plan A | 0.5 | 0.05 | 0.5 | 2 | 3.84 | 1,536 | 768
Plan A | 0.3 | 0.05 | 0.7 | 2 | 3.84 | 3,584 | 1,792
Plan A | 0.6 | 0.05 | 0.4 | 2 | 3.84 | 1,024 | 512



  d. Unusual problems requiring specialized sampling procedures

None.

  e. Any use of periodic (less frequent than annual) data collection cycles to reduce burden

This is a one-time data collection.



B3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


Nonresponse bias is the expected difference between an estimate from the sample members who respond to the survey and an estimate from the target population, that is:

Y = Pr·Yr + Pnr·Ynr (1)

Equation 1 shows that an overall population estimate, Y, depends on the proportions of respondents and nonrespondents (denoted Pr and Pnr, respectively, where Pr + Pnr = 1) and on the mean responses of respondents and nonrespondents (denoted Yr and Ynr, respectively). Bias in an estimate due to nonresponse is given by the following equation:

Bias(Yr) = Yr − Y, (2)

The extent to which nonresponse bias occurs ultimately depends on (1) the extent of missing data and (2) the difference in an estimate between respondents and nonrespondents. The bias can be expressed in the following equation:

Bias(Yr) = Pnr·(Yr − Ynr) (3)

Equation 3 reveals that the nonresponse bias depends on two components: the nonresponse rate (Pnr) and the difference between the mean responses of respondents and nonrespondents (Yr − Ynr). If both components are small, the bias should be negligible. For the bias to be significant, the nonresponse rate must be large and/or the difference between the mean responses must be large.

The likelihood (propensity, probability) of responding to the survey may be related to sampling unit characteristics, such as age; for example, young people may be less likely to respond than older people. In that case, both components of nonresponse bias (Pnr and Yr − Ynr) would be magnified, because young people would be less likely to respond and, if they do respond, they may respond differently from older people. The nonresponse bias can then be expressed in another way, as a function of how correlated the response propensity is with a survey outcome variable:

Bias(Yr) ≈ Cov(y, p)/P (4)

where p is the response propensity, P is the mean propensity in the target population over sample and recruitment realizations (given the sample design), and Cov(y, p) is the covariance between the survey outcome variable and the response propensity. The stronger the relationship (Cov(y, p)) between the survey outcome variable and response behavior, the larger the bias. Equation 4 also reveals that, within the same survey, different survey outcomes can be subject to different nonresponse biases. Outcomes unrelated to the propensity to respond can be immune from the biasing effects of nonresponse, while others can be exposed to large biases.

Equations 3 and 4 also imply that higher response rates do not necessarily mean lower nonresponse bias for any given survey or estimate. For example, if response rates are increased using methods that are not disproportionately attractive to low-propensity groups, nonresponse bias may increase despite the higher response rates. It should be noted that the two views of nonresponse bias in Equations 3 and 4 implicitly assume that all other sources of bias, especially measurement error, are absent.
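As a small numeric illustration of Equation 3 (with hypothetical values rather than study data): with a 27% response rate, as expected in Strata 2 and 3, Pnr = 0.73, so a 5-percentage-point gap between respondent and nonrespondent means on a yes/no item would translate into a bias of about 3.7 percentage points.

```python
# Hypothetical illustration of Equation 3; the values are not study estimates.
p_nr = 0.73                 # nonresponse rate implied by a 27% response rate
y_r, y_nr = 0.40, 0.35      # assumed respondent and nonrespondent means
bias = p_nr * (y_r - y_nr)  # Equation 3: Bias(Yr) = Pnr * (Yr - Ynr)
print(bias)                 # 0.0365, i.e., about 3.7 percentage points
```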

The study design aims to address the major sources of survey error that pose a substantial threat to the accuracy of survey estimates. Our approach to nonresponse bias includes three goals: increasing response rates, identifying nonresponse bias, and correcting for nonresponse bias post-data collection. We discuss each element in more detail below.

Increasing Response Rates

Coverage Error

First, the target populations need to be sufficiently covered—all individuals in the target populations should have a nonzero probability of being included. We address potential coverage error by using address-based sampling and selecting the survey samples from the USPS's Delivery Sequence File (DSF); the DSF contains all addresses to which mail is delivered in the United States. Using mail as the primary contact method avoids the undercoverage problems found with other methods because the frame includes not only standard residential addresses but also other address types, such as post office boxes and general delivery. Unlike other sampling frames, this also provides the ability to stratify the sample accurately by geography.

Encouraging Response to the Mail Survey

Once selected, sample members may fail to respond. Indeed, response rates in household surveys have been declining (Groves and Couper, 1998; Stussman, Dahlhamer, and Simile, 2005), and nonresponse poses a substantial threat to survey inference. This is particularly so when there is reason to suspect that nonrespondents are systematically different from respondents. We address this potential through three different design features: providing multiple modes of data collection, offering multiple methods of contacting the sample members, and offering incentives.

Table B4 presents a summary of the methods we will use to encourage response to the mail survey. These methods include good practice in survey design, reminders and follow-ups, and efforts to reduce barriers to response. Actions to increase the response rate must be balanced against introducing other forms of bias into the survey responses. In addition to coverage and nonresponse, a third major source of survey error likely in this study is measurement error once participation is gained. Many respondents may perceive some questions, particularly items on salient environmental issues, as having socially desirable answers. The use of interviewers has been found to lead to more socially desirable responses (Tourangeau and Smith, 1996), posing a potential trade-off between using interviewers to reduce nonresponse and preserving self-administration of the survey. In addition, the short time frame for data collection limits our ability to follow up with reminders.

Table B4. Encouraging Response to the Mail Survey (actions that increase response rate or decrease sample selection, and their expected results)

Action: Use address-based sampling, selecting the survey samples from the USPS's Delivery Sequence File.
Expected result: Maximize the number of individuals in the sampling frame to reduce coverage error.

Action: Design and pretest the survey instrument to eliminate confusing, redundant, or unnecessary information and questions.
Expected result: Respondents are more likely to respond to a survey that is easy to answer.

Action: Include a $2 incentive in the initial mailing.
Expected result: Based on research, an incentive will increase the response rate, especially among less interested respondents.

Action: Send a prenotification postcard.
Expected result: Prenotification letters or postcards are recommended to increase response rates.

Action: Ask the adult 18 years or older with the most recent birthday to complete the survey.
Expected result: Although with a mail survey we cannot verify who took the survey, the request should help limit the sample selection that results when more interested individuals are more likely to respond.

Action: Send a reminder postcard with the address for the web version of the survey.
Expected result: A reminder postcard increases response rate; offering another mode (a web survey) also increases response rate.

Action: Send a 2nd mailing of the survey several weeks after the first.
Expected result: A 2nd mailing of the survey instrument increases response rate.

Action: Send a 3rd mailing of a reminder letter with a toll-free telephone number and email address for respondents to request another copy of the survey.
Expected result: A reminder letter increases response rate.

Action: Nonresponse follow-up: for 20% of nonrespondents, send a FedEx or Priority Mail letter with a 6-question version of the survey offering a $20 incentive to complete. Several days after the FedEx letters are sent, place a telephone prompt (live person, not recorded) to reiterate the $20 incentive, answer questions, and see whether the respondent needs a new survey instrument. The follow-up call will also be used to collect responses to three questions.
Expected result: A new mode of contact (FedEx or Priority Mail), a shorter survey, and a new, much higher incentive ($20) should increase the response rate among the sample of nonrespondents selected for this treatment; the phone prompt should also increase the response rate.

Action: Offer the web option via reminder postcard after the first mailing and with the second and third mailings.
Expected result: The web option provides an alternative response mode that should increase response rate.

Action: Put a toll-free number in the survey and in the letter for people to call with questions.
Expected result: Helps minimize barriers that prevent response to the survey.


Studies of incentives consistently demonstrate increased cooperation (for reviews, see Heberlein and Baumgartner [1978] and Singer et al. [1999]). The $2 incentive is provided as a token of appreciation aimed at building, to the extent possible, a social exchange between the organizations making the survey request and the individual (Dillman, 1978; Dillman, 2000). Furthermore, incentives have been shown to reduce nonresponse bias by increasing cooperation particularly among those who are not interested or involved in the survey topic (Groves, Singer, and Corning, 2000; Groves, Presser, and Dipko, 2004; Groves et al., 2006). Thus, the use of incentives is instrumental in increasing response rates and reducing nonresponse bias.

In addition to these aspects of the design, we also describe several more nuanced elements aimed at reducing nonresponse, as well as sensitivity to trade-offs with other sources of error. Offering the Web as an option allows us to collect data from individuals who are unlikely to complete a mail survey for a number of reasons—whether related to age, a mobile lifestyle, or something else—reasons that can also be related to use of and value given to the Klamath River Basin. Finally, changing the survey protocol for a subsample of nonrespondents can be a cost-efficient method to obtain estimates of nonresponse bias and reduce nonresponse bias in final survey estimates. Changing the protocol requires using a more effective protocol that can garner participation from those who were reluctant under the original protocol. Higher incentives have been demonstrated to achieve this, supported by theory and empirical evidence (Groves, Singer, and Corning, 2000; Groves, Presser, and Dipko, 2004; Groves et al., 2006), and such a change is effectively implemented through a phased design employing subsampling of nonrespondents (Peytchev, Baxter, and Carley-Baxter, 2009). To entice nonrespondents, we will send a Federal Express letter offering a $20 incentive for completing a shorter version of the survey to a sample of 20% of the nonrespondents and then follow up with a phone call for the estimated 65% with phone numbers. Incentives have been found to disproportionately increase participation among likely nonrespondents, particularly those who are less interested in the topic. The phone call will be made by a live person, not a recording; if the respondent cannot be reached after two calls, the interviewer will leave a message.

Identifying Possible Nonresponse Bias

Our sample design and data collection methods allow us to conduct a thorough nonresponse bias analysis after data collection is completed to assess whether nonresponse bias exists. This can be done in three ways:

  • The respondents to the later mailings can be viewed as less cooperative than the respondents to the first mailing. Respondents to later mailings tend to have low response propensity, and they are likely to have characteristics similar to nonrespondents. Comparing the estimates for respondents to the second mailing with those for respondents to the first mailing allows us to measure, indirectly, the correlation between response propensity and responses to survey outcome variables (Cov(y, p) in Equation 4).

  • After the third mailing, we will draw a sample of 20% of the nonrespondents. The nonresponse sample will be sent a letter offering a $20 incentive to return a shorter version of the survey.

  • The mailing to the nonresponse sample will be followed by phone calls to nonrespondents we have numbers for. We will attempt to collect responses to three questions over the phone from nonrespondents.

The discussion above focuses on unit nonresponse (a sampled unit fails to respond to the survey). Item nonresponse (a respondent fails to answer a specific question) can also bias survey estimates. Nonresponse bias caused by item nonresponse can be assessed in a similar way as bias from unit nonresponse. In most surveys, item response rates are high enough that nonresponse bias due to item nonresponse is of less concern.

Table B5 summarizes our approach to identifying the presence of nonresponse bias.

Table B5. Identifying Possible Nonresponse Bias

Action: Compare sample characteristics and survey responses of people who respond after the first mailing, after the second mailing, and after the third mailing/phone prompt, for demographic variables as well as opinions, recreational use of rivers, and attitudes.
Assumption: Later respondents will be more like nonrespondents. If later respondents differ from earlier respondents, then nonrespondents probably also differ.

Action: Compare sample characteristics and survey responses for the shortened version of the survey completed by people who respond after the Federal Express letter with the offer of a higher incentive and the phone prompt.
Assumption: If the answers from the nonrespondent sample differ from those of respondents to the first three mailings, then true nonrespondents are likely to differ from respondents.

Action: Compare distributions of variables like income, not just medians.
Assumption: Distributions of variables may differ, in addition to means and medians.

Action: Compare the estimated WTP of people who respond after different mailings and after phone prompts.
Assumption: If WTP is the same across groups, nonresponse bias is less likely; if WTP differs across groups, nonresponse bias is more likely. Again, the assumption is that respondents who reply later, after more prompts, are more similar to nonrespondents.

Action: Collect responses to three questions from nonrespondents during the phone prompting that will accompany the third and fourth mailings.
Assumption: Respondents who answer the questions over the phone but do not return the survey can be compared with respondents who answer the questions over the phone and do return the survey, as well as with responses from the rest of the sample.

Action: Use mailing addresses to identify Census tracts. Regress response rates by tract against Census data.
Assumption: Census tract-level variables may predict response rate.

Action: Use data on attitudes, opinions, and recreational activities from other national surveys to compare with our sample.
Assumption: Response rate by geographic location may be correlated with measures of opinions, attitudes, and recreation from other surveys. If the response rate in a geographic location is related to these variables, that suggests the possibility of nonresponse bias.

Action: Use response rates to the Census by county or zip code (or groups of counties) (see http://2010.census.gov/2010census/take10map/) and compare them with response rates to the survey.
Assumption: Some counties have relatively higher or lower response rates to the Census. If the response rate to the Klamath survey in a county is low relative to that county's response rate to the Census, this may indicate nonresponse bias. The assumption is that response rates to the Census represent a gold-standard benchmark.


As part of the nonresponse follow-up, we will telephone nonrespondents for whom we have phone numbers. During these phone calls we will ask three questions that can be used later if the nonrespondents are not converted and do not return their survey. Two of the questions are taken from the survey instrument. Because we will ask the questions of all nonrespondents, we can compare the responses to the three questions between the nonrespondents who are converted (return the survey) and those who are not. We can also compare the phone responses to the two questions from the survey with the responses from the whole sample.

Information can be collected over the phone on key items from the survey. Unfortunately, the key items in the survey, the SP conjoint questions, cannot be administered over the phone. Instead, as part of the phone prompt we will ask the following two questions from the survey instrument:

  • Have you ever heard of the Klamath River that runs through Southern Oregon and Northern California?

  • We are interested in how people are getting along financially these days. Would you say that you and your family are better off, just about the same, or worse off financially than you were a year ago?

In addition, we will ask the people who have not returned the survey:

  • May I ask why?

1 MISPLACED THE SURVEY

2 TOO BUSY

3 NOT INTERESTED IN THE TOPIC

4 THE SURVEY IS TOO LONG

5 THE SURVEY IS TOO COMPLICATED

6 OTHER REASON

98 Don’t know

99 Refusal

The phone survey will only provide information about nonrespondents with phone numbers. Information about nonrespondents without phone numbers will come primarily from comparisons involving respondents who are converted by the third mailing or as part of the nonresponse follow-up sample.

Using the sampled addresses, we can learn something about the nonrespondents from other data available at the county, city, or state level. Here we have to assume that nonrespondents' characteristics, attitudes, and habits are similar to those of the people who live near them. Using data from the American Community Survey and the 2000 Census (although the Census data may be out of date), we can compare data on age, race, ethnicity, employment, education, and income.

Attitude and opinion differences may also account for nonresponse bias. Measures of attitudes and opinions are more difficult to find at the county level. Information about political leaning, voting patterns, the major industries in the area (especially if the industries are related to agriculture, commercial fishing, recreation, or electric power generation), and whether the area is urban, suburban, or rural will be examined for its impact on response rate. Because the Klamath sample will most likely have few respondents in any given county, we may have to aggregate geographic areas to obtain sufficient sample sizes to conduct comparisons.

Adjusting for Nonresponse Bias

Ultimately, we are interested in whether nonresponse bias affects our estimates of WTP. In the analysis, we will examine the impact of the variables collected in the survey on WTP estimates. We will assess to the extent possible whether factors that are significantly related to WTP also appear to be related to response rates.

Postsurvey adjustments can help reduce bias from both coverage and nonresponse when auxiliary information related to the survey outcomes is available. We will create separate adjustments for each source of error. Response propensity models will be used to create nonresponse weighting adjustments, employing information available at the Census block group level. Poststratification will be employed to correct for coverage and sampling error, using aggregate data for each of the three strata. The poststratification can include an indicator for whether the phone number for the household was available to account for the possible bias introduced by telephone follow-up reminders.
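The sketch below shows one common way to implement the response-propensity adjustment described above, assuming a respondent indicator and block-group-level covariates are available for every selected address. The variable names are placeholders; the production adjustment would follow the project's weighting specifications.

```python
# Sketch of a response-propensity nonresponse adjustment (not the production
# specification). X_covariates: numpy array of block-group-level predictors
# for all selected addresses; responded: 0/1 indicator; base_weights: design weights.
import numpy as np
import statsmodels.api as sm

def propensity_adjusted_weights(X_covariates, responded, base_weights):
    design = sm.add_constant(X_covariates)
    model = sm.Logit(responded, design).fit(disp=0)      # response-propensity model
    p_hat = model.predict(design)
    adjusted = base_weights / np.clip(p_hat, 0.05, 1.0)  # cap extreme adjustments
    return adjusted[responded == 1]                      # weights for respondents only
```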

To address the potential for nonresponse bias in our estimation sample, we will also conduct additional tests and analyses. First, for each of the main strata, we will compare and test for statistically significant differences between our samples' sociodemographic characteristics (e.g., age, income, gender, race, education) and those of the general population from which they are drawn. Comparisons are usually made to Census data; however, because the available Census data are now 10 years old, we will use the American Community Survey. A close correspondence in these observable characteristics will not necessarily ensure an unbiased sample for WTP estimation (nor would a lack of correspondence necessarily imply bias); however, it can at least be interpreted as lowering the potential for significant bias.

We will also compare the characteristics of respondents who returned their surveys at different times during the data collection period. We can compare individuals who returned their surveys after the first mailing, after the second mailing, and after the reminder phone call. Although all of these people are responders, those who respond later may share characteristics with nonresponders.

Second, we will conduct additional analyses based on the methods described in Heckman (1979), Cameron and DeShazo (2008), and Smith and Mansfield (1998). The most common model-based approach for testing and correcting for sample selection bias in regression analysis is a Heckman two-stage model (Heckman, 1979). The first stage typically applies individual-level data describing both respondents and nonrespondents to explain the dichotomous response-nonresponse outcome. The second stage is typically an ordinary least squares model, although the approach has been adapted to logit and probit models. We do not have individual-level information about nonrespondents. However, we can again treat the later respondents, who responded only after the third mailing or after the offer of the $20 incentive, as proxies for nonrespondents. If comparisons between the early and late respondents show systematic differences, we will use the Heckman approach to further investigate the possible impacts of these differences on WTP.

We will also employ methods suggested by Cameron and DeShazo (2008), who use Census-tract summary-level data (based on mailing address) to construct explanatory variables for the first-stage analysis. Because our second-stage model is a logit rather than a linear regression, we will again follow the Cameron and DeShazo approach: we will use the first-stage model to generate fitted selection probabilities for each respondent and interact these terms with the explanatory variables in the logit model. We will informally test for sample selection bias in the logit coefficients by examining the statistical significance of these interaction terms. In addition to Census-tract data from the Census and from the American Community Survey, we will explore the use of other variables that can be geographically located, for example, miles to the nearest river recreation area, distance to large dams, or the environmental ranking of the area's Senators or Congressional representatives.
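A minimal sketch of the informal test described above, under the assumption that tract-level covariates and a response indicator are available: fit a first-stage response model, attach fitted selection probabilities to respondents, and add attribute-by-selection-probability interaction columns to the choice-model design matrix so their joint significance can be tested. Names and data structures are hypothetical.

```python
# Sketch of the selection-probability interaction test (hypothetical names).
import numpy as np
import statsmodels.api as sm

def fitted_selection_probs(tract_covariates, responded):
    # First stage: response/nonresponse explained by tract-level covariates.
    design = sm.add_constant(tract_covariates)
    model = sm.Logit(responded, design).fit(disp=0)
    return model.predict(design)

def add_selection_interactions(X_attributes, p_select):
    # Append attribute x selection-probability interaction columns; their joint
    # significance in the choice model is the informal selection-bias test.
    return np.hstack([X_attributes, X_attributes * p_select[:, None]])
```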

Measurement Error

Another potential source of survey error that is likely in this study, in addition to coverage and nonresponse, is measurement error once participation is gained. Many respondents may perceive some questions, particularly items on salient environmental issues, as having socially desirable answers. The use of interviewers has been found to lead to more socially desirable responses (Tourangeau and Smith, 1996), posing a potential trade-off between using interviewers to reduce nonresponse and preserving self-administration of the survey. The probability-based sampling design for this study allows for direct design-based survey inference, producing estimates for all three target populations. Our unit of analysis, however, is the household, and any person-level estimates will not be probability based. We also acknowledge that adults within the same household may not agree on the various values they assign. We assume that these values are, in expectation, unbiased: although the responding adult in a household may disagree with a nonresponding adult in the same household, the other opinion will be captured in another household, so that in expectation the aggregated estimates will be unbiased.

Within-household selection will be addressed in the cover letter mailed with the survey, in which we ask the adult with the most recent birthday to complete the survey. To protect response rates, we will also tell respondents that if the person with the most recent birthday is not available, any adult in the household can take the survey. The survey includes a question at the end asking whether the respondent is the adult in the household with the most recent birthday (and a statement that we ask only for statistical purposes). We can use this question to compare surveys completed by the adult with the most recent birthday with those completed by another adult.

Measurement error may also result if individuals misinterpret the choice questions or give a “protest” response to the choice questions. For example, in the current economy and political climate, we have seen evidence in the focus groups that individuals may reject the question because it will increase their federal taxes. We have included a series of debriefing questions after the choice questions to help identify potential protest votes or misinterpretation of the choice questions. The debriefing questions are based in part on feedback from the focus groups and cognitive interviews.


Mode Effects

Mode effects are another possible source of error. Respondents can use either the paper survey they receive in the mail or the Web version of the survey. The Web option was included to increase response rates among individuals who prefer using their computers to take surveys. Concern about social desirability bias resulting from interviewer-administered survey modes led us to select self-administered paper and Web-based methods.

We expect that most respondents will use the paper survey to respond; however, based on previous experience, we expect some respondents will complete the Web survey. The Web survey will be programmed to mirror the paper survey as exactly as possible. Studies suggest that Web and paper surveys produce more similar answers than either option compared to a telephone survey. Although using two modes may introduce minor measurement error, we believe that the potential increase in response rate is worth the risk. We will test for differences in responses to survey questions between the two modes controlling for respondent characteristics. Most importantly, we will test whether the mode appears to affect WTP responses.

B4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Survey Instrument Design

The survey design began with a careful review of existing information provided by the Contracting Officer's Technical Representative (COTR) to ensure a clear common understanding of the survey context and objectives and of the parameters and uncertainties that need to be included in the design.

In addition, a number of resources for designing the instrument were used, including the following:

  • Past SP surveys, especially those measuring total value conducted by the RTI team. The RTI project team has extensive experience conducting SP surveys on a wide variety of topics. The economists on the team have provided peer review of both contingent value and conjoint SP surveys for journals or other clients, written book chapters and made presentations on how to conduct SP surveys and how to measure total values, and conducted meta-analyses of SP surveys that involved careful assessment of every aspect of the survey design, administration, and analysis.

  • A bibliography of SP valuation surveys, including several that are specific to dam removal.

  • National longitudinal surveys can provide external data on recreation patterns and environmental attitudes that can be useful in developing the survey and data collection plan.


Early in the process, two focus groups were conducted with two distinct populations of interest; they used different materials and addressed different objectives. The first group, of nine individuals, was held in Medford, OR. Medford is located close to the Klamath River Basin but is not part of the basin. The objective of the focus group was to assess the level of knowledge and attitudes about the Klamath River Basin in a nearby community. Simple questions and background materials were presented, and respondents were asked what additional information they needed, whether the materials contained confusing language, and whether the materials seemed biased.

The second focus group, also of nine individuals, was conducted in Kansas City, KS, a city located far from the Klamath River Basin. The objective for this focus group was to examine issues related to the extent of the market and how people in one area of the country view projects in another area of the country. The Klamath River Basin was never mentioned during this focus group. Instead, respondents were presented with four different river restoration projects that affected endangered species.

The results from these focus groups, along with feedback from the stakeholder group and our outside consultants, resulted in a draft survey instrument. The draft survey instrument was tested and revised in four focus groups. Two of the focus groups were outside the Klamath region in Raleigh, NC, and Phoenix, AZ. The other two groups were in two different parts of the Klamath region in Eureka, CA, and Klamath Falls, OR. For these focus groups, participants read and answered the questions in the first half of the survey up to page 9 after the description of the endangered species. After they were finished, the moderator led a discussion of the information on each page to examine misunderstandings, perceptions of bias, and overall reactions to the materials. In the second half of the focus groups, the participants completed the survey through the first choice question. Again, the moderator led the participants through a discussion of the information presented in the second part of the survey and reactions to the choice questions.

The draft instrument was reviewed by survey methodologists at RTI. We further reviewed the draft instrument using one-on-one interviews with 12 individuals recruited from across the country. The respondents were sent a copy of the survey materials and the interview took place over the phone. The interview focused on understanding, interpretation of the text, and perceptions of bias.

The Agency received additional comments on the survey instrument shortly after the instrument originally submitted to OMB was distributed, at their request, to the Technical Coordinating Committee (a committee composed of parties who signed the agreements) on December 14, 2010. December 14 was also, coincidentally, the day the pilot study was approved by OMB. All of the comments focused on the background material and the description of the no action and action alternatives. No comments were received on the questions themselves. Further revisions were made based on these comments and on changes suggested by the team of federal biological scientists working on the project. Four additional one-on-one interviews will be conducted during the 30-day comment period to test reactions to the changes in the survey wording.

In the course of the expert reviews, focus groups, and one-on-one interviews, a number of important issues related to the design of the survey instrument were raised. Below we summarize the most important:

  1. Length of survey and amount of information: Earlier drafts of the survey included more detailed information throughout the first half of the survey before the choice questions. We had many comments on the length of the survey, the amount of text, and the density of the text on the page. In response, we reduced the number of choice questions to two, and we reduced the amount of information presented based on feedback from the various reviews and pretests on the relative importance of different pieces of information. We also spread out the text and added more white space so the pages did not look as dense.

  2. Uncertainty over outcomes: Scientists are not certain about the long-run impact of the KBRA and dam removal on the different species of fish in the Klamath River Basin. Expert panels convened as part of the project are evaluating the potential range of outcomes for the different species. Previous research shows that explaining uncertainty and probability to the lay public is difficult and that understanding can be limited, so we felt it would be too complicated to present the attribute levels for the fish populations as uncertain or in terms of a probability distribution. Instead, the survey describes the variation in outcomes for the fish as the result of different levels of fish restoration. In the choice questions, each action plan is associated with a level of improvement for salmon and steelhead, coho salmon, and the suckers. The level of population increase for the salmon and steelhead by the year 2060 is described as the level that “scientists expect.” For the endangered species, the attribute levels are described in terms of risk of extinction, and we felt that adding uncertainty to the level of risk would again be too complicated to explain in a survey. In the pretesting, respondents generally accepted the description of the attribute levels, although one or two individuals wondered about the evidence behind the estimates. When asked whether the scenarios were believable, respondents’ main comments concerned the government’s ability to correctly execute and follow through on projects; the actual levels of the changes in populations were considered believable. We included a debriefing question asking for their level of agreement with the statement “I do not believe the plans will actually increase the number of fish as described.”

  3. Cost and description of cost: The agreements provide the payment mechanisms for the choice questions: increases in the cost of electric power for PacifiCorp customers, spending by the Oregon and California governments (California will issue a bond, which will be voted on in a coming election), and federal taxes. For the survey, we decided to use the actual payment vehicles outlined in the agreements. In pretesting, some respondents said that they did not want to pay more federal taxes, a reaction that has been observed in other surveys. Others felt that the government could move money around in the budget instead of increasing taxes. Most respondents in the pretests accepted that Oregon and California residents would and should pay more. An alternative would have been to describe the cost as an increase in the household’s “cost of living.” We selected taxes and electricity costs because they are the actual payment vehicles and because, for respondents in other parts of the country, it was hard to describe how the “cost of living” would increase if the Klamath dams were removed, beyond increases in taxes. We included debriefing questions asking for their level of agreement with the statements “I should not have to contribute to the restoration of the Klamath River” and “Some of the plans cost too much compared to what they deliver” to help judge the impact of cost and the payment vehicle on responses.

  4. Timing of costs and outcomes: The major elements of the KBRA and the dam removal occur many years in the future. The dam removal is slated for 2020. Many pretest respondents wondered why the dams could not be removed today, so we added text stating that the dams would be removed “after several years of careful study.” In subsequent focus groups, the issue did not come up. The changes in fish populations occur between 2020 and 2060, a 40-year period. Respondents in the pretests accepted the restoration timetable as reasonable, and several mentioned preserving the environment for their children and grandchildren. We included a debriefing question asking for their level of agreement with the statement “The changes offered by the plans would occur too far in the future for me to really care.” The costs to households run for the next 20 years. We selected 20 years because dam removal comes in the middle of that period, so the payments would cover both pre- and post-dam removal activities. We added the statement “You would be making a commitment to pay this additional amount each year for the next 20 years” and emphasized that the payments would last 20 years. Pretest respondents remarked on the 20-year time period and noted that it was a real commitment, indicating that they understood the payment schedule. We describe the payment in terms of the annual and monthly amounts a household would have to pay; people are accustomed to thinking about expenses as yearly or monthly recurring payments, which makes the payment easier to compare with other elements of their budget. We did not present the full amount paid over 20 years because people are not used to seeing totals of that size and might be confused.

  5. Current economic conditions: The expert reviewers expressed concerns about WTP in the current economic climate. In the focus groups, many people mentioned being unemployed and the hard economic times. Because WTP is sensitive to income, the recession will probably result in lower WTP than values we might find in better economic conditions. We included a debriefing question asking for their level of agreement with the statement “My choices would have been different if the economy in my area were better.” We also included questions at the end of the survey asking about changes in the economic situation in the last 12 months and their opinion of economic conditions.

  6. Focus on fish restoration: Many pretest respondents remarked that the first half of the survey discussed the basin, the people who use it, and a variety of other topics, but that the description of the action plans and the choice questions were much more narrowly focused on fish restoration. People worried about the impacts on the local economy and on farmers and fishermen. Fish restoration, however, is listed as the first objective in the KBRA, and the very expensive requirements under dam relicensing to protect the endangered species were one of the drivers behind the agreement. The total value associated with the outcomes of the KBRA and dam removal will flow primarily from environmental improvements and fish restoration. We revised the wording of the survey to emphasize that the difference between the agreements lies in the amount and type of fish restoration projects, which is why the agreements focus on changes in fish populations. We also added more detail to the page that lists the benefits and costs of the agreement to different parties, and the beginning of the survey lists the human uses of the basin. We included debriefing questions asking for their level of agreement with the statements “I am concerned that the plans would hurt the economy in the Klamath River Basin,” “I think the agreement to remove the dams is a bad idea,” and “It is important to restore the Klamath River Basin, no matter how much it costs.” To examine respondents’ opinions about the nonendangered Chinook salmon and steelhead and the endangered coho salmon and suckers, we included questions asking for their level of concern about the nonendangered species and the endangered suckers and coho. In the focus groups, opinions on these questions varied: although many people were equally or more concerned about the endangered species, others were more concerned about declines in the population of the nonendangered species.

Data Collection

The steps in the data collection plan are described below. At the start of data collection, the prenotification postcard, first mailing, reminder postcard with the address for the web version of the survey, and second mailing will be sent to 10% of the sample (the pilot test sample). The surveys received in the few weeks after the second mailing will be analyzed to examine the following:

  1. Performance of the conjoint questions: The conjoint questions will be analyzed first by looking at the percentage of respondents who selected an action plan over no action. If more than 80% of the sample selects the no-action plan, the costs of the plans will be adjusted downward. A simple conditional logit will then be estimated to examine whether the other attribute levels are significant; if they are not, the levels will be adjusted up or down to create greater differences between the plans (see the sketch following this list).

  2. The difference between a 1-question version and a 2-question version.
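For illustration only, the following sketch shows one way the pilot-test checks described in item 1 could be carried out. The data layout, the variable names (resp_id, alt, chosen, cost, fish_gain), and the toy values are assumptions made for this example; they are not the project's actual analysis code or pilot data.

# Illustrative pilot-test check (assumed data layout and variable names).
import numpy as np
import pandas as pd
from scipy.optimize import minimize

# Hypothetical pilot data: one row per respondent-alternative pair,
# with a 0/1 flag for the alternative each respondent chose.
pilot = pd.DataFrame({
    "resp_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "alt":       ["no_action", "plan_a", "plan_b"] * 4,
    "chosen":    [0, 0, 1,  1, 0, 0,  0, 0, 1,  0, 1, 0],
    "cost":      [0, 60, 120,  0, 60, 120,  0, 120, 60,  0, 120, 60],
    "fish_gain": [0, 1, 2,  0, 1, 2,  0, 1, 2,  0, 1, 2],
})

# Check 1: share of respondents selecting the no-action plan.
choices = pilot[pilot["chosen"] == 1]
no_action_share = (choices["alt"] == "no_action").mean()
if no_action_share > 0.80:
    print("More than 80% chose no action: adjust plan costs downward.")

# Check 2: simple conditional (McFadden) logit, fit by maximum likelihood,
# to see whether the attribute levels meaningfully affect choices.
X = pilot[["cost", "fish_gain"]].to_numpy(dtype=float)
y = pilot["chosen"].to_numpy()
groups = pilot["resp_id"].to_numpy()

def neg_loglik(beta):
    # Negative sum of log choice probabilities across choice sets.
    util = X @ beta
    ll = 0.0
    for g in np.unique(groups):
        idx = groups == g
        u = util[idx] - util[idx].max()      # stabilize the exponentials
        ll += u[y[idx] == 1].sum() - np.log(np.exp(u).sum())
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
print("Estimated attribute coefficients (cost, fish_gain):", fit.x)

The conditional logit here is fit by a simple hand-coded maximum likelihood routine; in practice, an equivalent fit could be obtained from standard statistical software.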


After the initial data from the first 10% of the sample have been analyzed, the remaining 90% of the sample will be sent the prenotification postcard, first mailing, reminder postcard with the address for the web version of the survey, second mailing, and a third mailing consisting of a reminder letter with a toll-free number and email address to request a new copy of the survey. Twenty percent of the remaining nonrespondents will then be sampled for the nonresponse follow-up; they will receive a Federal Express letter, a shorter version of the survey, and a larger incentive.


First mailing: Sample households will receive a prenotification postcard followed by the first mailing of the survey instrument.


Reminder postcard and address for web version of survey: A reminder postcard with the address for the web version of the survey and the respondent’s password will be sent to nonrespondents (the postcard will fold over so the password will not be visible).


Second mailing: The survey instrument will be mailed a second time to nonrespondents.

Third mailing: After the second mailing, the nonrespondents will be sent a reminder letter with a toll-free number and email address to request another copy of the survey if the household needs one.


Fourth mailing: After the third mailing, 20% of the nonrespondents will be sent a letter by Federal Express or Priority Mail offering a higher incentive to return a shorter version of the survey. DOI assumes there will be telephone numbers for 65% of the nonrespondents. For nonrespondents with telephone numbers, the letter will be followed by a phone call from a live operator, who will either talk to the household or leave a message reiterating the higher incentive and offering to mail another copy of the survey if the household needs one.
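As a purely illustrative sketch of this follow-up step, the snippet below shows how the 20% nonrespondent subsample, and the telephone-eligible subset within it, might be drawn. The frame layout, the column names (address_id, responded, phone), the toy phone numbers, and the fixed random seed are assumptions for the example, not the actual selection procedure.

# Illustrative selection of the nonresponse follow-up subsample
# (assumed column names and data layout).
import pandas as pd

# Hypothetical sample frame status after the third mailing.
frame = pd.DataFrame({
    "address_id": range(1, 11),
    "responded":  [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
    "phone":      [None, "555-0101", None, None, "555-0102",
                   "555-0103", None, None, "555-0104", None],
})

# Draw 20% of the remaining nonrespondents for the higher-incentive,
# shorter-survey follow-up (seed fixed only for reproducibility here).
nonrespondents = frame[frame["responded"] == 0]
followup = nonrespondents.sample(frac=0.20, random_state=1)

# Nonrespondents in the follow-up with a phone number also get a
# live-operator call after the letter.
phone_followup = followup[followup["phone"].notna()]
print(followup[["address_id"]])
print(phone_followup[["address_id", "phone"]])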

B5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Research Triangle Institute (RTI International) will collect the data and conduct the analysis. The lead individuals consulted and involved in collecting and analyzing the data include the following:

  • Carol Mansfield, 919-541-8053

  • George Van Houtven, 919-541-7150

  • Patrick Chen, 919-541-6309

  • Andy Peytchev, 919-485-5604

  • Amy Hendershott, 919-485-2703



References



1 The sample size equation from Orme (1998) is (number of levels)*500/(number of alternatives * number of tasks).
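Written out, the rule in footnote 1 is n ≥ (number of levels × 500) / (number of alternatives × number of tasks). As a purely illustrative calculation, not the design values used for this study: with 4 attribute levels, 3 alternatives per choice question, and 2 choice tasks per respondent, the rule implies n ≥ (4 × 500) / (3 × 2) ≈ 334 completed surveys.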

2 Personal communication, F. Reed Johnson, Oct 30, 2010.
