Supporting Statement for Paperwork Reduction Act Submission:
Coral Reef Valuation Study
OMB CONTROL No. 0648-xxxx

U.S. Department of Commerce
National Oceanic and Atmospheric Administration
National Ocean Service
Office of National Marine Sanctuaries and Office of Response and Restoration
1305 East West Highway, SSMC4, 9th floor
Silver Spring, MD 20910
Contact: Norman Meade
(301) 713-4248 ext. 201
[email protected]
March 10, 2009

Table of Contents

A. Justification
   1. Explain the circumstances that make the collection of information necessary
      Background
      Request
   2. Explain how, by whom, how frequently, and for what purpose the information will be used. If the information collected will be disseminated to the public or used to support information that will be disseminated to the public, then explain how the collection complies with applicable NOAA Information Quality Guidelines
      How the information will be collected
      The main survey instrument
         General instructions to KN and Abt SRBI operations
         Instructions/warm-up (Screens 1 through 2C)
         Part 1: Survey setup (Screens 3A through 3C)
         Part 2: Introduction (Screens 4A through 12B; Questions Q1 through Q5)
         Part 3: Overfishing (Screens 13A through 16D; Questions Q6 through Q7)
         Part 4: Ship accidents (Screens 17A through 19B; Questions Q8 through Q9)
         Part 5: Choice questions/follow-up evaluation (Screens 20A through 41; Questions Q10 through Q28, A1 through A2a, and D1 through D2)
      Use of illustrations
      Experimental design
      Use of stated choice questions
      Survey mode
         Pretest survey
         Main survey
      Frequency of the information collection
      How collection complies with NOAA information quality guidelines
         Utility
         Objectivity
         Integrity
   3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological techniques or other forms of information technology
      Automated, electronic data collection
   4. Describe efforts to identify duplication
   5. If the collection of information involves small business or other small entities, describe the methods used to minimize burden
   6. Describe the consequences to the Federal program or policy activities if the collection is not conducted or conducted less frequently
   7. Explain any special circumstances that require the collection to be conducted in a manner inconsistent with OMB guidelines
   8. Provide information on the PRA Federal Register Notice that solicited public comments on the information collection prior to this submission. Summarize the public comments received in response to that notice and describe the actions taken by the agency in response to those comments. Describe the efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported
   9. Explain any decisions to provide payments or gifts to respondents, other than remuneration of contractors or grantees
      Cognitive one-on-one interviews
      Pretest survey
      Main survey
      Survey-specific incentives
      Nonsurvey-specific incentives
   10. Describe any assurance of confidentiality provided to respondents and the basis for assurance in statute, regulation, or agency policy
      KN procedures
      Abt SRBI procedures
   11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private
   12. Provide an estimate in hours of the burden of the collection of information
   13. Provide an estimate of the total annual cost burden to the respondents or record-keepers resulting from the collection (excluding the value of the burden hours in #12 above)
   14. Provide estimates of annualized cost to the Federal government
   15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of OMB 83-I
   16. For collections whose results will be published, outline the plans for tabulation and publication
   17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons why display would be inappropriate
   18. Explain each exception to the certification statement identified in Item 19 of the OMB 83-I

B. Collections of Information Employing Statistical Methods
   1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved
      This application is for the cognitive one-on-one interviews, a second pretest, and the main survey study only
         Cognitive one-on-one interviews
         Pretest survey implementation
         Main survey implementation
   2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden
      Sample frame and sample selection
         Pretest survey
         Main survey
      KN Panel sampling design for the main survey
         ANES Web Panel recruitment response rate statistics
      Abt SRBI Panel sampling design for the main survey
      Sample size
         Cognitive interviews
         Pretest survey
         Main survey
   3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield “reliable” data that can be generalized to the universe studied
      Maximizing response rates
      Nonrespondents
   4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved, OMB must give prior approval
   5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency

Bibliography

Attachment 1: Coral Reef Survey Instrument
Attachment 2: Write-up of Pretest Results
Attachment 3: KN’s Member Bill of Rights
Attachment 4: Quality Assurance Procedures
Attachment 5: Illustrations
Attachment 6: Authorities
Attachment 7: Federal Register Notification

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
1. Describe (including a numerical estimate) the potential respondent universe and any
sampling or other respondent selection method to be used. Data on the number of entities
(e.g., establishments, State and local governmental units, households, or persons) in the
universe and the corresponding sample are to be provided in tabular form. The tabulation
must also include expected response rates for the collection as a whole. If the collection has
been conducted before, provide the actual response rate achieved.
This application is for the cognitive one-on-one interviews, a second pretest, and the main
survey only.
Cognitive one-on-one interviews
For the cognitive one-on-one interviews, we will recruit up to 32 panelists in Washington, D.C. and Denver, CO from KN’s established Web panel.1 These recruits will be invited to a facility to take the questionnaire online and to participate in a discussion with one of the Stratus Consulting or NOAA researchers. The purpose of the discussion is to help the researchers test how well respondents understood the information presented to them and to debrief on any other issues the respondents had (e.g., wording issues).

1. We believe that 32 cognitive one-on-one interviews will be enough to help us understand and resolve any wording issues and to test respondents’ understanding of the material.
Pretest survey implementation
For the pretest survey, we will interview a random sample of 385 panelists from KN’s established Web panel. Due to the nature of the ANES and SRBI panels, we cannot conduct the pretest on those panels. The first pretest, conducted in 2006, achieved a 65% completion rate, which matches the completion rate KN typically expects, and we anticipate a similar completion rate for the second pretest. Based on this assumption, we expect that we will have to send out 385 surveys in order to get approximately 250 completed interviews (385 × 0.65 ≈ 250). This sample size is feasible within the project’s budget, given the selected implementation mode, and will provide enough observations for conducting simple summary statistical analyses of the data (means, medians, standard deviations, maximums, and minimums) and for evaluating the effectiveness and appropriateness of the experimental design for the main study.
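To make the mailout arithmetic explicit, the following minimal sketch (in Python, with a function name of our own choosing) computes the number of invitations implied by a target number of completes and an expected completion rate:

```python
import math

def invitations_needed(target_completes: int, completion_rate: float) -> int:
    """Invitations required so the expected number of completes meets the target."""
    return math.ceil(target_completes / completion_rate)

# Figures from the pretest plan above: 250 completes at a 65% completion rate.
print(invitations_needed(250, 0.65))  # -> 385, the planned pretest mailout
```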
Main survey implementation
The KN and Abt SRBI Panel samples will include the civilian, non-institutionalized population
age 18 or over, as defined by the universe of U.S. households that can be contacted by telephone
(106 million households in 2000).
The main survey will be administered to a sample that will be sufficient to produce completed surveys from approximately 2,691 respondents. The main study will be administered to the 2,000 ANES Panel members and 990 MRI Panel members. Based on expected panel completion rates, the expected number of completes is 2,691 (2,000 × 0.90 + 990 × 0.90 = 1,800 + 891). This sample size is feasible within the project’s budget, given the selected implementation mode, and will provide sufficiently large numbers of observations for conducting simple summary statistical analyses of the data (means, medians, standard deviations, maximums, and minimums) and the more sophisticated econometric analyses needed to arrive at total value estimates.
We anticipate an overall response rate of about 20% for the ANES Panel. This is based on an expected 31% panel recruitment response rate (AAPOR Rate No. 3), a 75% connection rate (agree to join the panel and complete the first online demographic survey), and an 85% survey participation rate. The low overall response rate is due to the multistage construction of the KN Panel.
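Because the panel is assembled in stages, the overall rate is the product of the stage-specific rates. A minimal sketch of that arithmetic, using the rates stated above:

```python
# Overall ANES Panel response rate as the product of the three stage rates.
recruitment_rate = 0.31    # panel recruitment (AAPOR Rate No. 3)
connection_rate = 0.75     # agreed to join and completed first demographic survey
participation_rate = 0.85  # expected survey participation once invited

overall_rate = recruitment_rate * connection_rate * participation_rate
print(f"{overall_rate:.1%}")  # -> 19.8%, i.e., about 20%
```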
For the in-person recruited MRI Panel, we anticipate an overall response rate of about 63%. This
is based on a 90% participation rate.
These estimates are based on the recruitment rates reported for other KN RDD and Abt SRBI in-person surveys and on participation rates reported in the industry (see the answer to Question 2 of this Supporting Statement).
2. Describe the procedures for the collection, including: the statistical methodology for
stratification and sample selection; the estimation procedure; the degree of accuracy
needed for the purpose described in the justification; any unusual problems requiring
specialized sampling procedures; and any use of periodic (less frequent than annual) data
collection cycles to reduce burden.
Following are descriptions of the sample frame, the sample selection process, and the process for
selecting the sample size that will be followed in the pretest and the main survey implementation.
Sample frame and sample selection
Pretest survey
KN’s established Web-panel sample is selected using a directory-listed RDD telephone method, providing a probability-based starting sample of U.S. telephone households (96% of the population). The Web-enabled panel comprises both Internet and non-Internet households; KN supplies the non-Internet households with an Internet appliance and an Internet connection.
Main survey
The main survey sample frame is the U.S. civilian noninstitutionalized population age 18 or
over, as defined by the universe of U.S. households that can be contacted by telephone
(106 million households in 2000).
KN will select the ANES Panel sample using RDD telephone methodology, providing a
probability-based sample of U.S. telephone households (96% of population). Abt SRBI will
select the MRI Panel sample by using in-person recruiting methods, providing a multistage
probability sample of residential mailing addresses, described in detail later in this question. The
KN and Abt SRBI Web-enabled panels comprise both Internet and non-Internet households. For
non-Internet households, KN will install MSN TV 2 devices using professional installers; Abt
SRBI will provide these households with a laptop and broadband Internet access.
Data will be collected from the Abt SRBI and KN Panels. In both samples, each household will
have an equal probability of entering the sample (except for households without working
telephones, which will have a zero probability of entering the telephone sample).
KN Panel sampling design for the main survey
The sample universe of the ANES Panel is the U.S. citizen population age 18 and over as of November 4, 2008. Teenagers who turned 18 prior to or on November 4, 2008 are included in the sample. KN will use list-assisted RDD sampling techniques on a sample frame consisting of the entire U.S. residential telephone population. Only those banks of telephone numbers (consisting of 100 telephone numbers each) that have zero directory-listed phone numbers will be excluded. The ANES Panel sample will be a stratified RDD sample of all residential phone numbers in the U.S., using only two strata. The strata will be defined by whether or not KN can find an address for the telephone number using a service that provides the highest match rate available; KN is able to recover a valid postal address for about 70% of telephone numbers. KN will select the sample of phone numbers with equal probability within the two pre-identified strata. Stratum 1 includes all phone numbers that can be matched with postal addresses. Stratum 2 comprises the remaining phone numbers that cannot be matched beforehand to postal addresses. All numbers drawn from Stratum 1 will be kept in the sample; one half of the numbers in Stratum 2, randomly selected, will be kept in the sample.
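The two-stratum selection rule can be summarized in a minimal sketch (hypothetical phone-number records; the 70% match rate and the keep-all/keep-half rules are as described above):

```python
import random

def select_stratified_rdd(numbers, has_address, seed=0):
    """Keep all of Stratum 1 (address-matched numbers) and a random half
    of Stratum 2 (numbers with no postal-address match)."""
    rng = random.Random(seed)
    stratum1 = [n for n in numbers if has_address[n]]
    stratum2 = [n for n in numbers if not has_address[n]]
    return stratum1 + rng.sample(stratum2, len(stratum2) // 2)

# Hypothetical frame of 1,000 numbers with a roughly 70% address-match rate.
rng = random.Random(1)
numbers = list(range(1000))
has_address = {n: rng.random() < 0.70 for n in numbers}
print(len(select_stratified_rdd(numbers, has_address)))  # about 700 + 150 = 850
```

Because Stratum 2 numbers enter the sample with half the selection probability of Stratum 1 numbers, analysis weights would need to reflect the two strata.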
Approximately 10 days prior to calling sampled phone numbers, KN will send the address-matched households an advance mailing informing them that they have been selected to participate in the Monthly Special Topics Study. The Stanford University Principal Investigator will sign the advance letters. Respondents are told that the study is being conducted on behalf of Stanford University, with collaboration from the University of Michigan and funding from the NSF. The advance mailing will include a $2 cash incentive. It will describe their participation in the study, explain that they will have an opportunity to represent many people like themselves across a wide range of studies, and state their burden as one survey per month. The advance letter explains that study participation is voluntary and includes answers to frequently asked questions that respondents might have.
KN expects that about 40% of the sampled phone numbers will be ineligible (not a household, non-working phone number, non-residential phone number, non-English speaking, non-citizen, etc.) and that some households will initially refuse. Extra follow-up will be done with the initial-refusal households, including use of a special refusal conversion package. The refusal package will contain a refusal letter tailored to the reason for refusal, with a $5 monetary incentive enclosed. However, we anticipate some final refusals even with conversion efforts and have provided for framed 8" × 10" Certificates of Appreciation to be sent to the respondents selected for the study. A special 1-800 number specific to the study will also be available for households that have questions or wish to verify the legitimacy of the study.

A short interview (10 minutes) will be conducted with eligible, cooperating households. The interview will include selected questions from national surveys to measure the attitudes of study respondents and will collect the identifying and contact information needed by KN, including information on all adults in the household. The interview will be conducted with a randomly selected person who will be age 18 or over as of November 4, 2008. If the selected study member is a minor at the time of recruitment, parental consent to interview the minor will be obtained on the phone from a parent or legal guardian, and the telephone interviewer administering the recruitment survey instrument will document the consent.
ANES Web Panel recruitment response rate statistics
Recruitment interviews were completed at 2,371 of the 12,809 sampled telephone numbers. Completion of a recruitment interview is the operational definition of joining the panel. All sample cases fall into one of four categories: complete interviews (2,371), eligible nonresponse (808), unknown eligibility (5,601), and not eligible (4,029). Completed interviews are broken down into three categories: those completed through the standard telephone interview (2,222), those who initially refused but were converted to a completed interview (85), and those who completed the interview through the Internet (64).

- Response rate (AAPOR RR3): 31%
- Refusal rate (estimated): 38%
- Cooperation rate (estimated): 34%
- Contact rate (estimated): 92%

Table B.1 summarizes the disposition of the ANES Panel recruitment sample.
Table B.1. Final case-level disposition of ANES Panel Study recruitment sample

Disposition                                                    Number
Total sampled telephone numbers                                12,809
Complete interviews                                             2,371
  Standard telephone interview                                  2,222
  Refusal conversion interview                                     85
  Internet-only recruitment interview                              64
Eligible nonresponse                                              808
  Eligible non-contacts                                             0
  Eligible contacts not complete                                  808
    Refusals, post-selection                                      558
    Language barrier, post-selection                               16
    Physical or mental impairment, post-selection                  25
    MSN TV setup not possible, post-selection                      19
    Respondent never available, post-selection                    190
Unknown eligibility                                             5,601
  Contacts                                                      4,063
    Refusals, pre-selection                                     2,376
    Informant pre-selection contact, but never available        1,288
    Language barrier, pre-selection                               291
    Physical or mental impairment, pre-selection                   93
    MSN TV setup not possible, pre-selection                       15
  Non-contacts                                                  1,538
    Computer/fax tone (on all attempts)                           241
    No answer (on all attempts)                                   198
    Information never available, non-contact, pre-selection     1,099
Not eligible                                                    4,029
  Disconnected phone                                            3,457
  Non-residential/business/government                             518
  Number changed                                                   11
  No age-eligible U.S. citizen in household                        43

Source: ANES staff analysis of the 2008-09 ANES Panel Study sample file.
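For reference, AAPOR RR3 treats an estimated share e of the unknown-eligibility cases as eligible. The following minimal sketch applies that formula to the Table B.1 dispositions; the value of e shown is an illustrative assumption chosen to reproduce the reported 31%, not a figure taken from the study:

```python
def aapor_rr3(completes, eligible_nonresponse, unknown, e):
    """AAPOR Response Rate 3: completes / (completes + eligible nonresponse
    + estimated-eligible share e of unknown-eligibility cases)."""
    return completes / (completes + eligible_nonresponse + e * unknown)

# Case counts from Table B.1; e = 0.80 is an illustrative assumption only.
print(f"{aapor_rr3(2371, 808, 5601, 0.80):.0%}")  # -> 31%
```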

Abt SRBI Panel sampling design for the main survey
Abt SRBI will draw a multistage probability sample of residential mailing addresses. A sampling frame based on USPS mailing addresses, known as the Delivery Sequence File (DSF), will allow for the selection and enrollment of a sample of eligible households in the panel. The target population will cover the 48 contiguous states and Washington, DC.

Research on the use of the DSF as an address-sampling frame for area probability samples has focused on the relative merits of using U.S. Census Bureau administrative units (blocks, block groups, tracts, counties) or USPS units (ZIP codes, carrier routes). For example, at the 2007 Joint Statistical Meetings, papers on the use of the DSF focused on geo-coding errors associated with assigning DSF addresses to Census Bureau geographic units such as block groups. The use of USPS ZIP Code carrier routes does not suffer from this problem, but it is more difficult to apply the half-open interval in the field to add missed housing units to the sample.
The basic design involves self-weighting, stratification, probability-proportional-to-size (PPS) sampling, and multiple stages. Abt SRBI will use four stages of sampling. In the first stage, they will select 60 3-digit ZIP Code areas from a sampling frame of all 3-digit ZIP Code areas in the 48 contiguous states and DC. Primary sampling units (PSUs) will be sorted by geography (nine Census Divisions), metropolitan status, and total number of residential addresses. A systematic sampling scheme will be applied to the sorted file, with probabilities of selection proportional to the total number of residential addresses in the 3-digit ZIP Code area. Some 3-digit ZIP Code areas may be sufficiently large to have more than one selection.

In the second stage, they will sample two 5-digit ZIP Codes per 3-digit ZIP Code area, for a total of 120. Abt SRBI will do this by preparing a complete list of 5-digit ZIP Codes in each PSU, sorting them in numerical sequence (which reflects geography), and selecting two ZIP Codes systematically using probabilities proportional to the total number of residential addresses in each ZIP Code.
In Stage 3, Abt SRBI will sample two carrier routes per ZIP Code, for a total of 240. They will prepare a complete list of carrier routes in each ZIP Code, sort them in numerical sequence to reflect geography, and select two carrier routes systematically using probabilities proportional to the total number of residential addresses in each carrier route.
In Stage 4, the final stage, Abt SRBI will obtain a complete list of all residential addresses in each of the 240 carrier routes. A systematic sample of addresses will be drawn from each carrier route. The target number of completed household interviews, the expected response rate, and the expected vacancy rate determine the sample size of addresses per carrier route. The initial sample size of residential addresses is likely to be in the range of 1,300 to 1,400 housing units. The target sample size for the study is approximately 990 completed household interviews. The sample will be limited to households; group quarters will be excluded from the eligible target population.
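Each of the first three stages uses the same systematic PPS step. A minimal sketch of that step, with hypothetical ZIP Codes and address counts (a fixed starting point is used here for reproducibility; in practice the start is random):

```python
def systematic_pps(units, sizes, n_select, start=0.5):
    """Systematic PPS selection: walk the cumulative size scale in steps of
    total/n_select and keep the unit under each hit."""
    total = float(sum(sizes))
    step = total / n_select
    hits = [(start + k) * step for k in range(n_select)]
    selected, cum, i = [], 0.0, 0
    for unit, size in zip(units, sizes):
        cum += size
        while i < len(hits) and hits[i] <= cum:
            selected.append(unit)  # very large units can be selected twice
            i += 1
    return selected

# Hypothetical Stage 2 example: pick 2 of 6 ZIP Codes, sorted in numerical
# order, with probability proportional to residential address counts.
zips = ["80301", "80302", "80303", "80304", "80305", "80306"]
addresses = [1200, 800, 3000, 500, 2200, 900]
print(systematic_pps(zips, addresses, n_select=2))  # -> ['80303', '80305']
```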
Sample size
Cognitive interviews
We intend to interview up to 32 KN Panel members in Washington, D.C. and Denver, Colorado. This number is sufficient to test wording issues and respondents’ understanding of the survey material.
Pretest survey
The expected number of completed surveys for the pretest survey will be approximately 250. This number is sufficient to refine the experimental design, if necessary.
Main survey
The intended number of completed surveys for the main survey will be approximately 2,691
(1,800 for the KN Panel and 891 for the Abt SRBI Panel). This sample size will be feasible
within the project’s budget, given the selected implementation mode, and will provide
sufficiently large numbers of observations for conducting statistical analyses.


In the analysis of stated choice data, the question of how large the sample size should be to get
statistically significant results is common, but often difficult to answer. The question itself raises
a number of important issues (Orme, 1998):
- What is being measured (e.g., preferences for a product versus differences in preferences across people)?

- What level of confidence is important for the conclusions to be meaningful?

- What methodology do you intend to use?

This particular study also presents a number of potential issues to consider when developing the specific alternatives for the choice questions, especially issues relating to the limited number of alternative scenarios to be valued.2

2. Not including the cost characteristic, there are eight scenarios to be valued, because three attributes are dummy variables, and each takes on one of only two levels (2 × 2 × 2 = 8).
Determining the minimum sample size needed is partially based on statistics, but may also be
largely based on heuristics and experience. The available statistical literature on stated choice
sample sizes is quite limited (W. Adamowicz, University of Alberta, personal communication,
12/30/2004). For example, in Louviere et al.’s (2000) 400-page book, Stated Choice Methods:
Analysis and Application, only about 10 pages are devoted to sample size.
Both Orme (1998) and Louviere et al. (2000) demonstrate that, for estimating the probability of
the respondent choosing some alternative, the minimum sample size for a given level of
precision is a function of the choice probability itself, making the computation tautological and
circular. They also show mathematically that the optimal or minimum sample size is decreasing
in the number of replications or tasks (that is, choice questions) for each respondent. Orme (1998) also shows mathematically that the sample size is a decreasing function of the number of alternatives presented in each choice question, but an increasing function of the number of levels of the choice-question attributes (e.g., a design whose attributes are dummy variables taking on one of two values requires a smaller sample size than one with an attribute taking on 10 values). Furthermore, if preference
heterogeneity exists in the sample (i.e., there are different kinds of people who care differently
about characteristics), a larger sample size will be needed because more sets of preference
parameters must be estimated (Orme, 1998; Louviere et al., 2000).
Rules of thumb for selecting sample size exist. For example, Sawtooth Software, a developer of
software popular for designing choice sets, recommends the following formula for choice-based
methods to obtain the minimum sample size:
(n × t × a) / c ≥ 500

where:

n = minimum number of respondents
t = number of tasks or “replications”
a = number of alternatives per task (not including “none” or the “status quo”)
c = number of “analysis cells.”

When considering main effects, c is equal to the largest number of levels for any one attribute. If
considering all two-way interactions, c is equal to the largest product of levels of any two
attributes (Orme, 1998).
For the main survey, this calculation would be:

(2,691 × 3 × 2) / 4 ≈ 4,036,

which is more than eight times the target level of 500, indicating that we have a sufficient sample size for a study of our design.
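A minimal sketch of this rule of thumb, using the study’s values (t = 3 choice tasks, a = 2 alternatives per task, c = 4 analysis cells):

```python
import math

def orme_minimum_n(t, a, c, target=500):
    """Smallest n satisfying (n * t * a) / c >= target (Orme, 1998)."""
    return math.ceil(target * c / (t * a))

t, a, c = 3, 2, 4  # tasks, alternatives per task, analysis cells
print(orme_minimum_n(t, a, c))  # -> 334, the rule-of-thumb minimum
print(2691 * t * a / c)         # -> 4036.5, well above the 500 target
```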
3. Describe the methods used to maximize response rates and to deal with nonresponse. The
accuracy and reliability of the information collected must be shown to be adequate for the
intended uses. For collections based on sampling, a special justification must be provided if
they will not yield “reliable” data that can be generalized to the universe studied.
Numerous steps have been and will be taken to maximize response rates and deal with
nonresponse behavior for the main survey. Descriptions of these efforts follow.
Maximizing response rates
The first step in achieving a high response rate is to develop an appealing questionnaire that is easy for respondents to complete. We spent significant effort on developing an effective survey instrument during Phase I. We hired experts on economic survey design and stated preference techniques to assist in the design and testing of this survey. The survey instrument developed in Phase I benefited from input on earlier versions from several focus groups and cognitive interviews, and from peer review by experts in survey design and nonmarket valuation and by scientists who study coral reefs. In the Phase I focus groups and cognitive interviews, the information presented was tested to ensure key concepts and terms were understood, figures and graphics were tested for proper comprehension and appearance, and key economic and design issues were evaluated. After testing the instrument with focus groups and cognitive interviews, the survey was pretested using KN’s Web-based Panel.3 The result is a professional, high-quality survey instrument. Since Phase I, we have made additional changes to the survey instrument that will also be tested using cognitive one-on-one interviews and a second pretest before implementing the main survey.

3. KN’s Web Panel is different from the ANES Panel that KN created to conduct the main survey for Phase II.

For both of the Web-based panels, KN and Abt SRBI will employ practices for the Coral Reef Valuation Study main survey that have been employed successfully on other projects requiring OMB approval:

- Field period of one month for the main survey

- Use of the federal agency name in the email invitation

- Email reminders4

- Telephone reminder calls to nonrespondents5

- Use of both survey-specific and nonsurvey-specific incentives (as described in response to Part A, Question 9) to improve response rates

4. For the ANES Panel, members will receive a pre-announcement email, an invitation email, and as many reminder emails as necessary. MRI Panel members will receive one prenotification email, one announcement email, and then four email reminders.

5. For telephone reminder calls, Abt SRBI will call up to 15 times over the course of two weeks for any particular wave. If a household has both a home and a cell phone number listed, interviewers will try both in any one call attempt.

These measures are expected to yield a survey completion rate of 90% for the KN and Abt SRBI Panels. Overall response rates are expected to be approximately 20% and 63%, respectively.
Nonrespondents
Specific steps will be employed to assess the presence and extent of nonresponse bias. The purpose of this exercise is not to adjust the estimates of economic value for nonresponse bias, but rather to test for differences between the two Web-based panels and for differences between the U.S. Census and the two panels. The steps to test for nonresponse bias include the following:

- Data from the screening interview for the ANES and MRI Panels will be compared to each other and to Census figures to identify any systematic differences. The characteristics of people who completed the interview and agreed to participate on the panels can also be compared with those who completed the interview but refused to participate on the panels.

- A parallel comparison will be made with respect to answers to the attitudinal questions asked of respondents and nonrespondents during the initial panel recruitment surveys. The distribution of responses to these questions will be evaluated for the two groups (respondents and nonrespondents) and compared with the GSS survey results. The demographic and attitudinal question comparisons will enable us to assess how similar respondents and nonrespondents are to each other and to the general population (except for the non-GSS attitudinal questions).

- Another step that will be taken to evaluate the potential for nonresponse bias is the analysis of estimated values from the preference function as a function of time/sample size. This approach essentially seeks to assess whether the estimated economic values stabilize as additional sample is added over time (see the sketch following this list).
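A minimal sketch of that stability check (hypothetical data; value_estimate is a stand-in for refitting the study’s preference model on each cumulative sample):

```python
import random
import statistics

def value_estimate(responses):
    """Placeholder estimator (a simple mean); the study would instead refit
    the preference (choice) model on each cumulative sample."""
    return statistics.mean(responses)

# Hypothetical respondent-level values, in order of survey completion.
rng = random.Random(42)
responses = [rng.gauss(35.0, 12.0) for _ in range(2000)]

# Re-estimate on successively larger cumulative samples; if the estimates
# stabilize as later, more reluctant respondents are added, that argues
# against substantial nonresponse bias.
for n in (250, 500, 1000, 2000):
    print(n, round(value_estimate(responses[:n]), 2))
```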


After taking these steps, we will evaluate the potential magnitude of nonresponse bias on the
valuation results.
4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as
effective means to refine collections, but if ten or more test respondents are involved, OMB
must give prior approval.
The methodological advance developed in this application is the direct comparison of sample representativeness, and of potential differences in nonmarket valuation estimates, between an RDD-recruited sample (ANES) and an in-person-recruited sample (SRBI) concurrently administered using an Internet mode. This study design holds the majority of survey design and administration variables constant across the two sample recruitment methods. Results of this comparison will add to the currently available information on the effectiveness of using data collected from RDD-recruited Internet-mode surveys.
5. Provide the name and telephone number of individuals consulted on the statistical
aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other
person(s) who will actually collect and/or analyze the information for the agency.
Stratus Consulting Inc. of Boulder, Colorado, was selected by NOAA to conduct the study
through a competitive contract procedure. Mr. David Chapman of Stratus Consulting serves as
the Project Manager, and Dr. Robert Rowe of Stratus Consulting serves as Project Technical
Advisor. Both Dr. Rowe and Mr. Chapman have extensive experience in applied environmental
and natural resource economics involving the use of statistical methods. Contact information
follows:
Mr. David Chapman: 303-381-8289
Dr. Robert Rowe: 303-381-8000
Stratus Consulting hired Professor Emeritus Richard Bishop of the University of Wisconsin,
Department of Agricultural and Applied Economics, to serve as Principal Investigator. Professor
Bishop is a well-known environmental and natural resource economist and has conducted many
applied projects involving the use of statistical methods. Contact information follows:
Professor Richard Bishop: 608-238-7473
Stratus Consulting hired Dr. Roger Tourangeau, Director of the University of Maryland Survey
Research Center, to advise on sampling design issues, including statistical issues in sample
design. Contact information follows:
Dr. Roger Tourangeau: 301-314-7984
Stratus Consulting hired Dr. Barbara Kanninen to advise on experimental design issues. Contact
information follows:
Dr. Barbara Kanninen: 703-536-6949

The rest of the research team includes Norman Meade, Vernon (Bob) Leeworthy, Tony Penn,
and Steve Thur from NOAA.
Peer review team:
Richard Carson, University of California at San Diego
Stanley Presser, University of Maryland
In addition, the team has relied extensively on federal, state, and university coral reef researchers
and managers to develop foundation information for the survey and to check specific facts about
coral reef health and effects of protection mechanisms:
Alan Friedlander, PhD
Fisheries Ecologist, Oceanic Institute, Waimanalo, Hawaii
Representing NOAA’s National Centers for Coastal and Ocean Science
Steven O. Rohmann, PhD
Coral Mapping
NOAA/NOS/Special Projects
Richard Grigg, PhD
Professor of Oceanography
University of Hawaii
Charles Birkeland, PhD
Biologist
University of Hawaii
Paul Jokiel, PhD
Biologist/Coral Ecologist
University of Hawaii
David Gulko, PhD
Biologist/Coral Ecologist
Hawaii Department of Land & Natural Resources
Division of Aquatic Resources
Athleen Clark, PhD
Manager
Hawaii Department of Land & Natural Resources
Division of Aquatic Resources
Kim Holland, PhD
Biologist/Coral Ecologist
University of Hawaii


Mike Hamnett, PhD
Director, Hawaii Coral Reef Initiative Research Program
University of Hawaii
Stratus Consulting has already entered into a contract with KN to recruit for the cognitive one-on-one interviews and to conduct the pretest and the main survey.
Bibliography
Adamowicz, W., D. Dupont, and A. Krupnick. 2004. The value of good quality drinking water to
Canadians and the role of risk perceptions: A preliminary analysis. Journal of Toxicology and
Environmental Health 67:1825-1844.
Adamowicz, W., J. Louviere, and M. Williams. 1994. Combining revealed and stated preference
methods for valuing environmental amenities. Journal of Environmental Economics and
Management 26:271-292.
Adamowicz, W., P. Boxall, M. Williams, and J. Louviere. 1998a. Stated preference approaches
for measuring passive use values: Choice experiments and contingent valuation. American
Journal of Agricultural Economics 80:64-75.
Adamowicz, W.L., P. Boxall, J. Louviere, J. Swait, and M. Williams. 1998b. Stated preference
methods for valuing environmental amenities. In Valuing Environmental Preferences: Theory
and Practice of the Contingent Valuation Method in the US, EC and Developing Countries, I.
Bateman and K. Willis (eds.). Oxford University Press, London, UK, pp. 460-479.
Baker, L., T.H. Wagner, S. Singer, and M.K. Bundorf. 2003a. Use of the Internet and email for
health care information: results from a national survey. Journal of the American Medical
Association 289:2400-2406.
Baker, L.C., M.K. Bundorf, S. Singer, and T.H. Wagner. 2003b. Validity of the survey of health
and the Internet, and Knowledge Network’s panel and sampling. Stanford, CA, Stanford
University, 2003. Available: http://www.knowledgenetworks.com/ganp/reviewer-info.html.
Accessed March 17, 2003.
Batsell, R.R. and J.J. Louviere. 1991. Experimental analysis of choice. Marketing Letters 2:199-214.
Bausell, R.B. and Y. Li. 2002. Power Analysis for Experimental Research. Cambridge
University Press, Cambridge, UK.
Beggs, S.D., N.S. Cardell, and J. Hausman. 1981. Assessing the potential demand for electric cars. Journal of Econometrics 17:1-19.
Breffle, W.S. and R.D. Rowe. 2002. Comparing choice question formats for evaluating natural
resource tradeoffs. Land Economics 78(2).


Breffle, W.S., E.R. Morey, R.D. Rowe, and D.M. Waldman. 2005. Combining stated-choice
questions with observed behavior to value NRDA compensable damages: A case study of
recreational fishing in Green Bay and the Lower Fox River. In The Handbook of Contingent
Valuation, D. Bjornstad, J. Kahn, and A. Alberini (eds.). Edward Elgar Publishing, Northampton,
MA.
Cameron, T. and J.R. DeShazo. 2005. Sample Selection in a Major Consumer Panel: Assessment
and Correction Using Year 2000 Census Tract Characteristics and County-level Presidential
Voting Patterns (draft).
Cameron, T., W.D. Shaw, and S. Ragland. 1999. Nonresponse bias in mail survey data: Salience
vs. endogenous survey complexity. In Valuing the Environment Using Recreation Demand
Models, J.A. Herriges and C.L. Kling (eds.). Edward Elgar Publishing, Northampton, MA, pp.
217-251.
Cattin, P. and D.R. Wittink. 1982. Commercial use of conjoint analysis: A survey. Journal of
Marketing 46:44-53.
Elrod, T., J.J. Louviere, and K.S. Davey. 1992. An empirical comparison of ratings-based and
choice-based conjoint models. Journal of Marketing Research 30:368-377.
Gan, C. and E.J. Luzar. 1993. A conjoint analysis of waterfowl hunting in Louisiana. Journal of
Agricultural and Applied Economics 25(2):36-45.
Green, P.E. and V. Srinivasan. 1990. Conjoint analysis in marketing: New developments with
implications for research and practice. Journal of Marketing October:3-19.
Heckman, J. 1979. Sample selection bias as a specification error. Econometrica 47(1):153-161.
Hensher, D.A. 1994. Stated preference analysis of travel choices: The state of practice.
Transportation 21:107-133.
Holmes, T.P. and W.L. Adamowicz. 2003. Attribute-based methods. In A Primer on Nonmarket
Valuation, P.A. Champ, K.J. Boyle, and T.C. Brown (eds.). Kluwer Academic Publishers,
Dordrecht, pp. 171-220.
Huber, J., W.K. Viscusi, and J. Bell. 2004. The Value of Regional Water Improvements: Further
Evidence. Presented at the Valuation of Ecological Benefits Conference, U.S. Environmental
Protection Agency. October.
Johnson, F.R. and W.H. Desvousges. 1997. Estimating stated preferences with rated-pair data:
Environmental, health, and employment effects of energy programs. Journal of Environmental
Economics and Management 34:79-99.
Johnson, F.R., W.H. Desvousges, E.E. Fries, and L.L. Wood. 1995. Conjoint Analysis of Individual and Aggregate Environmental Preferences. Triangle Economic Research Technical Working Paper No. T-9502, Cary, NC.

Kanninen, B. (ed.). 2007. Valuing Environmental Amenities Using Stated Choice Studies. 1st Edition. Springer, Dordrecht, The Netherlands.
Kline, J. and D. Wichelns. 1996. Measuring public preferences for the environmental amenities
provided by farmland. European Review of Agricultural Economics 23:421-436.
Krupnick A. and M.L. Cropper. 1992. The effects of information on health risks valuations.
Journal of Risk and Uncertainty 5:29-48.
Lareau, T.J. and D.A. Rae. 1998. Valuing WTP for diesel odor reductions: An application of the contingent ranking technique. Southern Economic Journal 55(3):728-742.
Layton, D. and G. Brown. 1998. Heterogeneous Preferences Regarding Global Climate Change.
Presented at NOAA Applications of Stated Preference Methods to Resource Compensation
Workshop, Washington, DC.
Louviere, J.J. 1988. Conjoint analysis modeling of stated preferences. Journal of Transport
Economics and Policy 10:93-119.
Louviere, J.J. 1992. Experimental choice analysis: Introduction and overview. Journal of
Business Research 24:89-95.
Louviere, J.J. 1994. Conjoint Analysis. In Advances in Marketing Research, R. Bagozzi (ed.).
Blackwell Publishers, Cambridge, MA.
Louviere, J.J. and G. Woodward. 1983. Design and analysis of simulated consumer choice or
allocation experiments: An approach based on aggregated data. Journal of Marketing Research
20:350-367.
Louviere, J.J., D.A. Hensher, and J. Swait. 2000. Stated Choice Methods: Analysis and
Application. Cambridge University Press, Cambridge, UK.
Mackenzie, J. 1993. A comparison of contingent preference models. American Journal of
Agricultural Economics 75:593-603.
Magat, W.A., W.K. Viscusi, and J. Huber. 1988. Paired comparison and contingent valuation
approaches to morbidity risk valuation. Journal of Environmental Economics and Management
15:395-411.
Mathews, K.E., W.H. Desvousges, F.R. Johnson, and M.C. Ruby. 1997. Using Economic
Models to Inform Restoration Decisions: The Lavaca Bay, Texas Experience. TER technical
report prepared for presentation at the Conference on Restoration of Lost Human Uses of the
Environment, Washington, DC. May 7-8.
Morey, E.R., T. Buchanan, and D.M. Waldman. 1999a. Happy (hypothetical) Trails to You: The
Impact of Trail Characteristics and Access Fees on a Mountain Biker’s Trail Selection and
Consumer’s Surplus. Working paper, University of Colorado, Boulder.

Morey, E.R., K.G. Rossmann, L. Chestnut, and S. Ragland. 1999b. Estimating E[WTP] for
reducing acid deposition injuries to cultural resources: Using choice experiments in a group
setting to estimate passive-use values. Chapter 10 in Valuing Cultural Heritage: Applying
Environmental Valuation Techniques to Historic Buildings, Monuments and Artifacts, S. Navrud
and R.C. Ready (eds.). Edward Elgar Publishing, Cheltenham, UK and Northampton, MA.
Morikawa T., M. Ben-Akiva, and D. McFadden. 1990. Incorporating Psychometric Data in
Econometric Travel Demand Models. Prepared for the Banff Invitational Symposium on
Consumer Decision Making and Choice Behavior.
Opaluch, J.J., S.K. Swallow, T. Weaver, C.W. Wessells, and D. Wichelns. 1993. Evaluating
impacts from noxious facilities: Including public preferences in current siting mechanisms.
Journal of Environmental Economics and Management 24:41-59.
Orme, B. 1998. Sample Size Issues for Conjoint Analysis Studies. Sawtooth Software Research
Paper Series, Sawtooth Software, Inc.
Rae, D.A. 1983. The value to visitors of improving visibility at Mesa Verde and Great Smoky National Parks. In Managing Air Quality and Scenic Resources at National Parks and Wilderness Areas, R.D. Rowe and L.G. Chestnut (eds.). Westview Press, Boulder, CO, pp. 217-234.
Roe, B., K.J. Boyle, and M.F. Teisl. 1996. Using conjoint analysis to derive estimates of
compensating variation. Journal of Environmental Economics and Management 31:145-150.
Ruby, M.C., F.R. Johnson, and K.E. Mathews. 1998. Just Say No: Assessing Opt-Out Options in a Discrete-Choice Stated-Preference Survey of Anglers. TER Technical Working Paper No. T-9801. Triangle Economic Research, Durham, NC.
Singer, E. 2002. The use of incentives to reduce nonresponse in household surveys. In Survey
Nonresponse, R.M. Groves, D.A. Dillman, J.L. Eltinge, and R.J.A. Little (eds.). Wiley, New
York, pp. 163-178.
Swait, J., W. Adamowicz, and J. Louviere. 1998. Attribute-Based Stated Choice Methods for
Resource Compensation: An Application to Oil Spill Damage Assessment. Prepared for
presentation at the Natural Resources Trustee Workshop on Applications of Stated Preference
Methods to Resource Compensation, Washington, DC. June 1-2.
Viscusi, W.K., W.A. Magat, and J. Huber. 1991. Pricing environmental health risks: Survey
assessments of risk-risk and risk-dollar trade-offs for chronic bronchitis. Journal of
Environmental Economics and Management 21:32-51.
Wittink, D.R. and P. Cattin. 1989. Commercial use of conjoint analysis: An update. Journal of
Marketing 53:91-96.


