Steller Sea Lion Protection Economic Survey
OMB: 0648-0554
SUPPORTING STATEMENT
STELLER SEA LION PROTECTION ECONOMIC SURVEY
OMB CONTROL NO.: 0648-xxxx

A. Justification
1. Explain the circumstances that make the collection of information necessary.
Steller sea lions (Eumetopias jubatus) live in the North Pacific Ocean and consist of two distinct
populations, the Western stock and Eastern stock, which are separated at 144º W longitude. As a
result of large declines in the populations since at least the early 1970s, in April 1990 the Steller
sea lion (SSL) was listed as threatened throughout its range under the Endangered Species Act
(ESA) of 1973 (16 U.S.C. 35). The decline continued through the 1990s for the Western stock in
Alaska, which was declared endangered in 1997, while the Eastern stock remained listed as
threatened. Both the Western and Eastern stocks are also listed as depleted under the Marine
Mammal Protection Act (MMPA) of 1972 (16 U.S.C. 1362). Commercial fishing in Alaska
competes for the same fish species SSLs eat and is believed to be an important factor
contributing to the continued decline of the Western stock population.
The National Marine Fisheries Service (NMFS) is the primary agency responsible for the
protection of marine mammals, including Steller sea lions. Multiple management actions have
been taken (71 FR 1698, 69 FR 75865, 68 FR 204, and 68 FR 24615), and are being
contemplated, by NMFS to protect and aid the recovery of the SSL populations. These actions
differ in: 1) the form they take (limits on fishing to increase the stock of fish available for Steller
sea lions to eat, area restrictions to minimize disturbances, etc.), 2) which stock is helped, 3)
when and how much is done, and 4) their costs. In deciding between these management actions,
policy makers must balance the ESA and MMPA goals of protecting Steller sea lions from
further declines with providing for sustainable and economically viable fisheries under the
Magnuson-Stevens Fishery Conservation and Management Act (P.L. 94-265). Since Steller sea lion protection is
linked to fishery regulations, decision makers must comply with several federal laws and
executive orders in addition to the ESA and MMPA, including: Executive Order 12866 (58 FR
51735) which requires regulatory agencies to consider costs and benefits in deciding among
alternative management actions, including changes to fishery management plans made to protect
Steller sea lions.
Public preferences for providing protection to the endangered Western and threatened Eastern
stocks of Steller sea lions are primarily the result of the non-consumptive value people attribute
to such protection. Little is known about these preferences, yet such information is needed for
decision makers to more fully understand the trade-offs involved in choosing between
alternatives. How much the public is willing to pay for increased Steller sea lion stock sizes or
changes in listing status, as well as preferences for geographic distribution, is information that
can aid decision makers in evaluating protection actions and more efficiently managing and
protecting these resources, but it is not currently known. A general population survey is needed
to collect information that provides insights into public values for protection of Steller sea lions
and the impacts of that protection.


2. Explain how, by whom, how frequently, and for what purpose the information will be
used. If the information collected will be disseminated to the public or used to support
information that will be disseminated to the public, then explain how the collection
complies with all applicable Information Quality Guidelines.
The information collection consists of implementing a mail survey on a sample of U.S.
households. We will mail questionnaires to members of the sample; in addition, we will send
follow-up mailings to encourage response. Among the follow-up efforts will be a telephone
contact with those sample households for whom we have telephone numbers. We will try to
obtain selected survey information during this telephone follow-up to aid in evaluating non-response behavior.
There are three survey instruments that each present one of several possible future trajectories for
Steller sea lion populations: increasing, stable, or decreasing. Since scientists do not currently
know whether current population trends will continue or change in the future, several survey
instruments were developed to enable us to account for this uncertainty. The “decreasing”
version assumes the population of Western stock of Steller sea lions will decrease in the future,
the “stable” version assumes the Western stock population stays at approximately the same size,
and the “increasing” version assumes the Western stock population increases over time. These
three cases span the range of the most likely future scenarios. Treating the future uncertainty
through different survey versions allows us to build the uncertainty directly into the analytic
framework since separate welfare estimates can be generated for each survey version. Except for
the future population projections, the three survey versions are identical and thus will be
discussed generically below. Any non-trivial differences will be highlighted. The follow-up
telephone interview script is also described below.
Mail Questionnaire
Survey responses gathered from the mail questionnaire include information about the following:
a. Public preferences regarding the protection of the Western stock of Steller sea lions.
b. Public preferences regarding the protection of the Eastern stock of Steller sea lions.
c. The factors that affect the public’s preferences for protecting Steller sea lions, such as the
geographic distribution of the two stocks, listing status and population size of the two
stocks, and protection costs.
d. Information on general attitudes toward protecting threatened and endangered species.
Stated preference response data collected through the survey will be used by NMFS to estimate a
preference function for explaining choices between protection programs that differ in the levels
of population sizes, ESA listing status, geographic distribution, and costs. This estimated
function will provide NMFS and the North Pacific Fishery Management Council (NPFMC)
with information on public preferences and values for alternative Steller sea lion protection
programs, and what factors affect these values. This information can then be compared with
program costs and other impacts when evaluating protection alternatives.
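The estimation step described above can be sketched in code. The following is a purely illustrative conditional logit example; the attribute names, coefficient values, and simulated choice data are hypothetical and do not represent the survey's actual design or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attributes for each of 3 alternatives in a choice set:
# column 0: population improvement, column 1: listing-status improvement,
# column 2: added household cost. Values and names are illustrative only.
n_sets = 2000
X = rng.uniform(0.0, 1.0, size=(n_sets, 3, 3))
true_beta = np.array([1.0, 0.8, -1.5])  # cost enters utility negatively

# Simulate choices from a conditional logit model: each respondent picks the
# alternative with the highest utility (systematic part + Gumbel noise).
utility = X @ true_beta + rng.gumbel(size=(n_sets, 3))
choice = utility.argmax(axis=1)

def choice_probs(beta):
    """Logit probability of each alternative within each choice set."""
    v = X @ beta
    v = v - v.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(v)
    return e / e.sum(axis=1, keepdims=True)

def mean_grad(beta):
    """Gradient of the mean negative log-likelihood."""
    p = choice_probs(beta)
    chosen_x = X[np.arange(n_sets), choice]
    expected_x = (p[:, :, None] * X).sum(axis=1)
    return -(chosen_x - expected_x).mean(axis=0)

# Plain gradient descent; the conditional logit log-likelihood is concave,
# so this converges to the maximum-likelihood estimate.
beta_hat = np.zeros(3)
for _ in range(2000):
    beta_hat -= 0.5 * mean_grad(beta_hat)
```

With enough simulated choice sets, `beta_hat` recovers `true_beta` to within sampling error; the ratio of an outcome coefficient to the negative of the cost coefficient then gives a marginal willingness to pay for that outcome, which is the kind of trade-off information the estimated preference function provides to decision makers.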
The following is a discussion of how particular questions in the mail questionnaire will be
ultimately used. Generally, the survey asks respondents for information regarding their
knowledge and opinions of Steller sea lions, other endangered species, other seals and sea lions,

Alaska commercial fisheries, and potential goals and impacts of management options available
to protect the endangered population of Steller sea lions, in addition to standard sociodemographic information needed to classify respondents. It is divided into eight sections.
Section 1: The Issue: Threatened and Endangered Steller Sea Lions
The first section identifies the Steller sea lion as a species protected under the Endangered
Species Act and presents information about the Endangered Species Act (ESA), including
definitions for “endangered” and “threatened” species, which are important to the policy
questions in the survey. The introductory material also presents a breakdown of how many
species are protected under the ESA to help place Steller sea lions in context as one of many
ESA-protected species. Finally, the introduction identifies that the ESA requires reasonable
actions be taken, which begins to motivate the questions about alternative actions to consider.
The section also lists reasons people may care about threatened and endangered species and the
types of costs that result from protecting them.
• Q1 asks how positive or negative the respondent’s reaction is when they think about the
Endangered Species Act. This simple question identifies people’s general feelings
toward endangered species protection. It provides an easy start to the process of thinking
about threatened and endangered species, and it sets a tone of neutrality by allowing
positive and negative reactions right from the start. In initial testing and from the pretest
implementation results, responses to this question were good predictors of how
respondents would answer the stated preference questions (see Appendix).

• To put the issue of protecting threatened and endangered species in the context that there
are many social issues (each with costs), and thus to reduce survey “importance bias”, Q2
asks the respondent whether we are spending too much, about the right amount, or too
little on seven public policy issues. This question repeats a General Social Survey (GSS)
question, which also allows for a comparison of attitudes for survey respondents versus
the GSS survey results. The same question is asked in the telephone survey for non-respondents.

• After providing some general reasons for and against protecting threatened and
endangered species (again providing a neutral perspective), Q3 addresses the importance
to the respondent of general protection of threatened and endangered species, and
whether protecting jobs is more or less important than threatened and endangered species
protection to the respondent. Responses to this question were also found to be correlated
with response patterns to stated choice questions in initial testing (see Appendix).

Section 2: Seals and Sea Lions in the U.S.
To properly value Steller sea lions, it is vital to accurately define the “good” to be valued (i.e.,
the results of Steller sea lion protection in this case) and to provide the context within which it
exists to ensure that respondents fully understand what they are to value. Part of the process of
providing context for the valuation involves discussing the species that may serve as substitutes
in individuals’ minds for Steller sea lions. In focus groups, a natural set of substitutes that
people identified for Steller sea lions was other seals and sea lions that exist near where Steller sea
lions live.

This section provides some facts about seals and sea lions in the U.S., as well as pictures and
facts about the species that reside along the Pacific Coast and in Hawaii. It also illustrates that
some species have recovered after protection actions were taken, demonstrating that such actions
can work, and that the Steller sea lion is one of three seal and sea lion species that are protected
by the ESA.
• Q4 is used to determine whether respondents had prior experience with seals or sea lions,
and aids in encouraging respondents to review the information provided.

Section 3: Some Steller Sea Lion Facts
This brief section introduces several facts about Steller sea lions. This information sets the stage
for the Steller sea lion versus commercial fishing conflicts, as Steller sea lions are large and eat a
lot, don’t migrate (and thus one population will not replace the other), and serve an uncertain role
in the ecosystem.
• Like Q4, Q5 is intended to get respondents to begin thinking about Steller sea lions and
determine whether they are familiar with Steller sea lions prior to the survey.

Section 4: The Western and Eastern Stocks of Steller Sea Lions
This section describes why Steller sea lions are divided into the Western and Eastern stocks,
provides a map identifying where the stocks are located, presents a graphic that illustrates the
population trends of each stock in the past and into the future, and identifies what has been done
to protect Steller sea lions in the past and the current ESA listing and population trend. This and
the next section define the baseline of current and expected future conditions with current
management programs, which is required for proper valuation of alternative levels of protection.
• Q6, which asks whether the respondent has ever lived in or visited areas where the
Western stock lives, is intended to get the individual to review the map that indicates
where the Western and Eastern stocks are and relate the map to their own experiences.

• Respondents are asked how concerned they are about each stock in Q7. This information
serves dual purposes. First, it encourages the respondent to read and understand what is
occurring with each stock, and second, provides information that can be used to check for
consistency of preferences with responses to stated preference questions.

Section 5: Steller Sea Lions and Commercial Fishing
In this section, the link between commercial fishing and Steller sea lions is explained, and the
fishery management actions that make up the status quo protection measures are introduced.
• Q8 asks respondents to indicate how concerned they are about two impacts of protecting
Steller sea lions, lost commercial fishing jobs and higher fish prices. This question is
important because it familiarizes the respondent with the costs of protecting Steller sea
lions to the fishing industry and to households, thus setting up the mechanism through
which individuals would pay for further protection (i.e., the payment vehicle ) in the
4

stated preference questions. Like Q7, this question can be used to assess internal
preference consistency with responses to stated preference questions. Together with Q9,
the question serves the purpose of acknowledging that there are costs to protecting Steller
sea lions and informing the respondent about these costs. This is important for
maintaining a neutral stance regarding protection and minimizing information bias,
particularly in light of the fact that several people in earlier testing did not feel that
protecting Steller sea lions was important.
Section 6: New Steller Sea Lion Protection Actions
This section introduces the idea that more can be done to protect the Western stock, introduces
the payment vehicle, and sets the stage for asking about specific protection alternatives in the
stated preference questions.
Q9 continues the cognitive process of reviewing and responding to elements of the scenario set-up and provides another cross-check to the responses to the stated choice questions (Q10, Q12,
Q13). Respondents are asked to indicate the degree to which they agree or disagree with two
statements, which differ across survey versions. In the “decreasing version,” the first states that
more should be done to ensure the Western stock is no longer endangered even if it costs more
money, while the second states that as long as the Eastern stock recovers, it doesn’t matter if the
Western stock remains endangered. Agreeing with the first statement indicates a willingness to
spend money to protect the Western stock of Steller sea lions. Disagreeing with it suggests
individuals may not choose costly programs to help the species. Agreeing with the second
statement explains why some people may not wish to spend additional money to protect the
Western stock. Disagreement with the second statement suggests a concern for the Western
stock independent of what happens to the Eastern stock. In the other two versions, the first
statement says that more should be done to ensure the Western stock recovers even if it costs
more money, while the second states that as long as the Eastern stock recovers, it doesn’t matter
if the Western stock recovers. As in the decreasing version, agreeing with the first statement
indicates a willingness to spend money to protect the Western stock, while agreeing with the
second may explain why some people may not wish to spend money to protect the Western
stock.
Section 7: What Alternatives Do You Prefer?
This section contains the stated preference questions, which are in a choice experiment, or stated
choice, framework. The section begins with instructions for answering the questions and a
budget reminder. It is followed by the three stated choice questions (Q10, Q12, Q13), an open-ended comment question (Q11), and follow-up questions (Q14, Q15). The information from
these questions will be used to estimate a Steller sea lion protection preference function.
• In each of the three choice questions (Q10, Q12, and Q13), respondents are confronted
with three alternatives that differ in what they do and how much they cost, the current
Steller sea lion protection program (Alternative A), which is the status quo alternative,
and two others that do more and cost more (Alternatives B and C). These alternatives are
described by their expected results with respect to the following attributes:


1. Western stock ESA listing status
2. Western stock total population size
3. Areas where the Western stock will live
4. Eastern stock ESA listing status
5. Eastern stock total population size
6. Added household cost [4]

Respondents are then asked to choose the alternative they most prefer, and which they
least prefer. The status quo is always the first option to make it easy for respondents to
select it (and reduce any unintended bias in selecting alternatives to do more and spend
more), and to allow rank ordering of alternatives B and C relative to the baseline
(Alternative A), which provides statistical efficiency gains over paired choices. Note that
the status quo alternative will differ across the decreasing, stable, and increasing versions
of the survey.
The primary objective of most stated preference studies is to value changes in outcomes.
In our case, the changes in outcomes of interest are population numbers and ESA listing
status (endangered, threatened, and recovered). In some studies, one or more specific
programs are defined to achieve the proposed changes in outcomes to make the valuation
scenario more concrete and realistic for respondents. Respondents then value programs,
thus indirectly valuing the underlying change in outcomes associated with the programs.
However, values for changes in outcomes may be contaminated by respondents’ values
for other perceived positives and negatives about the programs proposed to reach the
outcomes. In this study, in Section 6 of the survey we identify the types of efforts
(programs) that have been and can be used to obtain changes in SSL populations and
status. However, we do not tie specific outcomes to specific programs for two reasons.
First, in our testing, respondents were comfortable directly addressing changes in
outcomes (the ultimate objective) without further complicating the design by specifying
which programs would be required for each of the many scenarios of interest. And
second, separating the outcomes from specific programs allows the policy makers and
resource managers to select the best programs to bring about the desired outcomes
without weakening the application of the valuation results.
The selection of the set of non-cost attribute levels in the experimental design relied
heavily on input from technical reviewers at the Alaska Fisheries Science Center,
particularly scientists who study Steller sea lions. Based on their input and information
from technical reports and published studies, the range of population levels and statuses
was developed to reflect the most reasonable range of outcomes over the next 60 years
for the current and alternative programs under consideration. The cost levels were
selected to cover the range of willingness to pay (WTP), with a sufficiently wide range of costs to include
several cost amounts that are expected to exceed most respondents’ maximum WTP
based on pretest results. The combinations of attribute levels seen in the survey are
determined using efficiency-based statistical design methods.

[4] In earlier cognitive interviews, individuals were specifically asked in what form they believed they would be
paying for Steller sea lion protection programs. The vast majority responded that the added cost in the choice
questions simply represents money out of their pocket, mostly in the form of federal taxes, but also from some
additional expenditures on seafood products.


• Q11 provides respondents space to comment on their answers to Q10. First, it can provide
insights into the individual’s thought process used in answering Q10, and subsequently
help identify valid and invalid responses. Second, it provides the opportunity for
individuals to express how they feel about being asked this type of question. This is
especially important for those that clearly dislike some element of the question. This
comment question is not repeated for other choice questions because experience indicates
little additional information is gained from repeating the question.

• In Q14, respondents are asked to agree or disagree with several statements that are used
to help address several concerns about people’s responses, including: Whether
respondents feel it is their responsibility to pay for Steller sea lion protection at all
(potential protest), whether respondents had enough information to make an informed
choice (the effect of uncertainty on values), whether respondents were paying just for
Steller sea lions or if they believed other species were being protected by the alternatives
considered (potential part-whole embedding), whether respondents believed the federal
government could effectively manage the Steller sea lion protection programs to bring
about the results being valued (potential protest), and whether respondents feel they
should not have to pay more federal taxes for any reason (potential protest).

• Q15 identifies how confident individuals are about their answers to the stated preference
questions. Combined with other answers (such as Q14 item 2), we can evaluate the
impact of uncertainty on valuation results, such as the mean and variance of estimated
values. Results can also be reported with and without respondents who self-report that
they are “not at all confident” in their answers.

Section 8: About You and Your Household
This final section consists of eleven questions, H1 – H11, that collect information about the
respondent and the respondent’s household to be used as explanatory variables in the stated
preference model, for comparing the sample to the population (coverage or sampling bias), and
for comparing respondents to non-respondents (non-response bias). To the extent possible, the
questions and response categories parallel those used by the Census Bureau to allow the most
direct comparisons.
• Socioeconomic and demographic information collected includes gender (H1), age (H2),
household size (H3), employment status (H4), education (H6), household ownership
status (H7), ethnicity (H9), race (H10), and income (H11).

• Respondents are also asked if they, or any family members, have been employed in the
commercial fishing industry (H5) to identify individuals who may view protecting Steller
sea lions as a public bad instead of a public good.

• The number of listed telephone numbers in the household is asked for in H8. This
information is useful for understanding the probability that the household was chosen for
the sample.

Telephone Follow-Up

Following the initial mailing and postcard reminder, we will contact non-respondents by
telephone to encourage them to complete the mail survey [5] and to collect limited information
from those who decide not to participate in the mail survey at all. [6] The information provided by
these non-respondents can be compared with that from respondents to address issues concerning
non-response bias. Selected socioeconomic and demographic questions, along with a few key
attitudinal questions, are asked to statistically test whether non-respondents differ from
respondents with respect to these characteristics. The attitudinal questions include a version of
Q1 from the mail questionnaire. Responses to this question have been shown to be correlated to
responses to stated preference questions (see Appendix). A question used in the General
Social Survey (GSS) is also included to enable comparison of non-respondents with a large,
readily-available statistical survey estimate generally regarded as representative of the general
U.S. population. This information can be used to evaluate and adjust the results for potential
non-response bias among sample members.
3. Describe whether, and to what extent, the collection of information involves the use of
automated, electronic, mechanical, or other technological collection techniques or other
forms of information technology.
The pretest survey will not utilize any specialized information technology.
4. Describe efforts to identify duplication.
The economics literature was consulted extensively to identify studies that valued Steller sea
lions. To date, there has only been one effort, aside from the proposed data collection, to provide
economic value information for Steller sea lions. During the summer and fall of 2000, a
contingent valuation-based [7] Steller sea lion survey was conducted by the University of Alaska
Fairbanks (UAF). The study’s results are reported in Turcin (2001), Giraud, Turcin, Loomis,
and Cooper (2002), and Turcin and Giraud (2003). There are several deficiencies in the survey
instrument that limit the usefulness of the resulting welfare estimates for Steller sea lion
protection. Four of the main shortcomings of the survey are the following:
1. The public good being valued is the additional protection (above the then-current level of
protection) provided by a single “Expanded Federal Steller Sea Lion Recovery Program”
that would result in the recovery of the Western stock to some unspecified population
level, in some unspecified locations, at some unspecified time period (and without
consideration of the concurrent status of the Eastern stock). Because the projected
baseline for the Western stock without additional protection efforts is poorly specified
(and does not comport with current projected baselines), the protection program results
are imprecisely defined, and do not consider many of the policy attributes of real concern
(such as the Eastern stock status, and alternative Western stock listing status targets such
as threatened, population size, and locations), the resulting welfare estimate is difficult to
interpret and has limited usefulness for policy purposes.

[5] Those needing a replacement survey will be mailed one following the telephone interview.
[6] In the telephone follow-up, a limited amount of information will also be collected from those agreeing to return
the mail survey.
[7] Contingent valuation is a survey-based economic technique for the valuation of non-market resources, typically
environmental areas.


2. The information presentation has important limitations. The distinction between all (both
Western and Eastern populations of) Steller sea lions and the Western stock of Steller sea
lions is blurred, as the terms “western population” and “Steller sea lion” are used
interchangeably. Additionally, the threatened status of the Eastern stock, and the Eastern
stock generally, is not mentioned despite the potential substitution relationship between
the populations. This brings into question the proper interpretation of the estimated
economic value (whether the values are significantly biased upward, as our focus groups
and cognitive interviews suggest).
3. Substitution reminders are not provided. In particular, Steller sea lion population trends
are not put into context with respect to other species, which may be problematic if people
view other marine mammal species as substitutes for the Steller sea lion. The absence of
this contextual background brings into question the validity of responses to the valuation
question.
4. The survey instrument does not reflect a state-of-the-art design. It uses small font sizes,
employs large and complicated text passages, and has numerous typos that may cause
respondents to skip important information or lose interest.
Although there is only one existing survey effort to understand the value of Steller sea lions,
there are numerous examples of studies conducted to estimate the non-consumptive value of
other endangered species and marine mammals. Examples include Bosetti and Pearce (2003),
Langford, et al. (2001), Jakobsson and Dragun (2001), Fredman (1995), Hagen, et al. (1992),
among others. All these studies utilized contingent valuation methods. As a result, they are
unable to fully analyze marginal values of attributes of the species protection. The proposed
study departs from those in the existing literature in its use of a stated choice framework that
allows marginal values of attributes of protection programs to be estimated (a more detailed
literature review is included in this submission). This added information should provide decision
makers with better information about how much the public would benefit from programs that
lead to differing results, and thus represents a flexible tool for management.
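As a sketch of what “marginal values of attributes” means in practice: in a stated choice model, the ratio of an estimated attribute coefficient to the cost coefficient gives a marginal willingness to pay (WTP) for that attribute. The coefficient names and values below are purely hypothetical, for illustration only:

```python
# Hypothetical conditional logit coefficients; not estimates from any actual survey.
beta = {
    "western_population": 1.0,   # utility per unit improvement in Western stock size
    "western_recovered": 0.8,    # utility of the Western stock reaching "recovered" status
    "cost": -1.5,                # utility per added dollar of annual household cost
}

# Marginal WTP for an attribute: the dollar amount a household would trade
# for a one-unit change in that attribute, holding utility constant.
wtp = {name: -coef / beta["cost"] for name, coef in beta.items() if name != "cost"}
print(wtp)  # e.g., WTP for "western_recovered" is 0.8 / 1.5, about 0.53
```

Because each attribute gets its own coefficient, this framework values each protection outcome separately, which is exactly what a single-program contingent valuation estimate cannot do.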
5. If the collection of information involves small businesses or other small entities,
describe the methods used to minimize burden.
The collection does not involve small businesses or other small entities.
6. Describe the consequences to Federal program or policy activities if the collection is not
conducted or is conducted less frequently.
If the collection is not conducted, the North Pacific Fishery Management Council (NPFMC) and
NMFS will have to rely on the 2000 UAF survey for information on public values for Steller sea
lion protection to consider along with other important information in decisions about Steller sea
lion management alternatives. As noted above, this survey has several major deficiencies that
bring into question the accuracy and utility of the results. Importantly, the UAF results have
limited application for incorporating public preferences and values concerning marginal trade-offs between management alternatives since the estimated public value is associated with a
single management alternative.
7. Explain any special circumstances that require the collection to be conducted in a
manner inconsistent with OMB guidelines.

The collection is consistent with Office of Management and Budget (OMB) guidelines.
8. Provide a copy of the PRA Federal Register notice that solicited public comments on the
information collection prior to this submission. Summarize the public comments received
in response to that notice and describe the actions taken by the agency in response to those
comments. Describe the efforts to consult with persons outside the agency to obtain their
views on the availability of data, frequency of collection, the clarity of instructions and
recordkeeping, disclosure, or reporting format (if any), and on the data elements to be
recorded, disclosed, or reported.
A Federal Register notice (see Attached) solicited comments on the information collection. A
subsequent correction was published to provide additional details of the survey and clarify its
scope and purpose. A number of comments were received in response to the Federal Register
notice and correction. About 850 form letter-style e-mails were received that expressed support
for protecting Steller sea lions. These letters urge NOAA to increase Steller sea lion protection
actions. Since they did not specifically address any aspect of the data collection, no formal
response was made. Several other comments involved questions about the scope and purpose of
the data collection, as well as specific questions about the population from which the sample
would be drawn. These comments were responded to individually, but a correction notice was
deemed necessary to avoid additional comments along these lines and to generally clarify these
issues. Another commenter provided some opinions about stated preference valuation methods
and economic preferences in the context of the survey and implored the agency to be explicit
about the possible limitations of valuation techniques in measuring economic preferences for
Steller sea lion protection. A response was sent to this individual thanking him for the comments
and providing assurances that the assumptions made in the analysis and limitations of the results
would be made clear in reporting the results to avoid misuse.
There were also several requests for copies of the survey instrument. Draft versions of the
survey instrument were provided to several individuals for review purposes. Two sets of
comments were received within the comment period. The first was provided by Dr. Richard
Wallace, a Professor of Environmental Studies at Ursinus College. The second set of comments
was received from the Humane Society of the U.S. Additional comments from the Marine
Mammal Commission were received after the official comment period closed. These
comments and the corresponding responses are included in this submission.
The survey instrument and implementation plan have benefited from input and guidance from
numerous individuals outside the Agency. Dr. David Layton, Associate Professor of Public
Affairs, University of Washington, and Dr. Robert Rowe of Stratus Consulting, Inc., a leading
economics consulting firm, have been principal participants in the design and testing of the
survey instrument. Both have extensive experience in designing and testing economic surveys of
non-market goods. Dr. Roger Tourangeau, Senior Research Scientist at the Survey Research
Center of University of Michigan and Director of the Joint Program in Survey Methodology at
the University of Maryland, reviewed the survey instrument and provided guidance on survey
administration. Dr. Gardner Brown, Professor Emeritus of Economics, University of
Washington provided input on the survey instrument design and content, and participated in
some pretesting activities. Dr. Richard Bishop, Professor Emeritus of Agricultural and Applied
Economics at the University of Wisconsin and senior consultant with Stratus Consulting, and Dr.
Vic Adamowicz, Professor of Rural Economy at the University of Alberta, reviewed and
commented on the survey design and stated preference questions. Dr. David Chapman of Stratus
Consulting contributed to the design of the survey instrument through his involvement
moderating focus groups and conducting cognitive interviews to test the survey instrument.
In addition, the survey instrument presents the latest information on Steller sea lions, current
population trends, alternative management options, and likely impacts of management options.
To ensure that the information is as accurate as possible, numerous Steller sea lion researchers,
fisheries biologists, and other researchers reviewed the survey instrument. In fact, the survey
instrument underwent significant review internally by a committee of NMFS biologists and
fisheries researchers that included Dr. Doug DeMaster, Dr. Rich Ferrero, Dr. Pat Livingston, Dr.
Tom Gelatt, Mr. Lowell Fritz, and Dr. Ron Felthoven. Additional review was provided by Dr.
Tom Loughlin and Dr. Libby Logerwell of NMFS, Dr. Bill Wilson of the North Pacific Fishery
Management Council (NPFMC), and Mr. Chris Oliver, the executive director of the NPFMC.
9. Explain any decisions to provide any payment or gift to respondents, other than
remuneration of contractors or grantees.
Inclusion of an incentive acts as a sign of goodwill on the part of the study sponsors and
encourages reciprocity of that goodwill by the respondent. Singer (2002) provides a
comprehensive review of the use of incentives in surveys. She notes that giving respondents a
small financial incentive (even a token amount) in the first mailing increases response rates in
mail-based surveys and is cost-effective. Such prepaid incentives are more effective than larger
promised incentives that are contingent on completion of the questionnaire. In tests conducted
by Lesser, et al. (1999), including a $2 incentive in a mailing with four contact points was shown
to increase response rates by an additional 19 to 31 percentage points. Thus, even a small
upfront incentive typically is more cost effective than additional follow-up steps that are often
considered.
To encourage participation in the mail survey, an honorarium of $10 will be given to the
participants in the initial mailing. Results from the pilot pretest implementation (conducted
under OMB Control No.: 0648-0511) indicated that the $10 incentive led to a statistically higher
response rate compared to the $2 and $5 treatments at the 1% and 10% levels, respectively.
Moreover, the $10 incentive was the only one to achieve a response rate over 50%, which will be
critical to make the results more defensible in the professional peer review process. See the
Appendix for details of the response rates achieved using different monetary incentive amounts.
There are several reasons why we believe inclusion of both a financial incentive and follow-up
contacts will be needed to reach desired response rates. First, the survey is about an unfamiliar
issue to many Americans. As such, the chance that respondents will not be motivated to
complete the survey is higher than for a survey on a more familiar subject (such as a survey of
licensed anglers about managing local fishing sites). Second, although every attempt is being
made to ensure the survey is easy to read, understand, and complete, the amount of information it
needs to present and the number of questions it needs to ask contribute to a 16 page survey
requiring more respondent attention than some surveys. For these reasons, we expect both
incentives and follow-up contacts will be required to obtain a suitable response rate and to
evaluate potential non-response biases.
10. Describe any assurance of confidentiality provided to respondents and the basis for
assurance in statute, regulation, or agency policy.
In the cover letter accompanying each mailing, respondents will be told that their name and
address information will be kept separate from their responses and that only their responses will
be given to researchers. The cover page of the survey will also include the following statement:
Your name and address will be kept separate from your responses. Only your responses will
be delivered to the researchers for analysis.
A similar statement is made in the telephone survey. Following completion of the data
collection, the survey firm will delete any information identifying individuals (i.e., name and
addresses) before any data file is delivered to NMFS or any other participating researchers and
agencies.
11. Provide additional justification for any questions of a sensitive nature, such as sexual
behavior and attitudes, religious beliefs, and other matters that are commonly considered
private.
There are no questions of a sensitive nature asked in the survey.
12. Provide an estimate in hours of the burden of the collection of information.
The mail survey will be sent to a random sample of approximately 5,000 addresses. The random
sample will be purchased from Survey Sampling, International.8 Based on previous experience,
up to 15% of these types of samples can be expected to be bad or unusable addresses, which
means the number of households receiving the survey will be approximately 4,250. We expect a
final response rate of at least 57 percent (of the valid sample) based on the pilot pretest
implementation results, leading to over 2,423 responding households returning completed
surveys. For computing burden hours, we assume no more than 2,500 households will respond:
1,750 completed from the initial mailing and postcard reminder, 350 completed following
contacts via phone, and 300 completed following the second full mailing.9 The cover letter will
solicit the participation of an adult head of the household to complete the survey. While our
experience has been that respondents typically complete the survey in 20 to 25 minutes, we
assume 30 minutes to conservatively compute the potential burden hours. As a result, those
ultimately completing the survey are expected to contribute up to 1,250 hours to the overall hour
burden.
8 We collected information about the national sampling frames of several candidate vendors,
including Acxiom, Experian, Survey Sampling International, and Genesys. All had high
population coverage rates (85% to 95%) but varied in the methods used to assemble lists and in
the percent of their population with telephone numbers. Of the vendors evaluated, only SSI did
not remove households from their sampling frame that were in the National Do Not Call
Registry (which does not apply to research surveys). As a result, SSI was the vendor chosen;
this was also the vendor used in the pretest implementation. Their general population samples
are generated from telephone listings and other proprietary databases, and updated with the
USPS Delivery Sequence File and National Change of Address (NCOA) databases. The
database has approximately 85% coverage of all U.S. households.

9 The calculations for numbers of responses by survey stage are conservatively estimated based
on achieved response rates in the pilot pretest implementation (see the Appendix).

Following the initial mailing and postcard reminder, we expect approximately 70% of all
expected completes, or 1,750 households, to have returned completed surveys. Households that
have not responded after the initial mailing and postcard reminder will be contacted by
telephone, encouraged to complete and return the survey, and asked to answer a few questions,
even if they indicate they will be returning the survey. Thus, the telephone follow-up serves the
dual purpose of increasing the number of mail responses and gathering information by telephone
needed to estimate the impact of non-response. Households that need a replacement
questionnaire will be identified and sent a new one. The phone interview is expected to take 6
minutes on average to complete, and we expect to attempt to reach and complete interviews with
at most 50% of the 2,500 potential respondents remaining after the initial mailing and postcard
reminder, or up to 1,250 individuals, for a total of 125 hours.10
The final contact is a second full mailing to all households that were not successfully contacted
in the telephone interview, could not be contacted in the telephone interview due to an invalid or
missing telephone number, or were interviewed and indicated they would return the survey but
had not. While the telephone follow-up is expected to lead to 350 completed surveys, this final
mailing is expected to lead to 300 completed surveys.
The total number of unique respondents to all survey contacts will be 3,400, including those who
complete only the short telephone interview. This number consists of respondents who return
the questionnaire (2,500) and respondents who do not return the questionnaire but do provide
some survey information during the telephone contact (900). This assumes that 20% of the
sample, or 850 households, will be unreachable in the phone contacts and will not return a
completed survey. The total hour burden is estimated to be 1,325 hours.

Survey instrument                             Estimated number   Estimated time per     Estimated total annual
                                              of respondents     respondent (minutes)   burden hours (hours)
Mail survey (from initial mailing
  and postcard reminder)                      1,750              30                     875
Mail survey (from phone contacts)             350                30                     175
Follow-up phone survey                        1,250 (a)          6                      125
Second mail survey                            300                30                     150
Total                                         3,400 (b)                                 1,325

(a) Number of successful phone contacts of households that have not returned completed
surveys following the initial mailing and postcard reminder.
(b) Total respondents reflects the total sample size minus the households that do not complete
the mail survey or phone interview.
10 Although we will attempt to reach all households in the sample that have not returned a
completed survey to this point, we do not expect to be able to reach more than 1,250 in a timely
and affordable manner.
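The burden totals above follow directly from the assumed completes and per-response times. A minimal arithmetic check of those figures can be sketched as follows (the stage names are ours; the counts and minutes are taken from the table):

```python
# Illustrative check of the burden-hour arithmetic in the table above.
# Counts and per-response minutes come from the table; the stage names are ours.
stages = {
    "mail_initial": (1750, 30),     # completes from initial mailing + postcard reminder
    "mail_after_phone": (350, 30),  # completes following phone contacts
    "phone_followup": (1250, 6),    # short non-response telephone interview
    "mail_second": (300, 30),       # completes from second full mailing
}

# Burden hours per stage = respondents * minutes / 60
burden_hours = {name: n * minutes / 60 for name, (n, minutes) in stages.items()}
total_hours = sum(burden_hours.values())

print(burden_hours["mail_initial"])  # 875.0
print(total_hours)                   # 1325.0
```

The same arithmetic reproduces each row of the table (875 + 175 + 125 + 150 = 1,325 hours).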


13. Provide an estimate of the total annual cost burden to the respondents or recordkeepers resulting from the collection (excluding the value of the burden hours in #12
above).
No additional cost burden will be imposed on respondents aside from the burden hours indicated
above.
14. Provide estimates of annualized costs to the Federal government.
Annual cost to the Federal government of the pretest is approximately $200,000 divided as
follows: $195,000 in contract award money and $5,000 in staff time and resources. Contractor
services include conducting the survey implementation.
15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14
of the OMB Form 83-I.
This is a new collection, and is thus a program change. Reasons for this collection were outlined
in Items 1 and 2.
16. For collections whose results will be published, outline the plans for tabulation and
publication.
An NMFS processed report is planned that documents the sampling procedures and response rates
and provides statistical summaries (i.e., means, variances, and frequency distributions) of data
collected in the survey. This report is not expected to receive outside peer review. However,
internal reviews will be done.
The econometric analysis of the stated preference choice experiment data will be reported in one
or more papers that will be submitted for publication at leading peer-reviewed environmental
economics journals, such as the Journal of Environmental Economics and Management, Land
Economics, or Environmental and Resource Economics.
17. If seeking approval to not display the expiration date for OMB approval of the
information collection, explain the reasons why display would be inappropriate.
This item is not applicable, as the expiration date for OMB approval of the information
collection will be shown on the survey.
18. Explain each exception to the certification statement identified in Item 19 of the OMB
Form 83-I.
There are no exceptions to Item 19 of the OMB Form 83-I.


B. Collections of Information Employing Statistical Methods
1. Describe (including a numerical estimate) the potential respondent universe and any
sampling or other respondent selection method to be used. Data on the number of entities
(e.g. establishments, State and local governmental units, households, or persons) in the
universe and the corresponding sample are to be provided in tabular form. The tabulation
must also include expected response rates for the collection as a whole. If the collection has
been conducted before, provide the actual response rate achieved.
The potential respondent universe is all U.S. households (approximately 106 million according to
the 2000 Census). A stratified random sample of approximately 800 Alaska households and
4,200 non-Alaska U.S. households will be used. Alaskan households are oversampled to ensure
the inclusion of their preferences, since they are potentially more directly affected by actions to
protect Steller sea lions and are likely to have more familiarity with them. The non-Alaska U.S. household sample is larger, recognizing the importance of sample size
considerations for the ultimate goal of generating reliable national estimates.
For the collection as a whole, a response rate of approximately 57% is anticipated. This is the
response rate achieved for the pilot pretest implementation treatment employing a $10 monetary
incentive (see Appendix).
2. Describe the procedures for the collection, including: the statistical methodology for
stratification and sample selection; the estimation procedure; the degree of accuracy
needed for the purpose described in the justification; any unusual problems requiring
specialized sampling procedures; and any use of periodic (less frequent than annual) data
collection cycles to reduce burden.
The survey will use a stratified random sample of approximately 5,000 households purchased
from a professional sampling vendor (see footnote 4). The population is stratified into Alaska
and non-Alaska households with the Alaska household stratum consisting of approximately 800
households and the non-Alaska stratum consisting of approximately 4,200 households. The
advance letter and cover letter accompanying the initial mailing will solicit the participation of a
male or female head of household to complete the survey.
For each stratum, a sample of households will be purchased. Up to 15% of the purchased sample
may be invalid, leading to valid samples of 680 and 3,570, respectively, for the two strata.
Survey responses will be used to statistically estimate a valuation model using a random utility-based multinomial choice model to assess the statistical significance of the set of attributes as
contributors to the respondent’s preferences for protecting Steller sea lions. Given the expected
response rates, the sample sizes described above should be sufficiently large for this modeling
and for data analysis generally. Assuming a conservative sample size estimate of 2,100
respondents, each providing three stated preference choice question responses (i.e., responses to
Q10, Q11, and Q12), there will be 6,300 (non-independent) observations. This provides a very
large number of observations with which to estimate the valuation function. To our knowledge, this
sample size exceeds most, if not all, sample sizes for peer reviewed public good valuation
studies. Summary statistics (means, medians, standard deviations, minimums, and maximums)
will be calculated for responses to questions as well.
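The random utility model described above assigns each alternative a utility that is linear in its attributes, and the conditional logit form gives each alternative a choice probability proportional to the exponential of that utility. A minimal sketch of those probabilities follows; the attribute names ("pop_growth", "cost") and coefficient values are hypothetical illustrations, not estimates from this study:

```python
import math

# Conditional (multinomial) logit choice probabilities for one choice question.
# P(choose j) = exp(V_j) / sum_k exp(V_k), with linear utility V = beta . x.
def choice_probabilities(alternatives, beta):
    v = [sum(beta[a] * x[a] for a in beta) for x in alternatives]
    m = max(v)                                # subtract max for numerical stability
    expv = [math.exp(vi - m) for vi in v]
    total = sum(expv)
    return [e / total for e in expv]

# Hypothetical coefficients: positive taste for sea lion population growth,
# negative taste for household cost.
beta = {"pop_growth": 0.8, "cost": -0.05}

alternatives = [
    {"pop_growth": 0.0, "cost": 0.0},         # status quo
    {"pop_growth": 1.0, "cost": 10.0},        # moderate protection
    {"pop_growth": 2.0, "cost": 40.0},        # aggressive protection
]

probs = choice_probabilities(alternatives, beta)
print([round(p, 3) for p in probs])           # three probabilities summing to 1
```

In estimation, the log of the probability of each observed choice is summed over the roughly 6,300 observations and maximized with respect to the coefficients.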
3. Describe the methods used to maximize response rates and to deal with nonresponse.
The accuracy and reliability of the information collected must be shown to be adequate for
the intended uses. For collections based on sampling, a special justification must be
provided if they will not yield “reliable” data that can be generalized to the universe
studied.
Numerous steps have been, and will be, taken to maximize response rates and deal with non-response behavior. These efforts are described below.
Maximizing Response Rates
The first step in achieving a high response rate is to develop an appealing questionnaire that is
easy for respondents to complete. Significant effort has been spent on developing a good survey
instrument. Experts on economic survey design and stated preference techniques were hired to
assist in the design and testing of the survey. The current survey instrument has also benefited
from input on earlier versions from several focus groups and one-on-one interviews (verbal
protocols and cognitive interviews), and peer review by experts in survey design and non-market
valuation, and by scientists who study Steller sea lions, other marine mammals, and fisheries. In
the focus groups and interviews, the information presented was tested to ensure key concepts and
terms were understood, figures and graphics (color and black and white) were tested for proper
comprehension and appearance, and key economic and design issues were evaluated. In
addition, cognitive interviews were used to ensure the survey instrument was not too technical,
used words people could understand, and was a comfortable length and easy to complete. The
result is a high-quality and professional-looking survey instrument.
The implementation techniques that will be employed are consistent with methods that maximize
response rates. Implementation of the mail survey will follow the Dillman Tailored Design
Method (2000), which consists of multiple contacts. The specific set of contacts that will be
employed is the following:
i. An advance letter notifying respondents a few days prior to the questionnaire
arriving. This will be the first contact for households in the sample.
ii. An initial mailing sent a few days after the advance letter. Each mailing will contain
a personalized cover letter, questionnaire, and a pre-addressed stamped return
envelope. The initial mailing will also include a $10 incentive.
iii. A postcard follow-up reminder to be mailed 5-7 days following the initial mailing.
iv. A follow-up phone call to encourage response. Individuals needing an additional
copy of the survey will be sent one with another cover letter and return envelope.
v. A second full mailing will be sent using USPS certified mailing to all individuals
who have not returned the survey to date, including individuals who we were unable
to contact in the first phone interview.
Non-respondents
To better understand why non-respondents did not return the survey and to determine if there are
systematic differences between respondents and non-respondents, those contacted in follow-up
phone call(s) and identified as non-respondents will be asked a few questions to gauge their
reasons for not responding to the mail survey. These include select socioeconomic and
demographic classification questions and a few attitudinal questions. Information collected from
non-respondents will aid in improving the survey implementation and in correcting for
non-response bias.
Specific steps that will be employed to assess the presence and extent of non-response bias are
the following:
• As a first step, demographic characteristics collected from respondents and non-respondents
will be used in two comparisons: a comparison of respondents to non-respondents and a
comparison of respondents to U.S. Census data. For respondents, age, gender, income, and
education information will be available from the completed survey. The same information will
be available from non-respondents who participate in the telephone interview. A comparison of
the demographic differences may indicate how respondents and non-respondents differ with
respect to these characteristics. We will also compare demographic information for survey
respondents with U.S. Census data to evaluate sample representativeness on observable data.

• A parallel type of comparison will be made with respect to answers to the attitudinal
questions asked of respondents and non-respondents. One of these questions is the General
Social Survey question (Q2 in the mail surveys and Q1 in the telephone interview). The
distribution of responses to this question by respondents and non-respondents will be evaluated
for the two groups and compared with the GSS survey results for the most recent occurrence of
this question. Q1 in the mail surveys and Q2 in the telephone interview are the same and thus
allow another means to compare respondents and non-respondents. The demographic and
attitudinal question comparisons will enable us to assess how similar respondents and
non-respondents are to each other and to the general population (except for the non-GSS
attitudinal questions).

• Another step that will be taken to evaluate the potential for non-response bias will be the
analysis of estimated values from the preference function as a function of time/sample size.
This approach essentially seeks to assess whether the estimated economic values stabilize as
additional sample is added over time. In some surveys, estimated economic values (i.e.,
willingness to pay) decrease for respondents who return the survey later, perhaps reflecting that
early responders may be more interested in the topic and thus have higher values. By analyzing
how WTP changes across response waves, we can evaluate the potential presence and
significance of this effect on population-wide estimates.

After taking the steps above, we will evaluate the potential magnitude of non-response bias on
the valuation results. If the potential is large, we will evaluate additional actions, such as
employing the approach of Cameron, Shaw, and Ragland (1999) (or newer approaches along
these lines) to explicitly account for sample selection in the model estimates. Their approach
extends the general Heckman (1979) sample selection bias correction model to the specific case
of mail survey non-response bias. The approach involves using zip code level Census data as
explanatory variables in the sample selection decision to explain an individual's propensity to
respond to the survey.
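The intuition behind such corrections can be shown with a much simpler device: weighting respondents by the inverse of their group's response rate so that under-responding groups count more. The sketch below is illustrative only; the groups and counts are hypothetical, and the Cameron, Shaw, and Ragland (1999) approach estimates a full response-propensity (selection) equation from zip code level Census covariates rather than using simple group rates:

```python
# Simplified illustration of a non-response adjustment via inverse response-rate
# weights. Group labels and counts are hypothetical.
sampled = {"younger": 400, "older": 600}     # valid sample by group
responded = {"younger": 160, "older": 420}   # completed surveys by group

# Weight = 1 / (group response rate) = sampled / responded
weights = {g: sampled[g] / responded[g] for g in sampled}
print(weights)  # younger weight is 2.5; older is about 1.43

# Weighted respondents reproduce the sampled group totals, restoring the
# original group shares in the analysis sample.
weighted_total = {g: responded[g] * weights[g] for g in sampled}
print(weighted_total)
```

A propensity-based correction generalizes this idea: the estimated probability of responding, as a function of observable covariates, plays the role of the group response rate.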

4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as
effective means to refine collections, but if ten or more test respondents are involved OMB
must give prior approval.
Several focus groups with fewer than ten members of the general public were conducted during
the survey design phase (prior to the formal pretest) to test concepts and presentation of elements
of the survey. These focus groups were conducted in Seattle and Denver. The survey instrument
was then further evaluated and revised using input from one-on-one interviews conducted in
Anchorage, Denver, Sacramento, and Rockville (Maryland). Both verbal protocol (talk aloud)
and self-administered interviews were conducted, both with follow-up debriefing by team
members. Moreover, the survey design and implementation plan have benefited from reviews
conducted by academics with expertise in economic survey design and implementation.
More recently, a focus group held in Seattle was conducted to further evaluate the changes made
to the survey instrument since the formal pretest.
5. Provide the name and telephone number of individuals consulted on the statistical
aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other
person(s) who will actually collect and/or analyze the information for the agency.
Several individuals were consulted on the statistical aspects of the design:
Dr. David Layton
Associate Professor of Public Affairs
University of Washington
(206) 324-1885
Dr. Robert Rowe
Chairman of the Board
Stratus Consulting, Inc.
(303) 381-8000
Dr. Roger Tourangeau
Director, Joint Program in Survey Methodology
University of Maryland and
Senior Research Scientist, Survey Research Center
University of Michigan
Dr. Dan Lew
Economist
NOAA Fisheries
(206) 526-4252
Dr. David Layton, Dr. Robert Rowe, Dr. William Breffle (Stratus Consulting) and Dr. Dan Lew
will be involved in the analysis of the data.
PA Consulting conducted the pilot pretest implementation under OMB Control No.: 0648-0511,
but no contractor has been selected for the full implementation yet.
References:

Bosetti, V. and Pearce, D. (2003). “A study of environmental conflict: the economic value of
Grey Seals in southwest England.” Biodiversity and Conservation 12: 2361-2392.

Cameron, Trudy A., W. Douglass Shaw, and Shannon R. Ragland (1999). “Nonresponse Bias in
Mail Survey Data: Salience vs. Endogenous Survey Complexity.” Chapter 8 in Valuing
Recreation and the Environment: Revealed Preference Methods in Theory and Practice, Joseph
A. Herriges and Catherine L. Kling (eds.), Northampton, Massachusetts: Edward Elgar
Publishing.

Dillman, D.A. (2000). Mail and Internet Surveys: The Tailored Design Method. New York:
John Wiley & Sons.

Fredman, P. (1995). “The existence of existence value: a study of the economic benefits of an
endangered species.” Journal of Forest Economics 1(3): 307-328.

Giraud, K., Turcin, B., Loomis, J., and Cooper, J. (2002). “Economic benefits of the protection
program for the Steller sea lion.” Marine Policy 26(6): 451-458.

Hagen, D., Vincent, J., and Welle, P. (1992). “Benefits of preserving old-growth forests and the
spotted owl.” Contemporary Policy Issues 10: 13-25.

Heckman, James J. (1979). “Sample Selection Bias as a Specification Error.” Econometrica
47(1): 153-162.

Jakobsson, K.M. and Dragun, A.K. (2001). “The worth of a possum: valuing species with the
contingent valuation method.” Environmental and Resource Economics 19: 211-227.

Langford, I.H., Skourtos, M.S., Kontogianni, A., Day, R.J., Georgiou, S., and Bateman, I.J.
(2001). “Use and nonuse values for conserving endangered species: the case of the
Mediterranean monk seal.” Environment and Planning A 33: 2219-2233.

Lesser, V., Dillman, D.A., Lorenz, F.O., Carlson, J., and Brown, T.L. (1999). “The influence of
financial incentives on mail questionnaire response rates.” Paper presented at the meeting of the
Rural Sociological Society, Portland, OR.

Singer, E. (2002). “The use of incentives to reduce nonresponse in household surveys.” In
Survey Nonresponse, ed. R. Groves, D. Dillman, J. Eltinge, and R. Little, pp. 163-78. New
York: John Wiley & Sons.

Turcin, B. (2001). “Dichotomous choice contingent valuation willingness to pay estimates
across geographically nested samples: case study of Alaskan Steller sea lion.” Master’s thesis,
University of Alaska, Fairbanks.

Turcin, B. and Giraud, K. (2003). “Motivations in willingness to pay estimates across
geographically nested samples: case study of Alaskan Steller sea lion.” Working paper,
Department of Resource Economics and Development, University of New Hampshire.

APPENDIX

Some Formal Pilot Pretest Results
Monetary incentives and response rates
Under OMB Control No. 0648-0511, two pilot pretests were conducted to test survey protocols,
with particular emphasis on determining the effect on response rates of three monetary incentive
amounts: $2, $5, and $10. The first pilot survey employed the $2 and $5 treatments, while the
second pilot survey (conducted under a worksheet change) used a $10 incentive for all
respondents.

Total response rates (calculated as the number of completes over the total eligible respondents)
for each treatment are listed in Table A-1.

Table A-1. Response Rates by Incentive Amount

Incentive Amount    Response Rate    (Total eligible sample size)
$2                  34.9%            (192)
$5                  49.0%            (200)
$10                 57.0%            (142)

Statistical tests of differences between the response rates of the three treatments suggest that the
$5 and $10 treatment response rates are significantly larger than the $2 treatment's, with
corresponding p-values of 0.00235 and 0.000281, respectively (for a one-sided statistical test
with a null hypothesis of equal response rates). In addition, the $10 treatment response rate is
statistically different from the $5 treatment response rate at the 10% level (p-value of 0.0711).
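These comparisons are consistent with a one-sided pooled two-proportion z-test. The sketch below reproduces the $5-versus-$2 comparison from Table A-1, with the numbers of completes inferred by applying the reported rates to the eligible sample sizes (67 of 192 and 98 of 200):

```python
import math

# One-sided pooled two-proportion z-test of H0: p1 = p2 against H1: p2 > p1.
def one_sided_prop_test(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal probability
    return z, p_value

# $2 treatment: ~67 completes of 192 (34.9%); $5 treatment: 98 of 200 (49.0%)
z, p = one_sided_prop_test(67, 192, 98, 200)
print(round(z, 3), round(p, 5))  # p is about 0.0023, in line with the reported 0.00235
```

The $10-versus-$5 comparison computed the same way also lands near the reported 10%-level result.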
To further assess differences between the responses achieved by incentive amount, we examined
the item non-response rate for critical questions (i.e., the choice questions in the pretest survey,
Q11, Q13, and Q14). Across incentive amounts, these questions had a very high item response
rate, with less than 5% non-response to any of these questions. The lowest item non-response
rates (1.23%) were seen in the $10 treatment for all of these questions except for one. Across all
questions, the $10 treatment had the fewest cases of questions with item non-response rates
exceeding 5% (only 2 questions compared with 3 for the $2 treatment and 6 for the $5
treatment). While these results are based on small samples, they do suggest the larger incentive
amount may help reduce item non-response rates for key questions.

Monetary incentives and willingness to pay
To assess whether the samples obtained using different monetary incentives lead to different
estimates of willingness to pay (WTP), we estimated several simple linear main effects
conditional logit models. For the policy scenario that assumes the most conservation benefits,
the estimated WTPs from these models for the $2, $5, and $10 samples are not statistically
different due to very wide confidence bounds. These noisy results are consistent with other
models that were run and can be attributed to the combined effects of small pretest samples and a
limited experimental design that was not designed solely to estimate WTP (it was also used to
address scope effects and other methodological issues).

Correlation between general attitudinal questions and stated preferences
Results from the pretest implementation provide additional evidence of the correlation of both
questions Q1 and Q3 with the choice question responses (Q11, Q13, and Q14). Tables A-2 and
A-3, which summarize responses to the choice questions over the distribution of responses to Q1
and Q3, display clear and consistent trends in support of correlation.

As shown in Table A-2, respondents with positive (negative) feelings about the Endangered
Species Act (Q1) are much more likely to say the status quo (SQ) alternative is the worst (best)
option in the choice questions (Q11, Q13, and Q14 in the pretest survey).11

Table A-2. Evidence of Correlation Between Responses to Q1 and Choosing the Status Quo
Alternative in Stated Preference Choice Questions from Pretest Implementation

Q1 - When you think of the Endangered Species Act, how positive or negative is your
general reaction?

Response             No. of individuals   Percent indicating    Percent indicating
                                          SQ is worst choice    SQ is best choice
Mostly positive      72                   78.7%                 8.3%
Somewhat positive    43                   62.0%                 20.9%
Neutral              22                   47.0%                 51.5%
Somewhat negative    6                    55.6%                 33.3%
Mostly negative      4                    8.3%                  83.3%

Note: Responses are pooled over the three choice questions. Percentages in the last two columns do not sum to
100% due to the percentage of respondents that indicated the status quo is neither the best nor the worst choice.

11 There are minor variances to the trends in the data in Table A-2 and in Table A-3. These are not surprising given
the small sample sizes in the pretest, the fact that differences in respondent incomes and other variables were not
accounted for, and that responses were not carefully examined for response validity based on other criteria, all of
which will be done in the full study when using Q1 and Q3 to evaluate choice question responses.

Moreover, Table A-3 shows that individuals who disagreed with the statement that “Protecting
threatened and endangered species is important to me” generally did not choose the status quo
alternative as the worst choice, which is consistent with the idea that these individuals would
generally prefer not to spend money on alternatives that do more to protect Steller sea lions.
Conversely, those agreeing with the statement are more likely to indicate the SQ option is the
worst choice. More generally, if respondents agreed (disagreed) that protecting threatened and
endangered species was important, they were much more likely to say the SQ option was the
“worst” (“best”) option among those presented in the choice questions (Q11, Q13, and Q14).

Table A-3. Evidence of Correlation Between Responses to Q3 (part 1) and Choosing the Status
Quo Alternative in Stated Preference Choice Questions from Pretest Implementation

Q3 - Protecting threatened and endangered species is important to me

Response                     No. of individuals   Percent indicating    Percent indicating
                                                  SQ is worst choice    SQ is best choice
Strongly disagree            5                    13.3%                 86.7%
Somewhat disagree            5                    33.3%                 46.7%
Neither agree or disagree    16                   33.3%                 54.2%
Somewhat agree               63                   64.6%                 20.1%
Strongly agree               64                   80.7%                 9.4%

Note: Responses are pooled over the three choice questions. Percentages in the last two columns do not sum to
100% due to the percentage of respondents that indicated the status quo is neither the best nor the worst choice.

Responses by Survey Stage
In each survey treatment, the majority of completed surveys were returned before the telephone
interviews were conducted. Specifically, 78% of all completes were received for the $10
treatment, 83% for the $5 treatment, and 82% for the $2 treatment. Following the telephone
contact, another 14%, 6%, and 18% of completes were received. The remainder of completes in
each treatment was received after the second full mailing was sent out.
As Table A-4 shows, the $10 incentive led to higher response rates in the earlier survey stages
than the other incentive levels, which suggests the higher incentive will lead to lower costs of
follow-up activities due to fewer non-respondents that remain to be contacted.

Table A-4. Percent of Mail Outs Received by Survey Stage (number of completes in parentheses)

Survey Stage                     $2 incentive    $5 incentive    $10 incentive
Initial mailing and
  postcard reminder              28.65% (55)     40.50% (81)     44.37% (63)
Telephone interviews             6.25% (12)      3.00% (6)       7.75% (11)
Second full mailing              0.00% (0)       5.50% (11)      4.93% (7)
Total response rate              34.90% (67)     49.00% (98)     57.04% (81)
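The stage shares quoted above (e.g., 78% of $10-treatment completes arriving before the telephone follow-up) can be recovered directly from the Table A-4 counts, as this short sketch shows:

```python
# Completes by survey stage and incentive treatment, taken from the
# counts reported in Table A-4.
completes = {
    "$2":  {"initial": 55, "telephone": 12, "second_mailing": 0},
    "$5":  {"initial": 81, "telephone": 6,  "second_mailing": 11},
    "$10": {"initial": 63, "telephone": 11, "second_mailing": 7},
}

for treatment, stages in completes.items():
    total = sum(stages.values())
    share_initial = stages["initial"] / total
    print(f"{treatment}: {total} completes, "
          f"{share_initial:.0%} received before telephone follow-up")
# $2: 82%, $5: 83%, $10: 78% -- matching the shares cited in the text.
```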


Changes to Survey Instrument Following the Formal Pretest
The following changes were made to the survey instrument following the formal pretest
implementation:
•

•

•

For the reasons discussed on page 2 of the supporting statement, a total of three survey
versions were developed that embodied different assumptions about the likely future ESA
status and population size of the Western stock of Steller sea lions.

Q2 in the pretest survey instrument was replaced with a General Social Survey-based
question that enables comparison with another nationwide general population
survey. As with the original Q2 used in the pretest instrument, the question also acts to
remind respondents that the issue in the survey (protection of threatened and endangered
species) is only one of several social issues they may care about, as discussed on page 3
of the supporting statement.

Q10 in the pretest survey instrument was intended to gather data on preferences for
protecting Steller sea lions in areas that may be more costly to protect, and was used to
set up an attribute of the protection alternatives in the stated preference choice questions.
The pretest results and focus group testing indicated that this question and the issues it
raises are not a major concern for respondents and sometimes lead to confusion.
Therefore, Q10 and its associated attribute in the choice questions were dropped from the
survey instrument for the final implementation. Removing the issue from the survey
greatly simplifies the choice questions and allows us to increase resources devoted to
understanding differences in willingness to pay associated with the different Western
stock baselines.



File Type: application/pdf
File Title: SSL_SS_Parts_A&B_final.doc
Author: Dan Lew
File Created: 2007-03-01