EPA's WaterSense Program (Renewal)

OMB: 2040-0272

PART B: COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS
Part B applies to three WaterSense data collection efforts: a census survey of all partners and two
consumer surveys, one of which uses sampling techniques. Section I describes the required
elements for the annual reporting process, Section II discusses these elements for the consumer
awareness phone survey, and Section III discusses these elements for the Internet-based
consumer survey.
SECTION I: ANNUAL REPORTING CENSUS
B1.

Describe (including a numerical estimate) the potential respondent universe and any
sampling or other respondent selection method to be used. Data on the number of
entities (e.g., establishments, state and local government units, households, or
persons) in the universe covered by the collection and in the corresponding sample
are to be provided in tabular form for the universe as a whole and for each of the
strata in the proposed sample. Indicate expected response rate for the collection as a
whole. If the collection has been conducted previously, include the actual response
rate achieved during the last collection.

WaterSense will collect data from all organizational partners (promotional partners,
manufacturers, and retailers/distributors) via annual reporting each year from 2010 through 2013.
Beginning in 2011, irrigation partners will also be required to report. As of April 2009,
WaterSense has 1,286 partners. This is a census survey, and no sampling methods will be used.
Table B-1 summarizes the number of partners by type.
Table B-1: WaterSense Partners by Partner Type, April 2009

Partner Type              Number
Promotional Partners         419
Manufacturers                 83
Retailers/Distributors       104
Irrigation Partners          680
Total                      1,286
Table B-2 summarizes the actual response rates achieved during previous data collections.

Table B-2: WaterSense Partner Reporting in 2008 and 2009

Partner Type              Reporting Year    Response Rate
Promotional Partners      2009              52 percent
Promotional Partners      2008              30 percent
Manufacturers             2009              65 percent
Manufacturers             2008              41 percent
Retailers/Distributors    2009              30 percent
Retailers/Distributors    2008              20 percent
Irrigation Partners       2008-2009         N/A – reporting not required
Total                     2009              51 percent
Total                     2008              31 percent

B2.

Describe the procedures for the collection of information including: (a) statistical
methodology for stratification and sample selection, (b) estimation procedures, (c)
degree of accuracy needed for the purpose described in the justification, (d) unusual
problems requiring specialized sampling procedures, and (e) any use of periodic
(less frequent than annual) data collection cycles to reduce burden.

The WaterSense Helpline will receive Promotional Partner Annual Reporting forms via
fax, e-mail, or U.S. mail. It is the Helpline's responsibility to log the data. The WaterSense
contractor will receive shipment and sales data from manufacturer and retailer/distributor partners.
These data will be recorded on annual reporting forms and received via U.S. mail or private delivery
service. EPA anticipates that most, if not all, partners will request that shipment and sales data be
handled as confidential business information (CBI); WaterSense procedures therefore assume that all
shipment and sales data will be handled as CBI. Upon receipt of an annual reporting form with
shipment or sales data, the WaterSense data manager provides a listing of reporting partners to the
Helpline staff. Helpline staff verify the partner's eligibility to report by checking the listing of
partners in the manufacturer or retailer/distributor category. If the partner is not eligible to report,
Helpline staff contact the partner to discuss its partnership category and revise the partner's
designation in Salesforce (if necessary), or inform the data manager and the Helpline manager that
the partner incorrectly submitted data. In that case, the data manager will not include the
partner's data in subsequent analyses and will note the decision on the partner's reporting form. For
manufacturers, the data manager will conduct one additional step: the data manager, or her
designee, will contact the licensed certifying body that issued the WaterSense label to the
manufacturer, to ensure that the manufacturer is in fact the holder of the original certification
file. This step reduces the chances of double counting shipment data submitted by the original
manufacturer and others who rebrand and resell the products. After confirming eligibility to
submit data (i.e., data are from a bona fide partner, shipment data are from a manufacturer
partner who holds the original certification file), the data manager will review each submission
to determine:
• Are there internal inconsistencies in the report (e.g., number of WaterSense products
shipped (sold) exceeds the total number of products shipped (sold))?
• Are the data unrealistic for the given partner (e.g., are the shipments (sales) excessive for
what might be expected for a company of the size reporting)?
• Is there evidence that the partner has provided estimated rather than actual data?
• Is there evidence that the partner has included international shipments or sales?


If the answer to any of the above questions is yes, ERG, the WaterSense contractor, will contact the
partner for clarification. If the partner cannot adequately explain the data, the data will not be
included in subsequent analyses.
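
To illustrate, the screening checks above could be expressed as a simple routine like the sketch
below. This is an illustration only; the field names and the company-size threshold are hypothetical
and are not taken from the actual annual reporting form.

```python
# Minimal sketch of the screening checks described above, applied to one hypothetical
# annual reporting submission; field names are illustrative, not the actual form fields.
def screen_submission(report: dict) -> list[str]:
    """Return a list of issues that would trigger follow-up with the partner."""
    issues = []
    if report["watersense_units_shipped"] > report["total_units_shipped"]:
        issues.append("Internal inconsistency: WaterSense shipments exceed total shipments.")
    if report["total_units_shipped"] > report.get("expected_ceiling_for_company_size", float("inf")):
        issues.append("Shipments look excessive for a company of the reporting size.")
    if report.get("data_basis") == "estimated":
        issues.append("Partner appears to have provided estimated rather than actual data.")
    if report.get("includes_international"):
        issues.append("Submission appears to include international shipments or sales.")
    return issues

example = {"watersense_units_shipped": 1200, "total_units_shipped": 1000, "data_basis": "actual"}
print(screen_submission(example))  # ['Internal inconsistency: ...']
```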
B3.

Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information must be shown to be adequate
for intended use. For collections based on sampling, a special justification must be
provided for any collection that will not yield “reliable” data that can be generalized
to the universe studied.

Upon joining WaterSense, EPA requires partners to commit to submitting annual reports,
and the importance of the annual reporting process is stressed to partners at that time. To maximize
response, EPA conducts the following activities:
• Require the submission of an annual reporting form from partners wishing to be
considered for a WaterSense award
• Host a partner conference call to answer partner questions about the form
• Send e-mails and a postcard to remind partners of upcoming due dates
• Remind partners about reporting in the quarterly newsletter
• Follow up by phone with non-reporting partners
• Coordinate with a plumbing trade association whose members are also WaterSense
partners to encourage submission of annual reporting forms
• Require that an annual reporting form be completed in order to receive an award or
recognition from EPA/WaterSense
B4.

Describe any tests of procedures or methods to be undertaken. Testing is
encouraged as an effective means of refining collections of information to minimize
burden and improve utility. Tests must be approved if they call for answers to
identical questions from 10 or more respondents. A proposed test or set of tests may
be submitted for approval separately or in combination with the main collection of
data.

EPA provided draft versions of the annual reporting form to several partners (fewer than
10) for comment and revised the form accordingly. Upon the completion of each data collection
cycle, EPA examines the responses, form, and procedures and identifies potential changes for the
following year based on common questions or issues that arise.
B5.

Provide the name and telephone number of individuals consulted on statistical
aspects of the design and the name of the agency unit, contractor(s), grantee(s), or
other person(s) who will collect and/or analyze the information for the agency.

Laura Harwood of Eastern Research Group (ERG) is the task manager for the census
survey and can be reached at (703) 841-0589. Roy Sieber serves as ERG’s program manager and
can be reached at (703) 633-1614. Dr. Lou Nadeau is an ERG senior economist and can be
reached at (781) 674-7316.


SECTION II: CONSUMER AWARENESS PHONE SURVEY
B.1.

Describe (including a numerical estimate) the potential respondent universe and any
sampling or other respondent selection methods to be used. Data on the number of
entities (e.g., establishments, State and local government units, households, or
persons) in the universe covered by the collection and in the corresponding sample
are to be provided in tabular form for the universe as a whole and for each of the
strata in the proposed sample. Indicate expected response rates for the collection as
a whole. If the collection had been conducted previously, include the actual response
rate achieved during the last collection.

The target population for these two surveys will be households in the United States. This
corresponds to the target population of the WaterSense program. The sampling population will
be adults living in households that are surveyed. There are 116.8 million households in the
United States according to the most recently available Census Bureau numbers.1 From this, EPA
will select a sample of 400 respondents for each survey. The justification for this sample size
appears in response to Question B.2 below. The sampling frame will be a list provided by a
telephone survey contractor. Lists of U.S. households are readily available for this type of
survey.
EPA has not previously conducted these surveys and thus cannot estimate the expected
response rate based on past experience. Evaluation practitioners have documented that survey
response rates have been declining worldwide in recent years.2 In a recent study3 of response
rates for 205 telephone surveys conducted in the same survey lab over a three-year period, the
authors found that response rates varied from less than 10 percent to more than 90 percent, with
the largest number of surveys falling in the 25 to 45 percent range. Further, the study authors
found that response rates are affected by contextual variables including:
• Salience of the survey to the population
• Survey length
• Type of sample (listed versus random-digit dialing)
• Minutes per piece of sample (effort)
• Amount of time the survey was in the field.

A ten-minute increase in survey length results in a 7 percent decrease in the response rate. Since
the proposed WaterSense consumer awareness survey is short, EPA anticipates a relatively high
response rate. In addition, EPA will take actions to maximize the response rate (see the discussion
in Section B.3 below). Given these factors, EPA estimates a response rate of approximately 40
percent for the consumer awareness survey.

1. http://www.census.gov/population/socdemo/hh-fam/cps2008/tabAVG1.xls.
2. de Leeuw, Edith, and Wim de Heer. 2002. "Trends in Household Survey Nonresponse: A Longitudinal and
International Comparison." In Survey Nonresponse, ed. Robert M. Groves, Don A. Dillman, John L. Eltinge, and
Roderick J. A. Little, pp. 41-54. New York: Wiley; Singer, Eleanor. 2006. "Introduction - Nonresponse Bias in
Household Surveys." Public Opinion Quarterly, 70(5):637-645. DOI:10.1093/poq/nfl034.
3. McCarty, C., M. House, et al. 2006. "Effort in Phone Survey Response Rates: The Effects of Vendor and
Client-Controlled Factors." Field Methods 18(2): 172-188.


B.2.

Describe the procedures for the collection of information including:
• Statistical methodology for stratification and sample selection,
• Estimation procedure,
• Degree of accuracy needed for the purpose described in the justification,
• Unusual problems requiring specialized sampling procedures, and
• Any use of periodic (less frequently than annual) data collection cycles to reduce burden.

Statistical Method for Stratification and Sample Selection
This section discusses the statistical methods for stratification and sample selection used
in the survey. Before discussing those methods, the section begins by explaining EPA’s approach
to estimating the sample size for this collection.
Sample Size Estimation
This section provides estimates of sample size needed to meet a set of specific statistical
criteria for the survey data. Choosing a sample size is a three-step process. First, statistical
criteria are set; second, an initial sample size is chosen based on the criteria; and third, if
necessary, the initial sample size is adjusted based on the population size. The third step,
however, is not needed for this survey due to the large universe size. The section begins with a
discussion of the statistical criteria and then discusses the estimated initial and adjusted sample
sizes.
The statistical criteria used in choosing a sample size are:
• Precision—The maximum difference in the parameter of interest between an estimate
for that parameter obtained from the sample and the value of that parameter in the
population.
• Confidence—The probability of correctly accepting a true hypothesis.
• Power—The probability of correctly rejecting a false hypothesis.

Simultaneously setting values for each criterion will generate a sample size estimate. In
general, however, there are limited acceptable choices in terms of power and confidence;
therefore, most of the emphasis is on choosing a reasonable precision. To select the sample size using
these criteria, EPA has used statistical power analysis techniques (Cohen, 1988). The use of each
criterion in choosing a sample size is discussed in what follows.
Precision
The precision of a sample is the maximum difference (in units of the key question) that
one is willing to be away from the population parameter of interest. The parameter of interest for


this survey is awareness of the WaterSense program. There are two key aspects of this
parameter:
• The level of awareness (i.e., what percentage of households are aware of the
program?), and
• Whether or not awareness is increasing over time (i.e., is there a statistically
significant increase in awareness between surveys?)

The first reflects estimation of a proportion (percentage of households aware of the program) and
the second reflects a change in a proportion over time. Thus, precision will need to be in terms of
percentage points. EPA has determined that the sample size should provide valid data for the
second aspect (change in awareness over time), but should also provide reasonably reliable data
on the level of awareness. EPA has determined that the sample should be able to
detect a three to five percentage point change in awareness over time.4 For example, if awareness
is found to be 5 percent in this survey and a subsequent survey finds awareness to be 10 percent,
the sample sizes for each survey should be large enough to find that change to be statistically significant.
Power
The power of a statistical test is one minus the probability of a Type II error (not rejecting
a false hypothesis), or the probability of correctly rejecting a false hypothesis. In this sense,
power corresponds to finding the specified effect size (i.e., precision) when that effect in fact
exists. Traditional hypothesis tests set, by default, power to 50 percent. Following Cohen’s
(1988) suggestion, EPA will use 80 percent power. Thus, in terms of the second aspect identified
above under precision (change in awareness over time), the sample size will have an 80 percent
chance of detecting the specified change in awareness if that change actually occurred.
Confidence
Confidence is the probability of accepting a true hypothesis. For purposes of sampling,
confidence defines the likelihood that the population mean will be contained in the interval
around the sample mean defined by the precision for the sample. EPA will use 90 percent
confidence in setting the sample size. Furthermore, since EPA is interested in increases in
awareness, EPA will use a one-sided confidence interval.
Sample Size Estimates
The initial sample size estimates are based on the methods of Cohen (1988), Chapter 6,
which provides power analytic methods for detecting differences between proportions over time.
EPA combined these methods with the precision, power, and confidence settings discussed above
and an assumption about the current level of awareness in the universe. Setting a sample size for
proportions requires either having some knowledge of the proportion in the population beforehand
or making an assumption about it. EPA expects that awareness of WaterSense is fairly low in the
universe and has assumed it to be between two and ten percent. Thus, the sample size should be
able to detect a three to five percentage point increase in awareness (precision) between surveys
from the assumed baseline levels of awareness at 90 percent confidence and 80 percent power.
Table B-3 provides the sample sizes needed to detect these increases in awareness for the assumed
range of baseline awareness (at the set power and confidence). Based on this table, EPA has
selected a sample size of 400 households. Sample sizes in the table that are 400 households or
fewer are marked with an asterisk. As can be seen, a sample of 400 households will allow for
detection of a five percentage point increase in awareness from all baseline levels, a four
percentage point increase for baseline awareness in the two to five percent range, and a three
percentage point increase for very low baseline awareness (2 percent).

4. EPA has specified a range for precision due to uncertainty over the baseline level of awareness. This is
discussed in more detail above under sample size estimation.
Table B-3. Sample Sizes Needed to Detect the Specified Precision Values for Assumed Values of
Baseline Awareness Between Two and Ten Percent at 90 Percent Confidence and 80 Percent Power

Assumed Baseline Level of     Precision: Increase in Awareness to be Detected
Awareness in the Universe     Three Percentage Points   Four Percentage Points   Five Percentage Points
2%                            323*                      203*                     143*
3%                            419                       257*                     178*
4%                            512                       310*                     212*
5%                            602                       360*                     244*
6%                            689                       409                      275*
7%                            774                       457                      306*
8%                            857                       503                      335*
9%                            938                       548                      364*
10%                           1016                      592                      391*
* Sample size of 400 households or fewer.

Note: The values in Table B-3 are calculated using the following formula:

n = \frac{n_{0.1}}{100\left(2\arcsin\sqrt{p+r} - 2\arcsin\sqrt{p}\right)^2}

where n_{0.1} is equal to 902 (in this case) and is derived from Cohen, Chapter 6, Table 6.4.1
(third panel) and reflects a base sample size for one-sided 90 percent confidence and 80 percent
power, p are the baseline awareness values, and r are the precision values.
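
As a cross-check on Table B-3, the note's formula can be evaluated directly. The short Python
sketch below is illustrative only; it reproduces the tabulated sample sizes using n_{0.1} = 902 and
rounds each result up to the next whole household.

```python
import math

N_BASE = 902  # Cohen (1988), Table 6.4.1: base n for one-sided 90% confidence, 80% power

def sample_size(p, r, n_base=N_BASE):
    """Sample size to detect an increase from baseline proportion p to p + r."""
    # Cohen's effect size for two proportions (arcsine transformation)
    h = 2 * math.asin(math.sqrt(p + r)) - 2 * math.asin(math.sqrt(p))
    return math.ceil(n_base / (100 * h ** 2))

# Reproduce Table B-3: baselines of 2-10%, precisions of 3-5 percentage points
for p in [x / 100 for x in range(2, 11)]:
    sizes = [sample_size(p, r / 100) for r in (3, 4, 5)]
    print(f"{p:.0%}: {sizes}")  # e.g., 2%: [323, 203, 143]
```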
Sample Allocation
EPA will allocate the sample across the United States in proportion to the distribution of
households in the United States.
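
For illustration, proportional allocation of the 400-household sample might look like the sketch
below; the regional household counts shown are hypothetical placeholders (chosen to sum to the
116.8 million total cited earlier), not the figures EPA will use.

```python
# Hypothetical regional household counts (millions); the four Census regions are
# used purely for illustration -- the actual allocation units are not specified here.
households = {"Northeast": 21.0, "Midwest": 26.0, "South": 43.0, "West": 26.8}
total_sample = 400
total_households = sum(households.values())

# Allocate the 400 interviews in proportion to each region's share of households
allocation = {region: round(total_sample * n / total_households)
              for region, n in households.items()}
print(allocation)  # e.g., {'Northeast': 72, 'Midwest': 89, 'South': 147, 'West': 92}
```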


Estimation Procedure
Estimation of population parameters (means, totals, and variances) will be done by
appropriately weighting the sample values. The weight for any responding sample unit can be
calculated as:
weight = (weight associated with sampling) × (weight that adjusts for nonresponse)
Estimation of these weights is discussed below, followed by calculation of weighted estimates
for the population parameters.
The weights associated with sampling can be estimated as the reciprocal of the selection
probabilities. These can be estimated as:
w_{hj} = \frac{N_h}{n_h} = w_h    (6)

where N_h is the population for stratum h and n_h is the sample chosen from stratum h. The
second equality indicates that the weight for each sample unit from a specific stratum will be the
same as for all sampled units from that stratum.5
To account for nonresponse, EPA will use a weighting class adjustment procedure.
Specifically, EPA assumes that the probability that any population unit responds to the survey is
the product of the selection probability (above) and a response probability. To estimate response
probabilities, EPA will divide the respondents in each survey into a small number of classes
(strata). The exact classes for each survey will depend on the nature of the data available for each
group. The response probabilities are then:

\theta_k = \frac{\text{sum of weights for respondents in } k}{\text{sum of weights for selected sample in } k}    (7)

where k indexes the nonresponse weighting classes. Thus, the weight for a respondent in class k
and stratum h becomes:
\tilde{w}_{hk} = \frac{N_h}{n_h} \cdot \frac{1}{\theta_k}    (8)

The weighted total for any question in these surveys can be found using the following
formula:6

\hat{Y} = \sum_{h=1}^{H} \sum_{j=1}^{J_h} \tilde{w}_{hj} \, y_{hj}    (9)

where \hat{Y} is the weighted estimate of the population total for variable/question y, h indexes
strata, H is the total number of strata used in the survey, j indexes sample respondents within each
stratum, J_h is the total number of respondents in stratum h, \tilde{w}_{hj} is the
nonresponse-adjusted sampling weight for the jth unit in stratum h,7 and y_{hj} is the value of
variable/question y for the jth unit in stratum h. Estimates of the population means can then be
found as:

\hat{\bar{Y}} = \hat{Y} \Big/ \sum_{h=1}^{H} \sum_{j=1}^{J_h} \tilde{w}_{hj}    (10)

5. This will not be true once weighting for nonresponse is done.
6. A number of questions asked in the surveys can be thought of as yes/no questions (e.g., explicit yes/no
questions as well as "check all that apply" questions). The "totals" from these will be the number that respond
"yes" to the question.
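
To make equations (6) through (10) concrete, the following sketch applies them to a small,
entirely hypothetical respondent file; the strata, weighting classes, and counts are invented for
illustration and do not reflect the actual survey design.

```python
import pandas as pd

# Hypothetical selected sample: stratum, nonresponse class, whether the unit
# responded, and a yes/no answer (1 = aware of WaterSense).
sample = pd.DataFrame({
    "stratum":   ["h1"] * 4 + ["h2"] * 4,
    "nr_class":  ["k1", "k1", "k2", "k2", "k1", "k2", "k2", "k1"],
    "responded": [1, 0, 1, 1, 1, 1, 0, 1],
    "aware":     [1, None, 0, 1, 0, 1, None, 0],
})
strata_sizes = {"h1": 50_000, "h2": 66_800}  # hypothetical N_h
n_h = sample.groupby("stratum")["stratum"].transform("size")

# Equation (6): base sampling weight w_h = N_h / n_h
sample["w"] = sample["stratum"].map(strata_sizes) / n_h

# Equation (7): response probability per weighting class,
# theta_k = (weights of respondents in k) / (weights of selected sample in k)
theta = (sample.groupby("nr_class")
               .apply(lambda g: g.loc[g.responded == 1, "w"].sum() / g["w"].sum()))

# Equation (8): nonresponse-adjusted weight for respondents
resp = sample[sample.responded == 1].copy()
resp["w_adj"] = resp["w"] / resp["nr_class"].map(theta)

# Equations (9) and (10): weighted total and weighted mean (proportion aware)
total_aware = (resp["w_adj"] * resp["aware"]).sum()
mean_aware = total_aware / resp["w_adj"].sum()
print(f"Weighted total aware: {total_aware:,.0f}; weighted proportion: {mean_aware:.3f}")
```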

Estimation of the variance for the expressions in equations (9) and (10) will require more
complex methods. The variance of both expressions will need to account for (a) the fact that each
represents a sample estimate of a population parameter and (b) the fact that the response weights
defined in (7) are sample quantities (i.e., they are not fixed). To calculate these variances, EPA
will use a bootstrap method. Bootstrap methods involve repeatedly sub-sampling (with
replacement) from the selected sample and calculating the variance from the repeated samples. In
theory, if the sample is reflective of the population distribution, then repeated sub-sampling of
the sample will result in an unbiased and reliable estimate of the variance. EPA will use the
Rao and Wu (1988) method for bootstrap estimation of the variance. This method is summarized in
Lohr (1999, p. 307).8
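
For illustration only, a simplified stratified bootstrap of the weighted proportion might look like
the sketch below. It is not the full Rao and Wu (1988) rescaling procedure, and it does not
recompute the nonresponse adjustment on each replicate, but it conveys the resampling idea
described above; the data frame it expects is the hypothetical respondent file from the previous
sketch.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def bootstrap_variance(resp: pd.DataFrame, n_boot: int = 1000) -> float:
    """Simplified stratified bootstrap of the weighted proportion 'aware'.

    `resp` is assumed to hold respondents with columns: stratum, w_adj, aware.
    Resampling is done with replacement within each stratum; the nonresponse
    adjustment is not recomputed on each replicate, so this understates the
    variance component described in the text.
    """
    estimates = []
    for _ in range(n_boot):
        parts = [g.sample(n=len(g), replace=True, random_state=rng)
                 for _, g in resp.groupby("stratum")]
        boot = pd.concat(parts)
        estimates.append((boot["w_adj"] * boot["aware"]).sum() / boot["w_adj"].sum())
    return float(np.var(estimates, ddof=1))

# Example (reusing the hypothetical `resp` frame from the previous sketch):
# print(bootstrap_variance(resp))
```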
Degree of Accuracy
See above.
Unusual Problems
None.
Use of Periodic (Less Frequent Than Annual) Data Collection Cycles
Data collected under these surveys are one-time collections and are thus less frequent than annual.
7. Additionally, because equation (8) defines weights for each respondent, the class-level indexing in equation (8)
can be replaced by respondent-level indexing. That is, equation (8) defines a value for each respondent
(\tilde{w}_{hj}).
8. The method of Rao and Wu (1988) has been coded into available software packages.

B.3.

Describe methods to maximize response rates and to deal with issues of non-response. The
accuracy and reliability of information collected must be shown to be adequate for intended
uses. For collections based on sampling, a special justification must be provided for any
collection that will not yield "reliable" data that can be generalized to the universe studied.
EPA will employ a number of procedures to maximize response rates and to mitigate issues
associated with non-response. Several nonresponse issues may arise during the data collection
process; Table B-4 summarizes these issues and the approach to handling them. EPA will also
analyze the known data on non-respondents (e.g., region, household size) for patterns. This
analysis will indicate whether the data collected through the survey have any potential biases.
Table B-4. Nonresponse Issues and Techniques Used to Minimize the Impact of Those Issues

Refusals—Observational unit refuses to take the survey
• EPA will be using a professional survey firm that is skilled in converting refusals.
• EPA will replace refusals with similar households (e.g., within the same geographic region).
• EPA will develop a questionnaire that limits the burden imposed on observational units.

Not available—Observational unit is not available at the time the phone survey firm calls
• The survey firm will call the household back up to three times before considering it a
non-respondent and excluding it from the sample.
• EPA will replace these households with similar households (e.g., within the same geographic
region).

Refusal to answer specific questions—Observational unit refuses to answer specific questions
• EPA will be using a professional survey firm that is skilled in converting refusals.
• EPA will develop a questionnaire that limits the burden imposed on observational units.

B.4.

Describe any tests of procedures or methods to be undertaken.

EPA will pre-test the survey instruments using a small set of households. The purpose of
the pre-tests will be to assess the questionnaire's validity in collecting the necessary data.
No other tests of procedures or methods will be used for these surveys. EPA expects that
the sampling scheme and the implementation process are relatively straightforward and can
be accomplished without the need for testing the methods.
B5.

Provide the name and telephone number of individuals consulted on statistical
aspects of the design and the name of the agency unit, contractor(s), grantee(s), or
other person(s) who will collect and/or analyze the information for the agency.
Dr. Lou Nadeau, ERG, 781-674-7316


SECTION III: CONSUMER SURVEY (INTERNET-BASED)
B1.

Describe (including a numerical estimate) the potential respondent universe and any
sampling or other respondent selection method to be used. Data on the number of
entities (e.g., establishments, state and local government units, households, or
persons) in the universe covered by the collection and in the corresponding sample
are to be provided in tabular form for the universe as a whole and for each of the
strata in the proposed sample. Indicate expected response rate for the collection as
a whole. If the collection has been conducted previously, include the actual response
rate achieved during the last collection.

The purpose of this survey is to increase understanding of the market barriers associated with
water-efficient products and services, develop messages that will address these barriers, and test
the results of the findings in key markets. Participants will be invited to provide feedback on
promotions and messaging.
Approximately four hundred U.S. adults will be invited to participate in the research effort.
Only homeowners will be sampled, since renters are unlikely to be responsible for purchasing
low-flow plumbing equipment. Data will be stratified by whether the individuals live in
water-rich or water-poor areas of the nation.
The goal of this research is to determine which methods of communicating about
water-efficient products are most effective with U.S. consumers (i.e., translate into the most
positive attitudes toward, and intentions to buy, the products). We are not trying to estimate a
population parameter (e.g., general awareness of a program or logo); rather, we are comparing
the efficacy of communication techniques. The survey will utilize 20-30 respondents in each
communication condition.
The sample will be drawn from an electronic panel of homeowners and will not be a random
sample; thus, information regarding nonresponse is not available. Nonresponse bias is of concern
when attempting to generalize from a sample estimate to a population parameter, but it becomes
relatively unimportant when the goal is to determine differences between treatment groups.
B2.

Describe the procedures for the collection of information including: (a) statistical
methodology for stratification and sample selection, (b) estimation procedures, (c)
degree of accuracy needed for the purpose described in the justification, (d) unusual
problems requiring specialized sampling procedures, and (e) any use of periodic
(less frequent than annual) data collection cycles to reduce burden.

a) A statistical methodology will not be used for stratification and sample selection since the
goal of the research is to determine differences between groups.
b) Multivariate Analysis of Variance (MANOVA) and Multivariate Analysis of Covariance
(MANCOVA) will be used to test for differences between groups and to control for
covariates (an illustrative sketch follows this list).


c) A "by-invitation-only" panel recruitment method is used by the on-line panel provider
that will be collecting the data for this study. This means that only pre-validated individuals
participate. Thus, a cross-section of "real" consumers, as opposed to regular survey takers,
will be included in the sample. The panel to be used does not employ an "open" online panel
recruitment method, which would simply allow individuals to self-select into the study.
d) There are no unusual problems that will require specialized sample procedures.
e) Data will not be collected periodically. This is a one-time study.
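
As a rough sketch of how the group comparison described in item (b) might be carried out, the
example below fits MANOVA and MANCOVA-style models with the statsmodels package. The data
file, variable names, and model terms are hypothetical and are not part of the actual study
design.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical respondent-level data: one row per panelist, with the assigned
# messaging condition, a water-rich/water-poor indicator, and two outcome scales.
df = pd.read_csv("panel_responses.csv")  # columns: condition, water_region, attitude, purchase_intent

# MANOVA: do attitude and purchase intent differ across messaging conditions?
manova = MANOVA.from_formula("attitude + purchase_intent ~ condition", data=df)
print(manova.mv_test())

# Adding a covariate (here, water region) moves this toward a MANCOVA-style model.
mancova = MANOVA.from_formula("attitude + purchase_intent ~ condition + water_region", data=df)
print(mancova.mv_test())
```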
B3.

Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information must be shown to be adequate
for intended use. For collections based on sampling, a special justification must be
provided for any collection that will not yield “reliable” data that can be generalized
to the universe studied.

Since the purpose is to test for differences between groups, generalizability to the
population at large (i.e., external validity) is less important than assurance that the experimental
condition created the effect (internal validity). In such situations, nonresponse bias is not a major
concern in the study design.
Incentives and reward programs are used to increase the response rate. Specifically,
Qualtrics (the panel owner) compensates panel members based on the amount of time they
spend answering questions. For each unit of time, panel members earn online currency, which
they can redeem for their choice of a variety of rewards.
B4.

Describe any tests of procedures or methods to be undertaken. Testing is
encouraged as an effective means of refining collections of information to minimize
burden and improve utility. Tests must be approved if they call for answers to
identical questions from 10 or more respondents. A proposed test or set of tests may
be submitted for approval separately or in combination with the main collection of
data.

Pretesting of the questionnaire design and procedures will be conducted in two different
ways. First, using the methods described in Dillman (2000), a verbal protocol will be used with
fewer than ten homeowners to gain a better understanding of how the target population interprets
the questions and whether the appropriate response categories are included. Second, the
on-line survey will be pretested with a group of students studying social marketing to ensure that
it is working properly, is user-friendly, and that the data can be downloaded adequately.
Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley & Sons,
Inc.

B5.

Provide the name and telephone number of individuals consulted on statistical
aspects of the design and the name of the agency unit, contractor(s), grantee(s), or
other person(s) who will collect and/or analyze the information for the agency
Laura Harwood, ERG, (703) 841-0589; Dr. Lou Nadeau, ERG, (781) 674-7316


Qualtrics Corporation will be collecting the data via an electronic consumer panel. The firm’s
phone number is 1-800-340-9194.
REFERENCES
Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Lohr, Sharon. 1999. Sampling: Design and Analysis. Pacific Grove, CA: Duxbury Press.
Rao, J.N.K., and C.F.J. Wu. 1988. "Resampling Inference with Complex Survey Data." Journal of the
American Statistical Association, 83: 231-241.


