U.S. FOOD AND DRUG ADMINISTRATION
NATIONAL PANEL OF TOBACCO CONSUMER STUDIES
SUPPORTING STATEMENT PART B
OMB CONTROL NO. 0910-0815

TABLE OF CONTENTS

Part B: Collection of Information Employing Statistical Methods
B.1  Respondent Universe and Sampling Methods
     B.1.1  Overview of the Sample Design
     B.1.2  Stratified Four-stage Sample Design and Sample Selection
     B.1.3  Recruitment Response Rates
     B.1.4  Precision and Statistical Power
     B.1.5  Panel Replenishment
B.2  Information Collection Procedures
     B.2.1  Weighting Plan
     B.2.2  Initial Implementation of the Panel
     B.2.3  Panel Recruitment and Replenishment
     B.2.4  Panel Maintenance
B.3  Methods to Maximize Response Rates and Assess Non-Response Bias
     B.3.1  Response Rates
     B.3.2  Nonresponse Bias Assessment
     B.3.3  Compare Respondent and Population Benchmarks
     B.3.4  Weight Adjustment to Minimize Nonresponse Bias
     B.3.5  Nonresponse Bias Assessment Results
B.4  Tests of Procedures
B.5  Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
References


EXHIBITS

Table 1. 2010 U.S. Census Data
Table 2. 2006-2011 American Community Survey (ACS) 5-year Summary File
Exhibit B.1-1. Sample Sizes in Sampling Domains
Exhibit B.1-2. Actual Panel Recruitment Response Rates
Exhibit B.1-3. Relative Standard Errors/Power to Compare Prevalence Estimates
Exhibit B.1-4. Estimated Sample Sizes for Yearly Sample Replenishment


PART B: COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

B.1  Respondent Universe and Sampling Methods
This section describes the sample design for establishing the panel, including the four-stage sample design, sample selection at each stage, sample sizes, and precision and statistical power. The section also describes sample replenishment plans for the panel.
B.1.1 Overview of the Sample Design
The target population for the panel is tobacco users aged 18 years and older in housing
units and in noninstitutionalized group quarters in the 50 states and the District of Columbia. A
stratified four-stage sample design was used to recruit approximately 4,000 adult tobacco users into the sample panel. Eighty (80) primary sampling units (PSUs) were selected at the first stage, three census block groups (CBGs) within each selected PSU at the second stage, approximately 152
housing units (HUs) within each selected CBG at the third stage, and a maximum of one adult
tobacco user from an eligible HU at the fourth stage. To successfully recruit 4,000 adult tobacco
users for establishing the panel, we selected 43,123 HUs for screening and recruiting. This
included 6,852 HUs selected from a reserve sample to increase the number of young adults
enrolled in the panel. Full details of the sample design are presented in Attachment 5.
The main goal of the design was to select a sample of all tobacco users in the nation
representing the full range in that population with respect to behavior patterns, knowledge, and
attitudes. Another objective was to design a sample that was efficient and cost-effective. This
was the motivation behind the strategies for stratification, stratum allocation, and PSU design.
B.1.2 Stratified Four-stage Sample Design and Sample Selection
The four-stage sample design and the probability proportional to size (PPS) selection method applied at the first and second stages, where the number of tobacco users is used as the size measure, ensure a near equal probability selection method (epsem) within each of the four design domains:
•   18- to 25-year-olds, low socioeconomic status (SES)

•   18- to 25-year-olds, non-low SES

•   26 years of age or older, low SES

•   26 years of age or older, non-low SES

The epsem sample minimizes the unequal weighting effect (UWE), thereby maximizing
the precision of estimates for those domains. In addition, selecting the same number of CBGs
within a PSU and equally allocating HU samples to each CBG provide for a consistent workload
for each field interviewer in every PSU and more efficient field management.
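For illustration only, the effect that epsem is designed to avoid can be quantified with Kish's formula for the unequal weighting effect, UWE = 1 + CV²(w) = n·Σw²/(Σw)². The short Python sketch below is not part of the approved design documents, and the example weights are hypothetical:

    import numpy as np

    def unequal_weighting_effect(weights):
        # Kish's UWE = 1 + CV^2 of the weights, equivalently
        # n * sum(w^2) / (sum(w))^2; a value of 1.0 means no
        # precision loss from weight variation.
        w = np.asarray(weights, dtype=float)
        return w.size * np.sum(w ** 2) / np.sum(w) ** 2

    # An epsem design yields equal weights, so UWE = 1:
    print(unequal_weighting_effect([2.5] * 1000))                # 1.0
    # Variable weights inflate variance; a 2:1 weight ratio gives ~1.11:
    print(unequal_weighting_effect([1.0] * 500 + [2.0] * 500))   # ~1.11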
Sampling PSUs at the First Stage: At the first stage, a sample of 80 PSUs in the 50 states and Washington, DC, was drawn. Traditionally, PSUs have been defined as a county or group of counties because counties are the administrative units for which Census data are readily available. However, counties vary greatly in population size (from 82 to 9,818,605 across 3,143 counties) and in the number of estimated tobacco users¹ (from 17 to 1,074,654). As a result, some large counties would have been selected into the PSU sample with certainty, and such certainty PSUs could cause more variation in sample weights. To avoid the undesirable effects of this large variation in population size and number of estimated tobacco users, we created customized PSUs by combining small contiguous counties and splitting large counties based on the number of estimated tobacco users in each county. Small counties were combined to contain at least 2,000² tobacco users, while large counties with more than 31,000 tobacco users were divided into areas comprising census tracts within a county. Strata were defined based on various factors related to tobacco use, as well as geography. The 80 PSUs were then allocated proportionally to the strata. Within each stratum, the PSU sample was selected with PPS, the size measure being the estimated number of adult tobacco users in a PSU.
Sampling CBGs at the Second Stage: At the second stage, CBGs were sampled within
the PSUs selected from the first stage. A CBG is a cluster of census blocks generally containing
between 600 and 3,000 people, with an average size of about 1,500 people. It is the smallest
geographic entity for which the decennial census and American Community Survey (ACS)
tabulate and publish sample data. We sampled three CBGs per PSU using the PPS method, with
the size measure being the estimated number of adult tobacco users in a CBG.

¹ The number of tobacco users for each county is estimated using the results from the predictive modeling described in Section 2.1.3.
² The cutoff values of 2,000 and 31,000 tobacco users correspond to the 25th and 90th percentiles of the distribution of county-level estimated numbers of tobacco users.
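The PPS selections at the first two stages can be carried out with the standard systematic PPS algorithm: cumulate the size measures, draw a random start, and step through the cumulated list at a fixed interval. The Python sketch below illustrates the general technique with hypothetical unit identifiers and size measures; it is not the contractor's actual selection program. Note that a unit whose size measure exceeds the sampling interval would be selected with certainty, which is one reason large counties were split:

    import random

    def pps_systematic_sample(units, n_sample):
        # Systematic PPS selection. `units` is a list of (unit_id, size)
        # pairs, where size is, e.g., the estimated number of adult
        # tobacco users; a unit's selection probability is
        # n_sample * size / total.
        total = sum(size for _, size in units)
        interval = total / n_sample
        start = random.uniform(0, interval)
        hits = [start + k * interval for k in range(n_sample)]
        selected, cum, i = [], 0.0, 0
        for unit_id, size in units:
            cum += size
            while i < len(hits) and hits[i] < cum:
                selected.append(unit_id)  # a unit larger than the
                i += 1                    # interval can be hit twice
        return selected

    # Hypothetical PSUs with sizes between the 2,000 and 31,000 cutoffs:
    psus = [(f"PSU-{k:03d}", random.randint(2_000, 31_000)) for k in range(500)]
    sample = pps_systematic_sample(psus, 80)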


The size measure, namely the number of tobacco users in a PSU or a CBG, is not readily
available. A predictive model, shown below, was developed to estimate the tobacco use
prevalence rate for each CBG using National Adult Tobacco Survey data including race/ethnicity
and SES. The estimated CBG-level tobacco user rate can be used with the population counts in
each CBG to estimate the number of tobacco users for each CBG. The number of estimated
tobacco users for each CBG can be aggregated to estimate the number of tobacco users for
census tracts and counties.
We fit a logistic regression model, using smoking status as the dependent variable and the Census and ACS block-group-level variables in Tables 1 and 2 as the independent variables. To fit the model, we used the SAS LOGISTIC procedure. The model has the form

    logit(p) = β₀ + β₁X₁ + ⋯ + βₙXₙ,

where the independent variables X₁, …, Xₙ are the n variables listed in Tables 1 and 2 below.
Table 1. 2010 U.S. Census Data

    2010 Census Variable                                                  Variable Type
    Population count of the block group                                   Continuous
    Household count of the block group                                    Continuous
    African-American proportion of the block group                        Continuous
    Hispanic proportion of the block group                                Continuous
    Rural proportion of the block group                                   Continuous
    Median age of the block group                                         Continuous
    Children per household of the block group                             Continuous
    Adults per household of the block group                               Continuous
    Total housing units of the block group                                Continuous
    Occupied household proportion of the block group                      Continuous
    Occupied households with a mortgage proportion of the block group     Continuous

Table 2. 2006-2011 American Community Survey (ACS) 5-year Summary File

    2006-2011 ACS Variable                                                                     Variable Type
    Proportion of population with less than a high school degree in the block group            Continuous
    Proportion of population with a college degree or higher in the block group                Continuous
    Proportion of the population that lived in the same house one year ago in the block group  Continuous
    Proportion never married in the block group                                                Continuous
    Proportion now married in the block group                                                  Continuous
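The model above was fit with the SAS LOGISTIC procedure. Purely as an illustration of the same calculation, the Python sketch below uses statsmodels, with hypothetical file and column names standing in for the National Adult Tobacco Survey respondent file and the covariates in Tables 1 and 2:

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical input: one row per survey respondent with a 0/1
    # tobacco-use indicator and block-group covariates merged on.
    df = pd.read_csv("nats_with_cbg_covariates.csv")       # hypothetical file
    covariates = ["pop_count", "prop_black", "prop_hispanic", "prop_rural",
                  "median_age", "prop_lt_hs", "prop_college"]  # stand-in names

    X = sm.add_constant(df[covariates])                    # adds beta_0
    model = sm.Logit(df["tobacco_user"], X).fit()

    # Score every CBG: predicted prevalence times adult population gives
    # the estimated number of tobacco users, i.e., the PPS size measure.
    cbg = pd.read_csv("cbg_covariates.csv")                # hypothetical file
    cbg["prevalence"] = model.predict(sm.add_constant(cbg[covariates]))
    cbg["est_tobacco_users"] = cbg["prevalence"] * cbg["adult_pop"]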

To evaluate whether oversampling geographic areas with a higher density of tobacco users could significantly improve cost efficiency without unduly decreasing design efficiency, the contractor conducted several simulation experiments of oversampling tobacco-user-concentrated PSUs and/or block groups to optimally balance cost efficiency and design efficiency. The simulation results showed that oversampling block groups, or oversampling both PSUs and block groups, achieved small cost savings but carried an associated statistical penalty in the form of lost design efficiency. Because the gain from oversampling was relatively small and came at the expense of design efficiency, a decision was made not to oversample PSUs and/or CBGs with higher prevalence rates.
Sampling Housing Units at the Third Stage: The third stage involved selecting housing
units within the selected second-stage CBGs. The sample of households was drawn from the
contractor’s in-house, nationally-representative Enhanced Address-based Sampling (ABS) listing
of all addresses in the United States. The foundations of this high-quality ABS frame are sourced
from commercially available versions of the U.S. Postal Service’s (USPS) Computerized
Delivery Sequence (CDS) file. The CDS file is available through nonexclusive license
agreements with qualified private companies and includes variables such as vacancy/seasonal
status, address type (city-style, P.O. box, etc.), single/multifamily, and high-rise. The contractor
supplements the CDS file with the No-Stat file that contains over 9 million primarily rural
mailing addresses. The union of these files accounts for all postal delivery points, giving near-complete coverage of U.S. addresses (Iannacchione, 2011). The contractor licenses both files
from one of only two nationally qualified vendors and receives monthly updates.


The quality of the national ABS frame is enhanced by appending ancillary information
from public and private sources, including geographic and demographic data from sources such
as the U.S. Census Bureau, U.S. Department of Agriculture, National Oceanic and Atmospheric
Administration, and U.S. Bureau of Labor Statistics, and hundreds of person-level characteristics
sourced from private databases such as Acxiom, updated monthly. These data include elements
for each person in the household, including name, age, child age range, race/ethnicity, and SES
data such as education and income. There is also a household size variable modeled by Acxiom.
Addresses have been geocoded into census geography to develop area information. This allows
aggregate neighborhood information (county, zip code, tract, census block group, block) to be
created based on the variables collected in the American Community Survey and the Census.
ABS has emerged as a high-coverage, cost-effective sampling frame for in-person, mail,
and multimode surveys. It is a much cheaper alternative to the traditional counting and listing
method. The ABS coverage in the majority of CBGs is high; however, as expected, the ABS
coverage was low in rural CBGs. We estimated the expected ABS coverage rate for each
sampled CBG, calculated as the ratio of the number of city-style mailing addresses on the ABS
list to the estimated number of HUs in the CBG. If the expected ABS coverage was greater than
50%, the ABS list was supplemented with addresses identified through the Check for Housing
Units Missed (CHUM) procedure. The CHUM procedure, developed at RTI (McMichael et al.,
2008), is similar in concept to the Half-open Interval procedure in that the interviewers search
the selected HU and the prescribed area up to the next HU on the frame, whether or not the next
HU is sequentially next on the list. Interviewers also check a subset of sample blocks so that
housing units in blocks with no city-style addresses on the Computerized Delivery Sequence
have a chance of selection. CHUM takes geocoding error into account and gives every housing
unit one chance of selection with known probability. CHUM is most effective when monitored
and conducted in a separate field visit from the survey interviewing, but it is far less costly than
enhanced listing because only small portions of the geographical areas are searched, while still
giving all housing units a chance of selection through the corresponding sample HUs and
subsampled blocks. And because it is conducted after HUs are selected, not at the frame-building stage, the results are more up to date. The CHUM instrument is included in Attachment 1.


The improved list served as the frame for CBGs having coverage rates at or above the
coverage threshold. For CBGs having ABS coverage less than the coverage threshold, traditional
field enumeration, that is, counting and listing, was used to develop the HU frame. We estimated
that ABS and the CHUM would be used in approximately 90% of the CBGs, and counting and
listing would be used in the remaining 10% of CBGs. On average, 152 HUs were selected using
a systematic random sampling method from each CBG.
Sampling Adult Tobacco Users at the Fourth Stage: At the final stage, we sampled at
most one adult tobacco user from an eligible HU into the panel. The target sample of 4,000 and
actual sample of 3,893 adult tobacco users were distributed disproportionately to four sampling
strata called domains. The four domains were formed by the cross-classification of two age
groups (18–25, 26 or older) and two SES categories (low SES, non-low SES). The sample
allocation is displayed in Exhibit B.1-1.
Exhibit B.1-1. Sample Sizes in Sampling Domains

                            Target Sample Size      Actual Sample Size
    Domain                  N          Prop         N          Prop
    18–25, Low SESᵃ         416        10%          394        10%
    18–25, Non-Low SES      624        16%          490        13%
    26+, Low SES            1,184      30%          1,352      35%
    26+, Non-Low SES        1,776      44%          1,657      43%
    18–25                   1,040      26%          883        23%
    26+                     2,960      74%          3,010      77%
    Low SES                 1,600      40%          1,746      45%
    Non-Low SES             2,400      60%          2,147      55%
    Total                   4,000      100%         3,893      100%

ᵃ Low SES is defined as household income less than $30,000.

We screened household members for SES (combined household income less than
$30,000, or greater than or equal to $30,000), age, and tobacco use status.
As shown in Exhibit B.1-1, to achieve the target sample sizes in the four domains, adult tobacco users aged 18–25 were oversampled, in particular users aged 18–25 with non-low SES, while tobacco users aged 26 or older were undersampled. The probabilities of an adult tobacco user being selected for the panel therefore differ by domain and are predetermined. A young adult user with non-low SES has the highest probability, and an older adult tobacco user with low SES has the lowest probability of being selected in the sample. Poisson sampling was used to determine the rate at which persons in each domain were selected. These sampling rates were continuously monitored and adjusted during data collection to ensure that the target number of tobacco users in each domain was obtained with a minimum amount of screening. When smokeless tobacco users were identified during screening, they were assigned higher probabilities than regular tobacco users in the same domain, thereby increasing their chance of being selected. As noted earlier, no more than one tobacco user was selected from an eligible housing unit.
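The Poisson sampling step can be pictured as an independent Bernoulli draw for each screened, eligible person at a domain-specific rate, which is what makes domain sample sizes random but steerable by adjusting the rates during data collection. The sketch below uses made-up rates; the production rates were predetermined and tuned by the contractor's statisticians:

    import random

    # Hypothetical domain selection rates, higher for the oversampled
    # 18-25 groups; the actual predetermined rates are not shown here.
    SELECTION_RATES = {
        ("18-25", "non-low-SES"): 0.90,
        ("18-25", "low-SES"):     0.80,
        ("26+",   "non-low-SES"): 0.35,
        ("26+",   "low-SES"):     0.25,
    }
    SMOKELESS_MULTIPLIER = 1.5  # assumed boost for smokeless users

    def poisson_select(domain, is_smokeless_user):
        # Independent Bernoulli draw per eligible person (Poisson sampling).
        p = SELECTION_RATES[domain]
        if is_smokeless_user:
            p = min(1.0, p * SMOKELESS_MULTIPLIER)
        return random.random() < p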
B.1.3 Recruitment Response Rates
We understand that for the survey data results to be credible, generalizable, and able to withstand scientific scrutiny, high response rates must be obtained. Our recruitment protocol is designed to achieve higher response rates than online panels that recruit by telephone or use opt-in methodology.

Exhibit B.1-2. Actual Panel Recruitment Response Rates

    Response Rates                    Percentage
    Occupied Household Rate (A)       84
    Screening Response Rate (B)       79
    Eligibility Rate (C)              81
    Household Initiation Rate (D)     80

Exhibit B.1-2 shows the actual response rates at each stage in the recruitment process using our currently approved technical approach. The occupied household, screening, eligibility, and household initiation rates reflect our experience in establishing the panel. The Occupied Household Rate (A) is the percentage of sampled dwelling units occupied by residents. The Screening Response Rate (B) is the percentage of households that were successfully screened as eligible or ineligible. The Eligibility Rate (C) is the percentage of screened households with an eligible member. The Household Initiation Rate (D) is the percentage of eligible household members who completed the full enrollment process (enrollment and baseline surveys). We have assumed similar rates for the panel replenishment efforts.
B.1.4 Precision and Statistical Power
This section provides the statistical basis and justification for the original panel size at
establishment. These calculations and justifications remain relevant for the replenished panel that will result from ongoing and future replenishment efforts. Based on the target sample sizes
presented in Exhibit B.1-1, the relative standard error (RSE) and the minimum power of detecting a 7% difference at the 0.05 significance level for proportion estimates within various domains are estimated and displayed in Exhibit B.1-3. To illustrate, we use three proportion
estimates (p = 0.1, p = 0.3, and p = 0.5). The average RSE over all proportions in Exhibit B.1-3
is 6.5%; this is considered to be reasonably good for a survey with a total sample size of 4,000.
Similarly, the power of detecting a 7% difference within SES, age group, and sex domains is also
high. However, the statistical power within race/ethnicity and tobacco product domains is lower
because of smaller sample sizes in some of those categories.


Exhibit B.1-3. Relative Standard Errors/Power to Compare Prevalence Estimates

                                                         Relative Standard Error for    Minimum Powerᶜ of Detecting
                       Sample    Estimated   Effective   Domain Prevalence Estimates    7% Difference within
    Domain             Sizeᵃ     Deffᵇ       Sample Size p=0.1     p=0.3     p=0.5      Domain (p=0.5)
    SES Status                                                                          95.3%
      Low SES          1,440     1.3         1,108       9.0%      4.6%      3.0%
      Non-Low SES      2,160     1.3         1,662       7.4%      3.7%      2.5%
    Age Group                                                                           75.9%
      18–25            936       1.5         624         12.0%     6.1%      4.0%
      26–44            1,241     1.5         827         10.4%     5.3%      3.5%
      45+              1,423     1.5         949         9.7%      5.0%      3.2%
    Race/Ethnicity                                                                      44.3%
      NH-Black         592       1.5         395         15.1%     7.7%      5.0%
      NH-Others        2,586     1.5         1,724       7.2%      3.7%      2.4%
      Hispanic         422       1.5         281         17.9%     9.1%      6.0%
    Sex                                                                                 93.3%
      Male             1,936     1.5         1,291       8.4%      4.3%      2.8%
      Female           1,664     1.5         1,109       9.0%      4.6%      3.0%
    Tobacco Product                                                                     50.7%
      Cigarette        2,778     1.5         1,852       12.0%     6.1%      4.0%
      Cigar            759       1.5         506         10.4%     5.3%      3.5%
      Smokeless        482       1.5         321         9.7%      5.0%      3.2%

ᵃ Assuming a 90% response rate to the survey. Sample sizes for race/ethnicity, sex, and tobacco product were estimated from the 2010 TUS-CPS.
ᵇ Deff = design effect, which measures the loss of efficiency resulting from the use of cluster sampling and unequal selection probabilities instead of simple random sampling.
ᶜ Differences in percentage estimates will be detected at the 0.05 level of significance.
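The RSE figures in Exhibit B.1-3 follow from the usual formula RSE(p) = sqrt(p(1−p)/n_eff)/p with effective sample size n_eff = n/deff, and the power column is consistent with a two-sided two-proportion z-test comparing categories within each domain grouping. The Python sketch below approximately reproduces the SES rows; small discrepancies from the published values may reflect rounding or slightly different inputs:

    from math import sqrt
    from statistics import NormalDist

    def rse(p, n_eff):
        # Relative standard error of a prevalence estimate p.
        return sqrt(p * (1 - p) / n_eff) / p

    def power_7pct(n1_eff, n2_eff, diff=0.07, p=0.5, alpha=0.05):
        # Approximate power of a two-sided two-proportion z-test.
        se = sqrt(p * (1 - p) * (1 / n1_eff + 1 / n2_eff))
        z_crit = NormalDist().inv_cdf(1 - alpha / 2)
        return NormalDist().cdf(diff / se - z_crit)

    n_eff_low = 1440 / 1.3                         # ~1,108 for Low SES
    print(round(100 * rse(0.1, n_eff_low), 1))     # ~9.0 (%)
    print(round(100 * rse(0.5, n_eff_low), 1))     # ~3.0 (%)
    print(round(100 * power_7pct(1108, 1662), 1))  # ~95, vs. 95.3% published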

B.1.5 Panel Replenishment
We recognize that some panel members will leave the panel because of nonresponse at
each wave of Web surveys, and have allowed for a 35% yearly attrition rate. To maintain a panel
with a constant number of members and the baseline distribution of age group and SES, we are implementing sample replenishment as needed to address panel attrition. We selected extra
CBGs per PSU when the CBG samples were selected for establishing the main panel and use one
or two CBGs per PSU each year for the sample replenishment. The estimated yearly sample sizes
for sample replenishment are provided in Exhibit B.1-4, assuming the same recruitment response
rates as in Exhibit B.1-2 for the main panel, and are equally allocated when replenishment is
conducted. We will set aside a 20% reserve sample yearly (about 2,500 housing units) in the
event estimated eligibility and/or response rates are lower than expected during panel
replenishment.
Exhibit B.1-4. Estimated Sample Sizes for Yearly Sample Replenishment

    Sample                      Sample Size
    Selected HUs                15,624
    Occupied HUs                10,937
    Screened HUs                8,749
    Eligible HUs                1,750
    Selected Tobacco Users      1,750
    Recruited Tobacco Users     1,400ᵃ

ᵃ Will be allocated to four design domains to maintain the same age group and SES status distribution as for the established panel. The design provides for replenishment to be conducted, as needed, based on panel attrition rates.
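The rows of Exhibit B.1-4 chain together through the stage-level rates implied by the ratios of successive rows (roughly 70% occupied, 80% screened, 20% eligible, and 80% recruited). As a sketch only, using those implied ratios rather than official design parameters, the expected yield of a replenishment sample can be projected as follows:

    def expected_recruits(selected_hus, occupied_rate, screened_rate,
                          eligibility_rate, recruitment_rate):
        # Chain stage-level rates to project recruited tobacco users
        # from an initial housing-unit (HU) sample.
        occupied = selected_hus * occupied_rate
        screened = occupied * screened_rate
        eligible = screened * eligibility_rate
        return eligible * recruitment_rate

    # Rates implied by successive rows of Exhibit B.1-4:
    print(expected_recruits(15_624, 0.70, 0.80, 0.20, 0.80))  # ~1,400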

The first panel replenishment effort was initiated in July 2019 and is ongoing as of the time
of this renewal information collection request. While panel member requests to disenroll from
the panel have been infrequent since panel establishment, a lower than estimated response rate at
Study A and the additional elapsed time expected between Study A and Study B prompted us to
initiate the first panel replenishment effort in order to refresh the panel with new members in
advance of Study B. As discussed in Section B.3.1, we experienced a lengthy and unanticipated
delay between panel establishment in 2016-2017 and the first panel study (Study A) in 2018.
This negatively impacted panel member engagement and willingness to respond to the Study A
survey request. Section B.3.1 details additional steps we have taken, based on lessons learned, to
re-engage establishment panel members in preparation for the launch of Studies B and C.


B.2  Information Collection Procedures
This section describes the procedures for panel recruitment, maintenance, and

replenishment, including the weighting plan, panel screening, enrollment, and retention
strategies, and efforts to maximize response rates.
B.2.1 Weighting Plan
This section describes the weighting plan for the main panel sample and the individual
experimental and observational studies, taking into account the complex sample design, panel
replenishment efforts, nonresponse, and attrition from the panel.
B.2.1.1  Weighting the Main Panel Sample

Sample weights are needed to adjust for the sampling approach and nonresponse. They
are developed for every member of the main panel, reflecting the varying probability of selection
discussed in Section B.1, and adjustments for unit nonresponse, coverage error, and extreme
weight values. The weights account for the disproportionate sampling of various subgroups of
interest resulting from the sample design, and the bias that can be introduced by screening and
interview nonresponse. These weights for the main panel members will be used in all subsequent
studies after adjusting them for nonresponse at each study.
B.2.1.2  Weighting the Sample of the First Study

For the first study, the weights for main panel members were adjusted for nonresponse. In
addition, to compensate for potential coverage error, a poststratification adjustment was
implemented. An adjustment of extreme weights was also implemented.
B.2.1.3  Weighting the Sample of Subsequent Studies

For each subsequent study, sample weights will be developed for both cross-sectional and
longitudinal data analyses.
1. Cross-Sectional Analysis Weights—In developing the cross-sectional analysis
weights for a study, the sample replenishment should be accounted for if recent
sample replenishment was implemented. The design weights will be calculated
for each new sample member in the same manner as the design weights were
computed for the main panel sample. The final weights from the first study or
previous study sample, combined with the design weights for the recent sample
replenishment, will be the initial weights for post-survey weight adjustments.

13

These weights will be adjusted for nonresponse and coverage error, with an
extreme weight adjustment applied if required. The fully adjusted weights can be
used independently of prior studies for cross-sectional analysis at each study.
2. Longitudinal Analysis Weights—In addition to the cross-sectional weights for
each experimental and observational study, longitudinal weights may be
developed for longitudinal and trend analyses. Longitudinal weights differ from
cross-sectional weights in that they account for the joint probabilities of response
or study combinations. For example, the first and second study longitudinal
weights adjust by the joint probability or propensity of responding to both studies.
Separate longitudinal weights will be calculated for comparing any two studies.
Longitudinal weights can also be computed for simultaneously analyzing all
studies or any combination of those studies together. We will work with the
contractor to determine the desired set of longitudinal analysis weights as the
experimental and observational studies are implemented (see the sketch below).
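As a schematic illustration only (not the contractor's weighting code), a two-study longitudinal weight divides the panel weight by an estimated joint response propensity. The sketch below uses hypothetical column names and assumes, for simplicity, that the joint propensity factors into the product of the two studies' propensities, which holds only if the response events are independent given the propensity-model covariates:

    import pandas as pd

    def longitudinal_weight(panel: pd.DataFrame) -> pd.Series:
        # `panel` holds a base panel weight plus fitted response
        # propensities for Studies A and B (hypothetical columns),
        # e.g., from logistic regressions on baseline covariates.
        joint = panel["p_respond_a"] * panel["p_respond_b"]  # independence
        return panel["panel_wt"] / joint                     # assumed here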
The most current version of NCHS’ National Health Interview Survey (currently the 2018 survey) will be used as the source for control totals to perform the poststratification adjustment, reducing coverage error and the variance of survey estimates. The WTADJUST procedure in SUDAAN (RTI, 2010) can be used for nonresponse, poststratification, and extreme weight adjustments.
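In practice these adjustments are performed with SUDAAN's WTADJUST; purely to illustrate the underlying idea, the sketch below implements simple raking (iterative proportional fitting), repeatedly scaling weights until weighted marginal totals match external control totals such as those from the NHIS. Column names and control totals are hypothetical:

    import pandas as pd

    def rake(df, weight_col, margins, iterations=25):
        # Minimal raking: scale weights within each category so weighted
        # marginal totals match the control totals in `margins`, a dict
        # of {column: {category: control_total}}.
        w = df[weight_col].astype(float).copy()
        for _ in range(iterations):
            for col, controls in margins.items():
                for category, target in controls.items():
                    mask = df[col] == category
                    current = w[mask].sum()
                    if current > 0:
                        w[mask] *= target / current
        return w

    # Hypothetical usage with made-up control totals (persons):
    # panel["final_wt"] = rake(panel, "nr_wt",
    #     {"age_group": {"18-25": 9.0e6, "26+": 38.5e6},
    #      "ses": {"low": 19.0e6, "non-low": 28.5e6}})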
B.2.2 Initial Implementation of the Panel
A phased approach to panel recruitment and implementation was followed. During the
initial implementation period (approximately the first six weeks), we conducted testing of panel
procedures for process improvement. This included evaluating the materials, procedures, and
systems used to conduct the CHUM, screen and recruit panel members, review participation
requirements and obtain informed consent for Web or mail participation, instruct participants on
accessing and completing the baseline survey and subsequent experimental and observational
studies via the panel Website or mail, and initiate participation in the panel. The initial
implementation period also evaluated procedures for equipping and training select eligible adult
tobacco users with loaned tablet computers to facilitate Web survey access while they are in the
panel. During this initial implementation period, a portion of the national ABS sample was
fielded across two sites with 123 original addresses in each. A total of 17 adult tobacco users
were recruited during the initial implementation phase to serve in the first cohort of the panel.
These panel members were retained in the panel, and their data were retained for use.


During the 6-week initial implementation period, both the mail and field screening
protocols were implemented. For the in-person household visits, field interviewers used panel
recruitment materials and protocols to visit sampled addresses, determine whether they serve
occupied residential dwelling units, conduct the CHUM procedure, administer the field screening
interview to identify eligible adult household members, and, if found, invite the selected eligible
household member to join the panel. As part of this process, interviewers administered the
enrollment questionnaire to consenting panel members and trained them on procedures for
logging in and completing panel studies via the Web, including the initial baseline survey and
future experimental and observational studies. Protocols for identifying and enrolling panelists
who required mail mode or a loaned tablet computer to facilitate Web participation were also
followed.
The objectives of the testing during this initial implementation period were to improve
panel recruitment and implementation processes. This included:
•   Examining the effectiveness of the recruitment materials and protocols in gaining cooperation and addressing questions that prospective panel members may have about their participation.

•   Identifying any software or hardware problems interviewers experience during the recruitment process, including adding missed housing units through the CHUM, doorstep screening of households, and administration of the enrollment questionnaire (in both English and Spanish) to recruited panel members.

•   Gauging the ease or difficulty with which respondents access and complete the baseline survey online, if participating via Web, with particular attention paid to the effectiveness of the training delivered by the interviewer and any usability issues panel members experience in logging into the panel Website and navigating through the Web survey application.

•   Testing the procedures for ensuring that panel members are Web-enabled, including being able to receive panel emails and other information.

•   Identifying respondent concerns about the informed consent protocol, incentive protocol, or other aspects of the panel recruitment process that may hinder long-term commitment. This includes concerns about the tablet equipment agreement if the panel member is being offered the loan of a tablet computer to facilitate Web access while in the panel.

•   Launching the first self-administered survey (the baseline survey) and monitoring responsiveness.

•   Evaluating the effectiveness of initial nonresponse prompting protocols.

At the conclusion of the initial implementation period, a telephone debriefing was
conducted with interviewers to discuss lessons learned, problems experienced in the field, and
ways to mitigate them during the remainder of the panel recruiting effort. Information gathered
informed any needed refinements to the English and Spanish recruiting and screening protocols.
FDA submitted a nonsubstantive change request to OMB for changes to the protocol, materials,
and survey instruments. OMB was informed of the package prior to submission. As noted above,
participants recruited during this initial implementation period were retained in the panel, and
their data were retained for use. They receive the same study requests as all other panel
members.
B.2.3 Panel Recruitment and Replenishment
The array of respondent materials used during panel establishment, including lead letters,
a study brochure, consent forms, nonresponse letters, and various reminder postcards and other
forms, will be used during panel replenishment and maintenance. These are provided in
Attachment 3 (English-language versions) and Attachment 4 (Spanish-language versions). A
custom-designed panel logo has also been created for use on all respondent materials and the
study Website to help panel members easily recognize study correspondence and materials
through a form of “brand” recognition.
B.2.3.1  Panel Screening and Recruitment

As noted in Section A.2.3, eligibility screening of prospective households for the panel is
conducted in two phases. Sampled households first receive a brief mail screener designed to
determine whether there are any age-eligible adult tobacco users residing in the home. The mail
screening operation is designed to reduce the number of sampled addresses that require an in-person screening visit, thereby reducing data collection costs. The mail screening instrument
includes a cover letter explaining the purpose of the survey contact and requesting the household
complete and return the questionnaire in the enclosed postage-paid envelope. The letter and mail
screener are printed in both English and Spanish. As a token of appreciation for completing the
mail screening survey, the mail screening package includes a $2 prepaid cash incentive.
Following this initial mailing, a postcard reminder is sent to all nonresponding households to
serve as both a reminder and a thank you for completing the survey. A second mail screener questionnaire is sent to any remaining nonresponding households following the postcard
reminder. This additional survey mailing does not include the $2 prepaid cash incentive. Based
on our experience at panel establishment, we anticipate achieving at minimum a 25% response
rate for the mail screening questionnaire.
An in-person field screening visit is made by an interviewer to all households that report
one or more eligible adult tobacco users in their completed mail screener. Additionally, all
nonresponding households are visited in an effort to complete the screening in-person and collect
the data needed to assess eligibility. Households that complete the mail screener but report no
adult tobacco users are eliminated from the field screening operation. However, as a quality
control check of the mail screening results, a 10% sample of these households is selected for an
in-person visit in an effort to validate the mail screening data. Households with eligible sample
members identified during the quality control check are considered for the panel. Field screening
is conducted using the interviewer’s tablet computer.
Lead letters are mailed to all sampled addresses that require in-person screening,
including those that do not return the mail screener. When making in-person visits, field
interviewers provide a copy of the lead letter (if needed) and study brochure to legitimize their visit and help answer questions posed by the household. The lead letter and study brochure are
available in English and Spanish. As needed, the interviewer also presents his/her letter of
authorization to verify he/she is working legitimately for the contractor. When attempting
contact, field interviewers leave “Sorry I Missed You” (SIMY) cards when encountering
situations where no one is home at the time of their visit.
If a household is found to include one or more eligible adult members, the field screening
application may select one eligible adult to receive the panel invitation. The interviewer then
administers the enrollment interview to verify the demographic and tobacco use data collected in
the screener, review the panel participation requirements, including length of commitment,
frequency of contact, and incentives participants can expect to receive while in the panel, obtain
informed consent to join the panel, and collect detailed contact information to facilitate
subsequent contact while in the panel. Data from the enrollment interview, specifically
information about access to and comfort level with computers and availability of Internet access
in the home or on a personal computing device, informs the decisions about the mode of participation (Web or mail) that should be offered to the sampled adult. Once received by the
contractor, the enrollment data are also used to identify and select the subset of eligible adults
who are not Internet-capable and are uninterested in mail mode participation, but who may be
successful Web panelists if provided with a reliable means of accessing the Internet and thus the
panel Website. Appointment reminder cards are provided to eligible adults who are not
immediately available but instead request a future appointment for the panel enrollment
interview. Appointment cards are available in English and Spanish.
Once enrolled, the interviewer instructs the panel member on the procedures for
accessing the panel Website (if participating via Web) and completing the baseline survey on
his/her own. The baseline survey includes a brief tutorial that allows the panel member to
practice answering sample survey questions. For those panelists who are enrolled as mail
participants (maximum of 800 panelists), the baseline survey is administered by the field
interviewer using his/her tablet computer. The interviewer may also administer the survey to
those panelists offered the loan of the tablet, if needed. All screening, enrollment, and baseline
instruments are available in both English and Spanish.
In the event reliable Internet connectivity cannot be established during the enrollment
visits to the home, interviewers are equipped with paper back-up copies of the baseline survey to
record the panel member’s answers. This allows the interviewer to complete the enrollment
process with the panel member. The interviewer subsequently transfers the information from the
paper questionnaire into the Web survey and returns the paper form to the contractor for receipt
and secure storage.
As noted in Section A.2.1, we anticipate offering the loan of a Web-enabled tablet
computer to a subset of the eligible adult tobacco users who are likely to be successful Web
participants but who do not have the means—that is, no access to a computer, data-plan-enabled
cellular device, or the Internet in their home. Providing access to a tablet computer while in the
panel allows these panel members to participate online. This is an important step in mitigating
coverage and nonresponse bias and helps maximize the number of panelists who can receive
stimuli (e.g., media images) electronically for the experimental and observational studies. We
have allowed for a maximum of 400 panel members, or approximately 10% of the panel, to
participate using a tablet computer loaned by the project. These adults are identified from screening and enrollment data collected by the field interviewer and subsampled by contractor
statisticians. We will enroll a maximum of 800 mail mode participants if we find a higher
percentage of panel members express a preference for this mode.
Those eligible to receive the tablet computer offer are contacted again in-person to
discuss the tablet option and attempt to complete the enrollment process. As part of this effort,
the interviewer completes the panel consent process, delivers the tablet, provides a short training
on the use of the device, and has the panel member review and complete the equipment
agreement form governing the use and care of the device and the protocol for returning the tablet
at the end of their panel participation. The interviewer instructs the panelist on how to log into
the panel website with the tablet computer and assists with completion of the baseline survey, as
needed. The interviewer is available to answer any questions the panel member may have about
navigating the website or completing the self-administered survey. All panel members receive a
“cheat sheet” which includes tips for accessing the panel Website. Additionally, panel members
who receive a tablet computer loan are provided with a tablet user “cheat sheet” which contains
general use guidance. Both of these documents are available in English and Spanish.
As described in Section A.2.3, interviewers complete a short observation questionnaire at
the conclusion of the enrollment process and upon leaving the panel member’s home. About one
week after enrollment, panel members are also contacted by the contractor to thank them for
their participation in the panel. The contact mode varies based on the panel member’s
participation mode. For example, Web participants receive an email or text message from the
contractor, while mail mode participants receive a thank you letter. Panel members who are
using a loaned tablet are called by the recruiting interviewer to thank them for enrolling and to
help address any problems they may have experienced with the device.
B.2.3.2  Informed Consent Procedures

Verbal consent for the field screening interview is obtained from a knowledgeable adult
household member who agrees to respond to housing unit eligibility screening questions. Adult
tobacco users who are selected for and agree to enroll in the panel undergo a more
comprehensive 3-step consent process. This includes (1) obtaining verbal consent for the
enrollment interview, (2) obtaining verbal consent for the use of computer audio recorded
interviewing (CARI) during portions of the enrollment interview, and (3) obtaining written consent for the 3-year panel participation (Web or mail). For those adults offered the loan of a
tablet computer while in the panel, the consent process also includes review and completion of
the equipment agreement form. Consent forms are available in both English and Spanish.
Consent will also be obtained for each of the experimental and observational studies
conducted with the panel. The Web questionnaires will include an introductory question that
requires panelists to actively consent (answer “yes” or “no”) to participate in each study. Mail
mode participants will be informed that their completion and return of the mail survey form
indicates their consent to participate.
Near the end of their 3-year panel commitment period, panel members may be invited to
continue their participation in the TCS for up to three years through a web/mail re-consent
process. Web re-consent would involve reading the re-consent script and actively consenting
(answering “yes” or “no”) to continue participation in the panel. Mail re-consent would involve
signing and returning the re-consent form to the contractor. As part of their panel enrollment
consent, and the re-consent process (if implemented), panel members will be informed that a
Certificate of Confidentiality exists for this research. Panel members will also be informed that
TCS researchers may use, share, or release their deidentified panel data for similar research in
the future without obtaining additional informed consent.
B.2.3.3  Interview Content

Two questionnaires are used in the eligibility screening of prospective households. The
mail screener, estimated at 2 minutes in length, collects high-level information about the number
of adult household members and their current use of cigarettes, cigars or little cigars, and
smokeless tobacco. Enumeration of the household and selection of an eligible tobacco user is
accomplished as part of the subsequent in-person field screening visit. The field screening
questionnaire, which averages 9 minutes to complete, is used to verify that the address serves an
occupied housing unit, determine if there are any missed housing units within the structure,
enumerate adult members of the household, and determine whether any of the rostered adults are
current tobacco users. The questionnaire collects data on adult household members’ current
tobacco use (cigarettes, cigars or little cigars, and smokeless tobacco) for panel eligibility
purposes, and basic demographic information about each adult household member to inform
sample selection, including the oversampling of young adults 18-25 years of age. The screening information determines whether an adult is selected from the household and invited to join the
panel.
The enrollment questionnaire, which averages 18 minutes to complete, collects data to
verify eligibility information collected during screening, establish the panel participation mode
(Web, mail, Web via loaned tablet), obtain informed consent, and maintain contact with the
panel member over time. Data from the survey are also used to inform future support needs and to
establish important benchmarks for subsequent analyses, including examination of demographic
characteristics of survey nonrespondents and panel members who attrite over time.
The baseline questionnaire, which averages 6 minutes to complete, collects more detailed
information about the panel member’s tobacco use history to establish important tobacco use
benchmarks for subsequent analyses. The questionnaire also collects additional information to
gauge panel members’ comfort level with computers. The baseline survey provides important
covariates for nonresponse adjustments, to correct for bias due to wave nonresponse.
The interviewer observation questionnaire captures the interviewer’s observations about
the panelist’s enrollment process and risk of attrition from the panel. The questionnaire also
captures any questions or issues reported by panel members using loaned tablets.
Panelists are asked to confirm or update their contact information, including name,
address, telephone number, and contact information for up to two people named in the
enrollment survey as being able to help locate them if they move. These requests for contact
information are folded into experimental and observational studies or other forms of planned,
non-survey contacts (see Section B.2.4). Up to 8 experimental and observational studies will be
conducted with the panel. The study questionnaires, which are expected to average 15–20
minutes in length and vary in content, will assess tobacco consumers’ responses to new and
existing warning statements and labels on product packaging and in advertisements;
communication about harmful and potential harmful constituents in tobacco products; and
perceptions of tobacco products, advertising, and marketing. The first of these panel studies,
Study A “Brands and Purchasing Behavior,” was included in the currently approved information
collection request. Study A focused on consumer purchasing behavior, tobacco brands, and use
of coupons and price promotions for tobacco products. The purpose of the study was to collect
information about panel member’s tobacco product brand loyalty and more accurate measures of

21

their tobacco product consumption. Study B “Coupons and Free Samples” and Study C
“Consumer Perceptions of Product Standards” are included in this renewal information
collection request. Study B will be an observational study offered to all panelists that will provide a more in-depth examination of tobacco product promotions, namely free samples and coupons, following the ban on distribution of free samples of tobacco products (with the exception of certain smokeless tobacco exemptions). The ban took effect when FDA finalized the “Deeming Rule” on August 8, 2016, which extended FDA’s regulatory authority to all tobacco products.
Study C will be an experimental study examining how a hypothetical tobacco product standard
may impact consumers’ perceptions, attitudes, and tobacco use behavioral intentions.
Several additional questionnaires are used to support the data collection operations. These
include a Tracing/Nonresponse Follow-up Questionnaire completed by field interviewers who
conduct in-person tracing or nonresponse follow-up of panel members, and brief telephone
verification surveys for use in verifying the quality of field interviewer performance during the
panel screening and enrollment operations.
Attachment 1 includes copies of the English-language versions of the screening,
enrollment, baseline, interviewer observation, and Study B-C questionnaires. The questionnaires
used for in-person tracing/nonresponse follow-up and telephone verification of field interviewer
performance are also included. Attachment 2 provides copies of the Spanish-language
questionnaires.
B.2.3.4  Spanish Translation

All questionnaires and panel member materials (e.g., lead letters, brochures, consent
forms, FAQs) are available in both English and Spanish. The contractor’s translation
professionals are native speakers from Mexico, Peru, Venezuela, and other countries who are
skilled at producing Spanish translations that are grammatically and terminologically accurate.
The goal in performing the translations is to produce materials that remain true to the intent of
the English documents yet provide the information to non-English speakers in both a
linguistically and culturally appropriate way. A multistep, forward translation procedure that
involved a careful review of the source documents, examination of key terminology and research
of any unfamiliar vocabulary, translation, editing by a second native-speaking translation professional, proofreading, and final quality control review was used for the translation of panel
participant materials.
In addition to providing Spanish-language translation services, contractor language
specialists also conduct the training of bilingual field interviewers, conduct quality control
reviews of Spanish-language interviews, and support calls to the panel’s toll-free number from
Spanish-speaking panel members.
B.2.4 Panel Maintenance
Maintaining frequent contact and providing readily available support to panel members
throughout their time in the panel is critical to minimizing attrition and achieving high response
rates for each study. The literature on panel maintenance is growing, but there is still much to be
learned about optimal strategies for maintaining a healthy and productive panel, especially one
that is focused on a subpopulation such as tobacco users. A comprehensive, multipronged
approach is being used to maintain the panel and minimize attrition throughout the study period.
Panel maintenance activities, conducted in non-study months, involve the following types
of contacts: email, text, mail, or telephone correspondence from the contractor to ensure contact
information is accurate, provide study updates and findings, or announce upcoming study
requests.
An extensive support network is deployed for the data collection and panel maintenance
operations to assure respondents that we are invested in them and provide prompt response to
time-sensitive survey requests. This includes:
•   Ongoing sampling support to select survey samples, replace sample members who attrite, and refresh the sample as needed.

•   Ongoing programmer support to maintain the survey control and case management systems, send e-mail and text prompts and automatic survey notifications by telephone, and troubleshoot system issues in the field.

•   Ongoing triage support available through e-mail or a toll-free number that rings to a help desk operated during normal business hours, and in-house referral to project staff who can address questions about the survey content or process, or to technical support staff who can respond to hardware, connectivity, or other technical issues.

•   Follow-up by contractor technical support personnel for more challenging problems that require further investigation.

•   In-person follow-up by field interviewers to help troubleshoot technical problems in person, including providing retraining on procedures for accessing and completing the Web surveys.

Increased support is also provided to panel members who experience technical
difficulties during the initial weeks of the panel or who are perceived by interviewers as being at
greater risk of attrition, in particular due to perceived discomfort with the Internet, computers, or
the initial self-administered survey task (baseline survey). Increased support is also provided to
the subset of panelists who are loaned tablet computers to facilitate online survey completion.
This may include a telephone call or visit from the field interviewer within 2–3 days after
recruitment to confirm that the panel member is able to log in to the panel Website successfully
on his/her own and to inquire about any technical or usability issues. Panel members are also
provided with answers to frequently asked questions (FAQs), a troubleshooting guide (“cheat
sheet”) that allows them to investigate and resolve more common technical problems on their
own, and contact information for contractor support personnel during recruitment. Copies of
these items are included in Attachments 3 and 4 with other panel member materials.
Additionally, links on the panel Website provide ready access to the FAQs online as well as a
quick means of e-mailing contractor support staff with questions or technical support inquiries.
At an early point in the planning process, the question arose as to whether to retain or
drop panelists who stop using tobacco. Because of recidivism rates, it was decided to retain all
enrolled panel members regardless of changes in their tobacco use patterns. Subsampling of
panelists may be implemented, however, for specific experimental and observational studies that
are intended solely for current users of one or more specific tobacco products.
B.3  Methods to Maximize Response Rates and Assess Non-Response Bias

B.3.1 Response Rates
The incentive strategy, described in detail in Section A.9 and Attachment 6, is a key
component of our overall approach to maximizing response rates. We believe that incentives are
critical to recruiting the desired number of panel members, obtaining their commitment for the
full 3-year period, and maintaining their active involvement in the experimental and observational
studies while in the panel. Moreover, providing older, less technically savvy adults with an
alternative means to comfortably participate (mail mode) is also important to gaining and maintaining cooperation long-term. Additionally, loaning a select group of eligible adults a Web-enabled tablet computer for use while in the panel is a practical, effective, and reliable means of
minimizing bias while maximizing response via Web to the planned studies.
Several additional strategies are used for reducing nonresponse, the primary one being in-person recruitment of panel members, which we believe leads to significantly higher recruitment rates than would be achieved if sample members were contacted via mail, telephone, or web.
Others include:
•   Training field interviewers thoroughly on panel recruitment methods and available resources and processes to (1) overcome respondent objections, (2) resolve restricted access problems, (3) safely and successfully work in dangerous neighborhoods, and (4) reach difficult-to-contact respondents such as those seldom at home.

•   Using the study logo on all respondent materials and the panel Website to maximize brand recognition.

•   Using lead letters, study brochures, e-mails, and text messages to address frequently asked questions about the panel or individual studies.

•   Emphasizing privacy in all aspects of the panel experience.

•   Using tailored nonresponse letters addressing specific reasons for nonparticipation (see Attachments 3 and 4) at both the screening level and during the enrollment phase.

•   Implementing field supervisor review and approval of all noninterview cases.

•   Hiring sufficient numbers of bilingual interviewers so cases are rarely lost because of a Spanish-language barrier.

•   Designing study protocols and questionnaires that simplify the respondent task.

•   Providing easy access to project and information technology (IT) staff to address technical or other questions (see, for example, the online technical support request form and password reset scripts in Attachments 3 and 4).

Tracking of movers is also critical to achieving high response rates and maintaining the
panel. Detailed contact information is collected and maintained for each panel member by the
panel contractor, including name, address, e-mail addresses, telephone numbers, and contact
information for relatives or friends who will know how to reach the panel member in the event of
a move. A unique 8-digit identification number is assigned to each sample member and used for
storage and retrieval (see A.10: Assurance of Privacy Provided to Respondents for more detail).


The locator data are updated periodically as part of each experimental or observational study.
Panel members are also provided with a means to update their contact information on the panel
Website at any time, and encouraged to notify the contractor about upcoming moves or name,
address, or telephone number changes via the panel Website. Additionally, forwarding
information and address corrections are requested with any communications provided to panel
members via the U.S. Postal Service.
The contractor deploys both centralized tracing and in-person field tracing to maximize
location rates and minimize sample attrition. Tracing professionals in the contractor’s call center
track hard-to-locate sample members using an extensive array of interactive tracing databases
and other resources to generate new leads and contact panelists who have relocated. Field
interviewers are trained on in-person tracing techniques, including strategies for generating new
contact leads from current residents and neighbors of the panelist’s last known address, as well
as relatives and other contact persons, postal carriers, and other local community sources. Field
staff training sessions include reviews of general tracing procedures and locating strategies that
are tailored to specific populations, such as low-income and minority populations.
The overall unweighted enrollment response rate for panel establishment was 82.1%. Response rates varied by panel member demographic characteristics, ranging from a low of 72.3% for the 65 years and older subgroup to a high of 90.8% for the African American (non-Hispanic) population. We expect to achieve similar response rates for the current replenishment sample as well as for future replenishments.
As described earlier in Section B.1.5, there was a lengthy and unanticipated delay
between the establishment of the panel and the launch of the first panel study. This extended
period of panel member inactivity had a negative impact on panel member engagement and their
responsiveness to the Study A survey request. Despite extensive panel member nonresponse
prompting and tracing, including telephone and field interviewer prompting, many panel
members were unwilling to complete the web or mail survey. As a result, the overall unweighted
response rate for Study A (43.3%) was lower than originally estimated.
We have taken several important steps to address the challenges experienced in Study A
and those anticipated with the delay between Study A and Study B, including implementing
measures to re-engage panel members and reduce the time between future survey requests. First, we have developed and included in this renewal request several new respondent materials
designed to legitimize and reinforce the importance of this research for panel members. These
materials will be used as part of the contractor’s overall panel outreach and prompting approach.
In addition, we have included the next two experimental and observational studies (Studies B
and C) in this renewal request so they can be conducted in quick succession over the next 12
months (October 2019 – September 2020). Providing panel members with an opportunity to
receive the $15 cash or digital gift card incentive for multiple surveys in a relatively short
amount of time will be an additional means of re-engagement. Approximately one week before
each study launches, all panel members will receive a heads-up email, text, auto-call, or letter
alerting them to the upcoming study and encouraging them to share any updates to their contact
information in advance of the study. As part of each study, all panel members will also be given
an opportunity to confirm or update their contact information to facilitate the receipt of the
incentive payment as well as subsequent panel communications.
We are also currently conducting an extensive advance tracing operation for
establishment panel members prior to the launch of Study B. This includes telephone tracing by
the contractor’s Call Center and tracing operations personnel, Call Center interactive database
tracing to identify new location leads, and in-person tracing by the contractor’s field
interviewers. The goal of this effort is to reconnect with each panel member and confirm or
update their contact information in advance of Study B. When panel members are located, they
are being updated on the timeline for the upcoming panel studies and reminded about how to
participate online (if web mode participant) or by mail. Panel member tracing, nonresponse
prompting, and Helpdesk support will continue throughout Study B and C data collections to
maximize participation for each survey.
Beyond these measures, and as noted in Section B.1.5, we are currently undertaking the
first panel replenishment effort to replace panel members who have attrited. The newly enrolled
panel members will receive their initial panel survey (Study B), followed by Study C, within a
few months of their enrollment. These panel members will also receive the heads-up
announcements alerting them to the impending launch of each study. We believe the
combination of these measures will position us to achieve higher response rates in subsequent
studies, and have assumed an 80% response rate for Studies B and C.


B.3.2 Nonresponse Bias Assessment
We studied and measured nonresponse bias at the original recruitment stage and at Study A, and we plan to do so for each panel replenishment phase. We will also assess nonresponse bias for at
least several future experimental or observational studies. Extensive analysis of nonresponse
cases and panel members who leave the panel early will be conducted to inform subsequent
refusal conversion and panel replenishment activities. This includes development of propensity
models predicting the likelihood of panel attrition as a function of demographic characteristics,
interviewer observations of the recruitment experience and likelihood of attrition, and historic
panel behavior to identify cases that may need additional contacts and/or interviewer effort to
remain in the panel.
We recognize that some panel members will request to end their participation in the panel
early, before the end of their 3-year period. We will respect panel members’ decisions to leave
the panel early and will provide them a formal disenrollment letter thanking them for their
participation and will send any outstanding incentive payments they are owed at the time of their
withdrawal. Other panel members may demonstrate their lack of continued interest through a
pattern of nonresponse across multiple studies or lack of responsiveness to panel maintenance or
nonresponse follow-up contacts. We will assess each situation individually and make case-level
decisions about whether or when to cease contact. If a decision is made to halt further contact
efforts, the panel member will be sent a disenrollment letter along with any outstanding incentive
payments they are owed. English and Spanish-language versions of the disenrollment letters are
provided in Attachments 3-44, 3-45, 4-44, and 4-45.
There are two contributing components to nonresponse bias: the nonresponse rate and the difference between the responses of respondents and nonrespondents (Kish, 1965). If both components are small, the bias will be negligible. For the bias to be substantial, there must be a large nonresponse rate, a large difference between the responses of respondents and nonrespondents, or both. For example, nonresponse bias would be large if older sample members tended not to respond and their tobacco use patterns differed from those of younger respondents.
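In the standard deterministic formulation, this decomposition can be written as

Bias(ȳ_r) = W_nr (Ȳ_r − Ȳ_nr),

where ȳ_r is the respondent mean, W_nr is the nonresponse rate (the proportion of nonrespondents), and Ȳ_r and Ȳ_nr are the means among respondents and nonrespondents, respectively; the bias is the product of the two components just described.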
Although response rates have been used as a key measure of data quality (Biemer & Lyberg, 2003), low response rates are not generally predictive of nonresponse bias (Groves & Peytcheva, 2008). Researchers have therefore explored alternative indicators for detecting nonresponse bias (Wagner, 2012). We use three standard methods for assessing nonresponse bias due to unit nonresponse: response rate subgroup analysis, indirect comparisons of survey outcomes, and comparison of sample survey outcomes with corresponding population benchmarks (Wagner, 2012). We believe that these three approaches identify major sources of nonresponse bias and suggest corrective strategies. There are several stages involved in developing and maintaining the panel. The stage most at risk for nonresponse bias is the original recruitment, which is expected to experience the lowest response rate. Consequently, this is the stage on which we focus most of our efforts, especially since all subsequent panel surveys and estimates are based on the original recruitment stage. However, we reiterate that a strictly representative panel is not required for the majority of the work that is currently planned.
B.3.2.1   Compare Response Rates for Subgroups
In this first method, we calculate and compare response rates for key characteristics (e.g., household size, socioeconomic status, race/ethnicity, geographic location, urbanicity) that are available for both respondents and nonrespondents in the frame files. Because the contractor's maintained frame is ABS-based with a considerable amount of appended data, we have an ample supply of indicators to use in this analysis.
Response rate differences in those key characteristics provide insights into possible nonresponse bias to the extent those characteristics are correlated with the survey outcomes. We also use those characteristics as independent variables and the response indicator as the dependent variable to fit a logistic regression model. The predicted response probability/propensity is estimated from the model, and the weighted (design-based weights are used) standard deviation of the estimated response propensities, S(p), is calculated. The R-indicator (Schouten et al., 2009) is then calculated as R(p) = 1 - 2S(p), where 1 indicates good representativeness and 0 indicates poor representativeness.
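As a concrete illustration, the following minimal Python sketch (not the contractor's production code) computes R(p) from frame data; the column names (resp_flag, design_wt) and the covariate list are hypothetical placeholders:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def r_indicator(frame: pd.DataFrame, covariates: list,
                response_col: str = "resp_flag",
                weight_col: str = "design_wt") -> float:
    """Estimate R(p) = 1 - 2*S(p) from a frame file with a 0/1 response
    indicator and design-based weights (Schouten et al., 2009)."""
    # Dummy-code the frame covariates and fit the response propensity model.
    X = pd.get_dummies(frame[covariates], drop_first=True).astype(float)
    y = frame[response_col].astype(int)
    w = frame[weight_col].astype(float)
    model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
    p = model.predict_proba(X)[:, 1]

    # S(p): design-weighted standard deviation of estimated propensities.
    p_bar = np.average(p, weights=w)
    s_p = np.sqrt(np.average((p - p_bar) ** 2, weights=w))
    return 1.0 - 2.0 * s_p
```

An R(p) near 1 means the estimated response propensities vary little across frame characteristics, indicating good representativeness of the respondent pool.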
B.3.2.2   Compare Differences of Survey Outcomes Indirectly
For the second method, we use two approaches to assess nonresponse bias by comparing survey outcomes between respondents and nonrespondents indirectly. Some nonresponse models suggest that units that require more effort to respond (for example, more callbacks, incentives, or refusal conversion) are similar to units that do not respond (Lin & Schaeffer, 1995). Thus, the first approach involves categorizing the respondents according to their level of effort (LOE), such as number of contact attempts, whether they ever refused, and early versus late response, and comparing survey estimates (weighted by design-based weights) across the categories. The differences among LOE categories give a reasonable indicator of the magnitude and direction of nonresponse bias.
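A minimal sketch of this level-of-effort comparison, assuming a respondent-level data frame with hypothetical outcome, LOE-category, and design-weight columns, follows:

```python
import numpy as np
import pandas as pd

def estimates_by_loe(resp: pd.DataFrame, outcome_col: str,
                     loe_col: str, weight_col: str) -> pd.Series:
    """Design-weighted mean of a survey outcome within each level-of-effort
    (LOE) category, e.g., number of contact attempts or ever-refused status."""
    return resp.groupby(loe_col).apply(
        lambda g: np.average(g[outcome_col], weights=g[weight_col]))
```

A monotone trend in the weighted estimates from low-effort to high-effort categories would suggest the likely direction of any bias attributable to the remaining nonrespondents.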
The second approach is based on findings from stochastic nonresponse models that the nonresponse bias of a mean is a function of the correlation between the response propensity and the survey variables of interest (Bethlehem, 2002). We use logistic regression to estimate response propensities for all respondents and examine the correlation between the predicted propensity and the survey outcome variables. Each respondent has a propensity score as well as a value for each major outcome variable; a correlation between propensity and outcome suggests the presence of nonresponse bias. An alternative is to divide the responding units into propensity groups according to their response propensities and compare the survey estimates across the groups. Either a high correlation between survey outcomes and predicted propensities or differences in survey estimates among propensity groups may suggest that nonresponse bias exists in the panel data.
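Both checks can be sketched as follows, assuming each respondent record carries a predicted propensity (for example, from a logistic regression such as the one above) and a value for a major outcome variable; the column names are hypothetical:

```python
import pandas as pd

def propensity_bias_checks(resp: pd.DataFrame, outcome_col: str,
                           propensity_col: str, n_groups: int = 5):
    """Correlation between estimated response propensity and a survey
    outcome, plus mean outcomes across propensity groups; a sizable
    correlation or spread across groups suggests nonresponse bias."""
    corr = resp[propensity_col].corr(resp[outcome_col])
    groups = pd.qcut(resp[propensity_col], q=n_groups, labels=False)
    group_means = resp.groupby(groups)[outcome_col].mean()
    return corr, group_means
```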
B.3.3 Compare Respondent and Population Benchmarks
We also measure nonresponse bias directly by comparing our panel participants' distributions with distributions for the corresponding target population. Because we are dealing with the specific population of tobacco users, we use benchmark data from a major national survey such as the NHIS. This serves as the source of our gold-standard distributions, and we measure the extent to which our panel participants approximate those target distributions. We use unweighted data to make these comparisons. For example, we compare the distribution of panel characteristics with the corresponding NHIS distribution of tobacco users. This analysis jointly evaluates gender, age, socioeconomic status, race/ethnicity, and region. Significant differences on any of these variables indicate the presence of nonresponse bias, which should be flagged and quantified. Furthermore, once we identify differences in the joint characteristics of the two populations, we are in a position to use those variables to calculate adjustment weights. A final comparison of weighted panel distributions with the benchmark targets confirms that the weighting process has brought the sample data in line with the gold standards and thus eliminated the bias associated with the variables used in the weighting process. As described in Section B.3.5, analysis of the original panel points to very low levels of nonresponse bias using unweighted survey data, and that low level becomes even smaller when we use weighted data.
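As a simplified, unweighted illustration of this benchmark comparison (in practice, a design-based test accounting for the complex NHIS and panel designs would be preferred), panel category counts can be tested against benchmark shares as follows; the inputs are hypothetical:

```python
import numpy as np
from scipy import stats

def benchmark_gof_test(panel_counts, benchmark_shares):
    """Chi-square goodness-of-fit of panel category counts (e.g., by
    race/ethnicity) against benchmark population shares, such as the
    distribution of tobacco users in the NHIS."""
    observed = np.asarray(panel_counts, dtype=float)
    expected = np.asarray(benchmark_shares, dtype=float) * observed.sum()
    return stats.chisquare(observed, f_exp=expected)
```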
B.3.4 Weight Adjustment to Minimize Nonresponse Bias
The results of the nonresponse bias analyses indicate whether nonresponse bias exists, the magnitude of any bias, and possible methods for reducing it. The design weights are adjusted for nonresponse, and the nonresponse-adjusted weights are further poststratified to ACS total population and housing unit counts for important characteristics. We calculate weights using the contractor's proprietary software, SUDAAN, which uses generalized exponential modeling (Folsom & Singh, 2000) to adjust the design weights for nonresponse and coverage imbalance, controlling for all variables that show differential response rates or that relate to the survey outcome variables. We expect the nonresponse and poststratification adjustments to the weights to reduce nonresponse bias. However, we recognize that these adjustments cannot eliminate nonresponse bias completely, and we will take that into consideration in analyses of the study data.
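The generalized exponential model itself runs in SUDAAN. Purely as a conceptual sketch of a calibration-type weight adjustment (simple iterative raking, not the GEM procedure), with hypothetical column and control names:

```python
import pandas as pd

def rake_weights(df: pd.DataFrame, weight_col: str, controls: dict,
                 max_iter: int = 50, tol: float = 1e-6) -> pd.Series:
    """Iteratively scale weights so weighted margins match control totals.

    `controls` maps a column name (e.g., "age_group") to a dict of
    {category: population_total}, such as ACS counts."""
    w = df[weight_col].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for col, totals in controls.items():
            for cat, target in totals.items():
                mask = df[col] == cat
                current = w[mask].sum()
                if current > 0:
                    factor = target / current
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return w
```

Unlike this simple raking, the generalized exponential model bounds the individual adjustment factors, which helps control extreme weights (Folsom & Singh, 2000).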
B.3.5 Nonresponse Bias Assessment Results
Based on our analyses at panel establishment, we concluded that the response rates were
relatively high across most domains, leaving limited room for significant nonresponse bias. We
also concluded that there is little evidence of significant nonresponse bias in the distribution
patterns of the sample population. Users can be confident that the impact of nonresponse bias on
analyses involving the entire sample was relatively minor. However, for some of the smaller
domains (e.g., Asians), the response rate was relatively low and there is more room for
nonresponse bias.
At panel establishment, we first measured response rates at two stages—screening and
enrollment—for the total sample and for various demographic domains. The results indicated
that at both stages the overall and domain response rates were approximately 80%. Some
domains (e.g., Asians) had lower response rates at the enrollment stage, but in general the
response rates were relatively high, thus mitigating the risk of nonresponse bias.


We then measured nonresponse bias at the screening and enrollment stages. For the
screening interview, we compared the set of screening respondents and their demographic
distributions with comparable distributions for the entire population using data from the ACS
(2011-2015). Statistical tests of the TCS-ACS difference were not significant at the 5% level.
During the enrollment stage, the sample size was limited to those who answered the field
screening questionnaire, were deemed eligible for the panel, and agreed to join the panel. For
these cases, we had a two-pronged strategy for measuring nonresponse bias. We first compared
respondents with any nonrespondents for whom we had basic demographic information from the
screener. There was little evidence of significant nonresponse bias introduced at this point in the panel creation process. The weighted results told a very similar story.
We then compared the final panel of responders with the comparable set of responders on
the 2015 NHIS. In this case, we focused on cigarette users for two reasons: (1) they represented
the vast majority of our panel, and (2) we could readily obtain NHIS data for that population.
The underlying distributions of cigarette smokers in the TCS panel very closely track the
corresponding distributions from the NHIS. In looking at weighted results, we found that the
weighted estimates more closely resemble the NHIS benchmarks than do the unweighted
estimates. This is a direct result of the weighting procedure which aims to bring the weighted
sample results in line with known population benchmarks.
It is important to note that our analysis focused only on demographic dimensions of
nonresponse bias. Differences in demographic characteristics do not necessarily suggest there
may be nonresponse bias in substantive variables (Groves and Peytcheva, 2008; Peytcheva and
Groves, 2009). Moreover, such differences are mitigated through poststratification adjustments and therefore represent ignorable nonresponse bias. To study nonresponse bias with respect to
substantive variables related to tobacco use, we will use data from the planned experimental and
observational studies.
To study nonresponse bias for Study A, we compared the weighted distributions (weighted by panel weights) of the respondents and nonrespondents across several basic demographic characteristics from the field screener: race/ethnicity, gender, age, education, employment status, and region. The two distributions looked similar, with some differences in the magnitude of the proportions. For race/ethnicity, Study A overrepresented Whites and underrepresented Asians. Even though the distributions of respondents and nonrespondents looked similar, statistical tests (Wald chi-square tests) indicated discrepancies on each of these characteristics except employment status. It is thus highly recommended that nonresponse adjustments, through weighting, be included as part of any analysis.
We also compared the weighted distributions between Study A respondents only and all
panel members (including Study A nonrespondents). The weighted distributions of Study A
respondents were calculated using final analysis weights that have been adjusted to account for
Study A nonrespondents. The weighted distributions of the whole panel were calculated using
the original TCS analysis weights. The two distributions looked very similar, and this comparison indicated that weighting for nonresponse adjustments may reduce potential nonresponse bias in the survey estimates or analyses produced with the Study A respondents' survey data.
In addition to the comparisons with the TCS panel members described above, we also compared Study A respondents with respondents from the 2017 NHIS on survey items that indicate current use of cigarettes. In both surveys, we defined current cigarette users as respondents who had smoked 100 or more cigarettes during their lifetime and currently smoked every day or some days (questions S1A1 = 1, and S1A1a = 1 or 2, in Study A). We calculated the distributions of current cigarette users in Study A and in the 2017 NHIS, calculated standard errors for both, and performed comparisons using Bonferroni statistical tests. Only the Asian and Other race groups showed statistically significant differences (p-values smaller than 0.01). The two groups of respondents were similar with regard to other characteristics.
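A minimal sketch of this type of comparison, assuming vectors of estimated proportions and their standard errors for the same demographic groups from the two surveys (hypothetical inputs, not the actual Study A or NHIS values):

```python
import numpy as np
from scipy import stats

def bonferroni_proportion_tests(p1, se1, p2, se2, alpha=0.05):
    """Two-sided z-tests comparing group proportions from two surveys,
    with a Bonferroni correction for the number of groups tested."""
    p1, se1 = np.asarray(p1, float), np.asarray(se1, float)
    p2, se2 = np.asarray(p2, float), np.asarray(se2, float)
    z = (p1 - p2) / np.sqrt(se1 ** 2 + se2 ** 2)
    pvals = 2 * stats.norm.sf(np.abs(z))   # two-sided p-values
    return pvals < alpha / len(p1)         # Bonferroni-adjusted decisions
```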
Based on our analyses, we concluded that there could be potential nonresponse bias
because of low response rates, especially when Study A respondents were analyzed using the
original panel weights. Although distributions of Study A respondents and nonrespondents are
statistically different, we did not find these differences to be of practical importance. Our
adjustments to the panel weights indicate that the use of adjusted weights (i.e., Study A final analysis weights) reduces the potential nonresponse bias when analyzing data from the survey.

B.4   Tests of Procedures
Focus groups (OMB Control No. 0910-0497), involving 49 adult tobacco users with varying demographic characteristics, were used to develop and refine protocols for recruiting panel members and maintaining their interest and involvement during their tenure in the panel.
This included issues such as length of time in the panel, number and frequency of study requests,
panel member incentive strategies, and various panel maintenance methods. Participants were
asked to provide feedback on possible approaches and to complete several sample questionnaire
items on two tablet computers being considered for the panel. The focus group sessions explored
the following topics:
•  General reactions to the creation of a panel of tobacco users, including willingness to participate and concerns participants may have
•  Willingness to commit for a 2- or 3-year period, and preferences of participants
•  Reaction to the planned monthly contacts to maintain participant interest in the panel
•  Information needed to make an informed decision to join the panel, and how the information should be delivered
•  Reaction to proposed incentives, including cash incentives, tablet computers, and other possible cash or non-cash incentives for study participation
•  Feedback on elements of the equipment agreement associated with the tablet computers
•  Additional methods and materials that could be used to maintain interest in the panel

Feedback from focus group participants (OMB Control No. 0910-0497), as well as discussions with an external consultant on Web panel data collection and with senior contractor methodology, survey, and IT personnel, informed the final design recommendations for the panel. Key recommendations adopted for the panel included:
•  Implementing a cash-based incentive protocol rather than a tablet-based one for most panelists;
•  Utilizing a mixed-mode design to provide an alternative data collection option for those sample members who are technology averse or who will not (or cannot) access the Internet; and
•  Subsampling of nonrespondents to address potential coverage and bias concerns through the limited offer of a study tablet computer (for use while in the panel).


More extensive testing of the panel procedures was conducted through the initial panel
implementation period described in Section B.2.2. The initial panel implementation period
provided an opportunity for testing all field interviewer training protocols, data collection
systems, and panel screening and recruitment protocols. FDA and its contractor remain
committed to continuous improvement throughout the life of the panel.
B.5   Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The sample design for the panel was developed by senior statisticians in the contractor's organization, in consultation with FDA statisticians. Contact information for the statistical consultants and FDA statisticians is provided below.
Karol Krotki, PhD
Senior Research Statistician

RTI International
Division of Statistical and Data Sciences
701 13th St. NW, Suite 750
Washington, DC 20005-3967
Ph. 202-728-2485

Patrick Chen, PhD
Senior Research Statistician

RTI International
Division of Statistical and Data Sciences
3040 Cornwallis Rd
Research Triangle Park, NC 27709
Ph. 919-541-6309

Antonio Paredes
Statistician

Food and Drug Administration
Center for Tobacco Products
Office of Science
Division of Population Health Science
10903 New Hampshire Ave
Silver Spring, MD 20993
Ph. 301-796-3866

Nikolas Pharris-Ciuej
Statistician

Food and Drug Administration
Center for Tobacco Products
Office of Science
Division of Population Health Science
10903 New Hampshire Ave
Silver Spring, MD 20993
Ph. 301-796-8875

As discussed in Part A, to inform the design of the panel recruitment and retention
strategies, the contractor also engaged the services of a Web survey panel expert in the research community. The consultant participated in discussions with the contractor to review focus group
findings (OMB Control No. 0910-0497) discussed above and provided feedback on strategies for
recruiting and engaging panel members long-term. Consultant contact information is provided
below.

Scott Crawford
Founder, Chief Executive Officer

Survey Sciences Group, LLC
950 Victors Way, Suite 50
Ann Arbor, Michigan 48108
Ph. 734-527-2150


REFERENCES
Ajzen, I. 1991. The theory of planned behavior. Organizational Behavior and Human Decision
Processes, 50(2), pp.179-211.
Armstrong, J. Scott. 1975. Monetary Incentives in Mail Surveys. Public Opinion Quarterly, 39,
pp. 111–116.
Baker, R., Blumberg, S., Brick, M., Couper, M., Courtright, M., Dennis, J. M., Dillman, D.,
Frankel, M., Garland, P., Groves, R., Kennedy, C., Krosnick, J. and Lavrakas, P. 2010.
AAPOR Report on Online Panels. Public Opinion Quarterly, 74 (4), pp.711–781.
Baumgartner, Robert and Pamela Rathbun. 1997. Prepaid Monetary Incentives and Mail Survey Response Rates. Paper presented at AAPOR, Norfolk, VA.
Bethlehem, J. (2002). Weighting Nonresponse Adjustments Based on Auxiliary Information. In
Survey Nonresponse. R.M. Groves, D.A. Dillman, J.L. Eltinge, & R.J.A. Little, eds. pp.
275-278. New York: John Wiley and Sons.
Biemer, P. P., & Lyberg, L. (2003). Introduction to Survey Quality. Hoboken, NJ: Wiley.
Biner, P. M. and Kidd, H. J., 1994. The Interactive Effects of Monetary Incentive Justification
and Questionnaire Length on Mail Survey Response Rates. Psychology and Marketing
11:483–492.
Clark, S. M. and Mack, S.P. 2009. SIPP 2008 Incentive Analysis. Paper Presented at the Federal
Committee on Statistical Methodology Research Conference, Washington, D.C.
Creighton, K., King, K. and Martin, E. 2007. The Use of Monetary Incentives in Census Bureau
Longitudinal Surveys. Survey Methodology Research Report Series N2007-2.
Washington, DC: U.S. Census Bureau.
Cunradi, C. B., Moore, R., Killoran, M., and Ames, G. 2005. Survey Nonresponse Bias among
Young Adults: The Role of Alcohol, Tobacco, and Drugs. Subst Use Misuse 40(2): 171–
85.
DeBell, M., Krosnick, J. and A. Lupia 2010. Methodology Report and User’s Guide for the
2008-2009 ANES Panel Study. Palo Alto, CA and Ann Arbor, MI: Stanford University
and the University of Michigan.
Dillman, D. A. 2000. Mail and Internet Surveys: The Tailored Design Method, 2nd edition. New
York: Wiley.
Dillman, D. A., 2007. Mail and Internet Surveys: The Tailored Design Method, 2nd edition. 2007
Update with New Internet, Visual and Mixed-mode Guide. New York: Wiley.


Folsom, R. E., & Singh, A. C. (2000). The generalized exponential model for sampling weight
calibration for extreme values, nonresponse, and poststratification. In Proceedings of the
American Statistical Association, Survey Research Methods Section, pp. 598-603.
Alexandria, VA: American Statistical Association.
Fox, R.J., Crask, M.R., and Kim, J. 1988. Mail Survey Response Rate: A Meta-analysis of
Selected Techniques for Inducing Response. Public Opinion Quarterly, 52, 467–491.
Groves, R. M., Singer, E., and Corning, A. 2000. Leverage-Saliency Theory of Survey
Participation - Description and an Illustration. Public Opinion Quarterly 64(3): 299–308.
Groves, R., & Peytcheva E. (2008). The impact of nonresponse rates on nonresponse bias: A
meta-analysis. Public Opinion Quarterly, 72(2), 167-189.
Heberlein, T. A. and Baumgartner, R. 1978. Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature. American Sociological Review 43:447–462.
Iannacchione, V. G. (2011). The changing role of address-based sampling in survey research.
Public Opinion Quarterly, 75(3), 556–575.
James, T. L. 1997. Results of Wave 1 Incentive Experiment in the 1996 Survey of Income and
Program Participation. Proceedings of the Survey Research Methods Section of the
American Statistical Association, pp.834–839.
Kish, L. (1965). Survey Sampling. New York: John Wiley and Sons.
Lantz, P. M. 2003. Smoking on the Rise among Young Adults: Implications for Research and Policy. Tobacco Control 12 (Suppl I): i60–i70.
Lengacher, J., Sullivan, C., Couper, M. P. and R. Groves. 1995. Once Reluctant, Always Reluctant? Effects of Differential Incentives on Later Survey Participation in a Longitudinal Survey. Proceedings of the American Statistical Association, Survey Research Methods Section, pp. 1029–1034.
Levine, S. and Gordon, G. 1958. Maximizing Returns on Mail Questionnaires. Public Opinion
Quarterly, 22:568-75.
Lin, I. F., & Schaeffer, N. (1995). Using survey participants to estimate the impact of
nonparticipation. Public Opinion Quarterly, 59, 236-258.
Linsky, A. 1975. Stimulating Responses to Mailed Questionnaires: A Review. Public Opinion
Quarterly, 39, pp. 82–101.


Mack, S., Huggins, V., Keathley, D. and Sundukchi, M. 1998. Do Monetary Incentives Improve
Response Rates in the Survey of Income and Program Participation? Proceedings of the
American Statistical Association, Survey Research Methods Section, 529–534.
McMichael, J., Ridenhour, J., & Shook-Sa, B. 2008. A robust procedure to supplement the
coverage of address-based sampling frames for household surveys. Proceedings of the
American Statistical Association, Section on Survey Research Methods, 4329–4335.
Peytcheva, E., & Groves, R. M. (2009). Using variation in response rates of demographic
subgroups as evidence of nonresponse bias in survey estimates. Journal of Official
Statistics, 25(2), 193–201.
Poynter, R. and P. Comley. 2003. Beyond Online Panels. Proceedings of the ESOMAR Technovate Conference. Amsterdam: ESOMAR.
Rodgers, W. 2002. Size of Incentive Effects in a Longitudinal Study. Proceedings of the American Association for Public Opinion Research 2002: Strengthening Our Community, Section on Survey Research Methods.
RTI International. 2010. SUDAAN Release 10. Research Triangle Park, NC: RTI International.
Schiller, J. S., Lucas, J. W., Peregoy, J.A. 2012. Summary health statistics for U.S. adults:
National Health Interview Survey, 2011. National Center for Health Statistics. Vital
Health Stat 10(256).
Seltzer, C. C., R. Bosse and A. J. Garvey 1974. Mail Survey Response by Smoking Status.
American Journal of Epidemiology 100(6): 453–457.
Singer, E., Van Hoewyk, J. and Maher, M. P. 1998. Does the Payment of Incentives Create
Expectation Effects? Public Opinion Quarterly, 62: 152–64.
TUS-CPS, 2010-2011. Public Use Dataset updated May 2011;
http://thedataweb.rm.census.gov/ftp/cps_ftp.html#cpssupps.
U.S. Census Bureau 2013. Computer and Internet Use in the United States. U.S. Census Bureau
publication P20-569. http://www.census.gov/prod/2013pubs/p20-569.pdf
Vestbo, J. and Rasmussen, F. V. 1992. Baseline Characteristics Are Not Sufficient Indicators of Non-Response Bias in Follow-up Studies. Journal of Epidemiology and Community Health 46(6): 617–619.
Yu, J. and H. Cooper, 1983. A Quantitative Review of Research Design Effects on Response
Rates to Questionnaires. Journal of Marketing Research 20: 36-44.
Zickuhr, K. 2013. Who’s not Online and Why. Washington, DC: Pew Research
Center; http://www.pewinternet.org/2013/09/25/whos-not-online-and-why/
