National Panel of Tobacco Consumer Studies

OMB: 0910-0815

TOBACCO USER PANEL

SUPPORTING STATEMENT PART B

0910-0815



TABLE OF CONTENTS

Exhibits

Exhibit B.1-1. Sample Sizes in Sampling Domains
Exhibit B.1-2. Target Panel Response Rates
Exhibit B.1-3. Relative Standard Errors/Power to Compare Prevalence Estimates
Exhibit B.1-4. Sample Sizes for Yearly Sample Replenishment


Part B: Collection of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

This section describes the sample design for the panel, including the four-stage sample design, sample selection at each stage, design assumptions, target sample sizes, and precision and statistical power. The section also describes sample replenishment plans for the panel.

B.1.1 Overview of the Sample Design

The target population for the panel is tobacco users aged 18 years and older living in housing units and noninstitutionalized group quarters in the 50 states and the District of Columbia. A stratified four-stage sample design will be employed, with a goal of recruiting 4,000 adult tobacco users into the sample panel. Eighty (80) primary sampling units (PSUs) will be selected at the first stage; 3 census block groups (CBGs) within each selected PSU at the second stage; approximately 152 housing units (HUs) within each selected CBG at the third stage; and a maximum of one adult tobacco user per eligible HU at the fourth stage. To recruit 4,000 adult tobacco users for the panel, we estimate that 36,390 HUs must be selected for screening and recruitment. We will screen additional households from a reserve sample of 12,091 addresses if eligibility and/or response rates are lower than expected. Full details of the sample design are presented in Attachment 5.
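As a rough plausibility check on these counts, the screening pipeline can be traced through the target rates in Exhibit B.1-2 together with a household eligibility rate of about 15%, which is implied by Exhibit B.1-4 (1,555 eligible of 10,285 screened HUs). The sketch below is a back-of-the-envelope calculation under those assumptions; the variable names are ours.

```python
# Back-of-the-envelope check: selected addresses -> recruited panel members.
# The occupancy, screening, and recruitment rates are the targets from
# Exhibit B.1-2; the ~15% eligibility rate is inferred from Exhibit B.1-4.
selected_hus     = 36_390
occupied_rate    = 0.95              # Occupied Household Rate (A)
screening_rate   = 0.85              # Screening Response Rate (B)
eligibility_rate = 1_555 / 10_285    # implied share of screened HUs with an eligible user
recruitment_rate = 0.90              # Recruitment Rate (C)

recruits = (selected_hus * occupied_rate * screening_rate
            * eligibility_rate * recruitment_rate)
print(f"Expected recruits: {recruits:,.0f}")  # ~3,998, i.e., the 4,000 target
```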

The main goal of the design is to select a national sample of tobacco users representing the full range of behavior patterns, knowledge, and attitudes in that population. Another objective is to design a sample that is efficient and cost-effective; this motivates the strategies for stratification, stratum allocation, and PSU design.

B.1.2 Stratified Four-stage Sample Design and Sample Selection

The proposed four-stage sample design and the probability proportional to size (PPS) selection method applied at the first and second stages, where the number of tobacco users is used as the size measure, will ensure a near equal probability selection method (epsem) within each of the four design domains:

  • 18- to 25-year-olds, low socioeconomic status (SES)

  • 18- to 25-year-olds, non-low SES

  • 26 years of age or older, low SES

  • 26 years of age or older, non-low SES

The epsem sample will minimize the unequal weighting effect (UWE), thereby maximizing the precision of estimates for those domains. In addition, selecting the same number of CBGs within a PSU and equally allocating HU samples to each CBG will provide for a consistent workload for each field interviewer in every PSU and more efficient field management.

Sampling PSUs at the First Stage: At the first stage, a sample of 80 PSUs in the 50 states and Washington, DC, will be drawn. Traditionally, PSUs have been defined as a county or group of counties, because the county is the administrative unit for which Census data are readily available. However, counties vary greatly in population size (from 82 to 9,818,605 across 3,143 counties) and in the estimated number of tobacco users (from 17 to 1,074,654). As a result, some large counties would enter the PSU sample with certainty, and certainty PSUs could cause more variation in sample weights. To avoid the undesirable effects of this large variation in population size and number of estimated tobacco users, we will create customized PSUs by combining small contiguous counties and splitting large counties based on the number of estimated tobacco users in each county. Small counties will be combined so that each PSU has at least 2,000 tobacco users, while large counties with more than 31,000 tobacco users will be divided into areas comprising census tracts within the county. Strata will be defined based on various factors related to tobacco use, as well as geography. The 80 PSUs will then be allocated proportionally to the strata. Within each stratum, PSUs will be selected with PPS, the size measure being the estimated number of adult tobacco users in the PSU.
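The PPS draws at the first and second stages can be implemented with systematic PPS selection once the frame is sorted and size measures are attached. The sketch below illustrates the selection step only; the PSU labels and size measures are simulated, and the routine assumes any certainty selections (units larger than the sampling interval) have already been removed.

```python
import random

def pps_systematic(units, sizes, n):
    """Systematic PPS sample of n units; each unit's selection probability
    is n * size / total_size (assumes no unit exceeds the interval)."""
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    picks, cum, i = [], 0.0, 0
    for unit, size in zip(units, sizes):
        cum += size
        while i < n and start + i * interval <= cum:
            picks.append(unit)
            i += 1
    return picks

# Simulated PSUs with estimated tobacco-user counts as the size measure,
# bounded by the 2,000 / 31,000 limits used in the PSU construction.
psus = [f"PSU-{k:03d}" for k in range(1, 401)]
sizes = [random.randint(2_000, 31_000) for _ in psus]
first_stage_sample = pps_systematic(psus, sizes, 80)
```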

Sampling CBGs at the Second Stage: At the second stage, CBGs will be sampled within the PSUs selected from the first stage. A CBG is a cluster of census blocks generally containing between 600 and 3,000 people, with an average size of about 1,500 people. It is the smallest geographic entity for which the decennial census and American Community Survey (ACS) tabulate and publish sample data. We will sample three CBGs per PSU using the PPS method, with the size measure being the estimated number of adult tobacco users in a CBG.

The size measure, namely the number of tobacco users in a PSU or a CBG, is not readily available. A predictive model, shown below, was developed to estimate the tobacco use prevalence rate for each CBG using National Adult Tobacco Survey data including race/ethnicity and SES. The estimated CBG-level tobacco user rate can be used with the population counts in each CBG to estimate the number of tobacco users for each CBG. The number of estimated tobacco users for each CBG can be aggregated to estimate the number of tobacco users for census tracts and counties.

We fit a logistic regression model, using smoking status as the dependent variable and the Census and ACS block group level variables in Tables 1 and 2 as the independent variables. The model was fit with the SAS LOGISTIC procedure and has the form

log(p / (1 - p)) = β0 + β1x1 + β2x2 + ... + βnxn,

where p is the probability that a person uses tobacco and x1, ..., xn are the n independent variables from Tables 1 and 2 below.

Table 1. 2010 U.S. Census Data

2010 Census Variable | Variable Type
Population count of the block group | Continuous
Household count of the block group | Continuous
African-American proportion of the block group | Continuous
Hispanic proportion of the block group | Continuous
Rural proportion of the block group | Continuous
Median age of the block group | Continuous
Children per household of the block group | Continuous
Adults per household of the block group | Continuous
Total housing units of the block group | Continuous
Occupied household proportion of the block group | Continuous
Occupied households with a mortgage proportion of the block group | Continuous



Table 2. 2006-2011 American Community Survey (ACS) 5-year Summary File

2006-2011 ACS Variable | Variable Type
Proportion of population with less than a high school degree in the block group | Continuous
Proportion of population with a college degree or higher in the block group | Continuous
Proportion of the population that lived in the same house one year ago in the block group | Continuous
Proportion never married in the block group | Continuous
Proportion now married in the block group | Continuous
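The document reports fitting this model with the SAS LOGISTIC procedure; the sketch below illustrates the same two steps (fit the person-level model, then score each CBG and multiply by its adult population) in Python, with simulated data and placeholder variable names, purely to make the workflow concrete.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated person-level file: tobacco use status plus a few of the
# block-group covariates from Tables 1 and 2 (placeholder names).
n = 5_000
X = pd.DataFrame({
    "pct_less_than_hs": rng.uniform(0, 0.5, n),
    "pct_college_plus": rng.uniform(0, 0.6, n),
    "median_age":       rng.uniform(25, 55, n),
    "rural_prop":       rng.uniform(0, 1, n),
})
xb = -1.5 + 2.0 * X["pct_less_than_hs"] - 1.0 * X["pct_college_plus"]
y = rng.binomial(1, 1 / (1 + np.exp(-xb)))   # simulated smoking status

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Score simulated CBGs: predicted prevalence times adult population count
# gives the estimated number of tobacco users (the PPS size measure).
cbgs = pd.DataFrame({
    "pct_less_than_hs": rng.uniform(0, 0.5, 100),
    "pct_college_plus": rng.uniform(0, 0.6, 100),
    "median_age":       rng.uniform(25, 55, 100),
    "rural_prop":       rng.uniform(0, 1, 100),
    "adults":           rng.integers(400, 2_500, 100),
})
cbgs["est_tobacco_users"] = (
    model.predict(sm.add_constant(cbgs[X.columns])) * cbgs["adults"])
```

The CBG-level estimates can then be summed to census tracts and counties, as described above.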



To evaluate whether oversampling geographic areas with a higher density of tobacco users could significantly improve cost efficiency without unduly reducing design efficiency, the contractor conducted several simulation experiments of oversampling tobacco-user-concentrated PSUs and/or block groups. The simulations showed that oversampling block groups, or both PSUs and block groups, achieved small cost savings but carried an associated statistical penalty in lost design efficiency. Because the gain from oversampling is relatively small and comes at the cost of design efficiency, a decision was made not to oversample PSUs and/or CBGs with higher prevalence rates.

Sampling Housing Units at the Third Stage: The third stage will involve selecting housing units within the selected second-stage CBGs. The sample of households will be drawn from the contractor’s in-house, nationally representative Enhanced Address-based Sampling (ABS) listing of all addresses in the United States. The foundation of this high-quality ABS frame is sourced from commercially available versions of the U.S. Postal Service’s (USPS) Computerized Delivery Sequence (CDS) file. The CDS file is available through nonexclusive license agreements with qualified private companies and includes variables such as vacancy/seasonal status, address type (city-style, P.O. box, etc.), single/multifamily, and high-rise. The contractor supplements the CDS file with the No-Stat file, which contains over 9 million primarily rural mailing addresses. The union of these files accounts for all postal delivery points, giving near-complete coverage of U.S. addresses (Iannacchione, 2011). The contractor licenses both files from one of only two nationally qualified vendors and receives monthly updates.

The quality of the national ABS frame is enhanced by appending ancillary information from public and private sources, including geographic and demographic data from sources such as the U.S. Census Bureau, U.S. Department of Agriculture, National Oceanic and Atmospheric Administration, and U.S. Bureau of Labor Statistics, and hundreds of person-level characteristics sourced from private databases such as Acxiom, updated monthly. These data include elements for each person in the household, including name, age, child age range, race/ethnicity, and SES data such as education and income. There is also a household size variable modeled by Acxiom. Addresses have been geocoded into census geography to develop area information. This allows aggregate neighborhood information (county, zip code, tract, census block group, block) to be created based on the variables collected in the American Community Survey and the Census.

ABS has emerged as a high-coverage, cost-effective sampling frame for in-person, mail, and multimode surveys. It is a much cheaper alternative to the traditional counting and listing method. The ABS coverage in the majority of CBGs is high; however, the ABS coverage is expected to be low in rural CBGs. We will estimate the expected ABS coverage rate for each sampled CBG, calculated as the ratio of the number of city-style mailing addresses on the ABS list to the estimated number of HUs in the CBG. If the expected ABS coverage is greater than 50%, the ABS list will be supplemented with addresses identified through the Check for Housing Units Missed (CHUM) procedure. The CHUM procedure, developed at RTI (McMichael et al., 2008), is similar in concept to the Half-open Interval procedure in that the interviewers search the selected HU and the prescribed area up to the next HU on the frame, whether or not the next HU is sequentially next on the list. Interviewers also check a subset of sample blocks so that housing units in blocks with no city-style addresses on the Computerized Delivery Sequence have a chance of selection. CHUM takes geocoding error into account and gives every housing unit one chance of selection with known probability. CHUM is most effective when monitored and conducted in a separate field visit from the survey interviewing, but it is far less costly than enhanced listing because only small portions of the geographical areas are searched, while still giving all housing units a chance of selection through the corresponding sample HUs and subsampled blocks. And, because it is conducted after HUs are selected and not at the frame-building stage, the results are more up to date. The CHUM instrument is included in Attachment 1.

The improved list will serve as the frame for CBGs having coverage rates at or above the coverage threshold. For CBGs having ABS coverage less than the coverage threshold, traditional field enumeration, that is, counting and listing, will be used to develop the HU frame. We estimate that ABS and the CHUM will be used in approximately 90% of the CBGs, and counting and listing will be used in the remaining 10% of CBGs. On average, 152 HUs will be selected using a systematic random sampling method from each CBG.
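Within a CBG, the equal-probability systematic draw of roughly 152 addresses is straightforward; the sketch below assumes a hypothetical ordered address frame.

```python
import random

def systematic_sample(frame, n):
    """Equal-probability systematic sample of n items from an ordered frame."""
    interval = len(frame) / n
    start = random.uniform(0, interval)
    return [frame[int(start + i * interval)] for i in range(n)]

addresses = [f"ADDR-{k:04d}" for k in range(1, 1_201)]  # hypothetical CBG frame
hu_sample = systematic_sample(addresses, 152)
```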

Sampling Adult Tobacco Users at the Fourth Stage: At the final stage, we will sample at most one adult tobacco user from an eligible HU into the panel. The sample of 4,000 adult tobacco users will be distributed disproportionately to four sampling strata called domains. The four domains are formed by the cross-classification of two age groups (18–25, 26 or older) and two SES categories (low SES, non-low SES). The sample allocation is displayed in Exhibit B.1‑1.

Exhibit B.1-1. Sample Sizes in Sampling Domains

Domain | Proportionate Sample(a): N (prop) | Target Sample: N (prop)
18–25, Low SES(b) | 336 (8%) | 416 (10%)
18–25, Non-Low SES | 330 (8%) | 624 (16%)
26+, Low SES | 1,305 (33%) | 1,184 (30%)
26+, Non-Low SES | 2,029 (51%) | 1,776 (44%)
18–25 | 666 (17%) | 1,040 (26%)
26+ | 3,334 (83%) | 2,960 (74%)
Low SES | 1,641 (41%) | 1,600 (40%)
Non-Low SES | 2,359 (59%) | 2,400 (60%)
Total | 4,000 (100%) | 4,000 (100%)

a Proportionate sample size was estimated from the 2010 TUS-CPS.
b Low SES is defined as household income less than $30,000.

We will screen household members for SES (combined household income less than $30,000, or greater than or equal to $30,000), age, and tobacco use status.

As shown in Exhibit B.1-1, to achieve the target sample sizes in the four domains, adult tobacco users aged 18–25 will be oversampled, in particular users aged 18–25 with non-low SES, while tobacco users aged 26 or older will be undersampled. The probability of an adult tobacco user being selected for the panel therefore differs by domain and is predetermined: a young adult user with non-low SES has the highest probability of selection, and an older adult tobacco user with low SES has the lowest. Poisson sampling will be used to determine the rate at which persons in each domain are selected. These sampling rates will be continuously monitored and adjusted during data collection to ensure that the target number of tobacco users in each domain is obtained with a minimum amount of screening. When smokeless tobacco users are identified during screening, they will be assigned higher selection probabilities than other tobacco users in the same domain, thereby increasing their chance of being selected. As noted earlier, no more than one tobacco user will be selected from an eligible housing unit. A sketch of the selection step follows.
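Because Poisson sampling treats each screened person as an independent Bernoulli trial, within-household selection reduces to a single probability lookup. The sketch below uses placeholder retention rates, not the production values, which will be tuned during data collection to hit the Exhibit B.1-1 targets, and an illustrative multiplier for smokeless users.

```python
import random

# Placeholder domain retention rates; the production rates are monitored
# and adjusted during data collection.
RATES = {
    ("18-25", "low"):     0.75,
    ("18-25", "non-low"): 0.95,
    ("26+",   "low"):     0.45,
    ("26+",   "non-low"): 0.40,
}
SMOKELESS_BOOST = 1.5  # illustrative multiplier for smokeless users

def select_for_panel(age_group, ses, smokeless=False):
    """Poisson (independent Bernoulli) selection of a screened tobacco user."""
    p = RATES[(age_group, ses)]
    if smokeless:
        p = min(1.0, p * SMOKELESS_BOOST)
    return random.random() < p
```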

B.1.3 Recruitment Response Rates

Exhibit B.1-2. Target Panel Response Rates

Response Rate | Percentage
Occupied Household Rate (A) | 95
Screening Response Rate (B) | 85
Recruitment Rate (C) | 90
Household Initiation Rate (D) | 99
Experimental/Observational Study Response Rate (E) | 90
Cumulative Response Rate (A × B × C × D × E) | 65

We understand that for the survey data results to be credible, generalizable, and able to withstand scientific scrutiny, high response rates must be obtained. Our recruitment protocol is designed to achieve higher response rates than online panels that recruit by telephone or use opt-in methodology.

Exhibit B.1-2 shows the targeted response rates at each stage in the process using our proposed technical approach. The occupied household and screening rates are based on our experience in conducting eligibility screening for the National Survey of Drug Use and Health (NSDUH), which achieves an annual screening response rate of 88%. Our targeted screening rate for the panel is slightly lower, given a longer screening questionnaire than used in NSDUH. The recruitment rate is based on our experiences recruiting sample members for longitudinal studies such as the National Longitudinal Study of Adolescent Health and interview rates on NSDUH and other national surveys that range from 80% to 90%.

The Occupied Household Rate (A) is the proportion of sampled dwelling units that are occupied by residents. The Screening Response Rate (B) is the proportion of occupied households successfully screened as eligible or ineligible. The Recruitment Rate (C) is the proportion of eligible households that agree to join the panel. The Household Initiation Rate (D) is the proportion of eligible households that follow through with all enrollment requirements (e.g., navigate to the Web portal and complete the acknowledgement). The Experimental/Observational Study Response Rate (E) is the response rate for a given study. The cumulative response rate, the product of these five rates (0.95 × 0.85 × 0.90 × 0.99 × 0.90), is approximately 65%.

B.1.4 Precision and Statistical Power

Based on the target sample sizes presented in Exhibit B.1-1, the relative standard error (RSE) and the minimum power to detect a 7% difference at the 0.05 significance level for proportion estimates within various domains are estimated and displayed in Exhibit B.1-3. To illustrate, we use three proportion estimates (p = 0.1, p = 0.3, and p = 0.5). The average RSE over all proportions in Exhibit B.1-3 is 6.5%, which is reasonably good for a survey with a total sample size of 4,000. Similarly, the power to detect a 7% difference within the SES, age group, and sex domains is high. However, the statistical power within the race/ethnicity and tobacco product domains is considered low because of the smaller sample sizes in some of those categories.


Exhibit B.1-3. Relative Standard Errors/Power to Compare Prevalence Estimates

Domain | Sample Size(a) | Estimated Deff(b) | Effective Sample Size | RSE (p = 0.1) | RSE (p = 0.3) | RSE (p = 0.5) | Minimum Power(c) of Detecting a 7% Difference within Domain (p = 0.5)

SES Status
  • Low SES | 1,440 | 1.3 | 1,108 | 9.0% | 4.6% | 3.0% | 95.3%
  • Non-Low SES | 2,160 | 1.3 | 1,662 | 7.4% | 3.7% | 2.5% |

Age Group
  • 18–25 | 936 | 1.5 | 624 | 12.0% | 6.1% | 4.0% | 75.9%
  • 26–44 | 1,241 | 1.5 | 827 | 10.4% | 5.3% | 3.5% |
  • 45+ | 1,423 | 1.5 | 949 | 9.7% | 5.0% | 3.2% |

Race/Ethnicity
  • NH-Black | 592 | 1.5 | 395 | 15.1% | 7.7% | 5.0% | 44.3%
  • NH-Others | 2,586 | 1.5 | 1,724 | 7.2% | 3.7% | 2.4% |
  • Hispanic | 422 | 1.5 | 281 | 17.9% | 9.1% | 6.0% |

Sex
  • Male | 1,936 | 1.5 | 1,291 | 8.4% | 4.3% | 2.8% | 93.3%
  • Female | 1,664 | 1.5 | 1,109 | 9.0% | 4.6% | 3.0% |

Tobacco Product
  • Cigarette | 2,778 | 1.5 | 1,852 | 12.0% | 6.1% | 4.0% | 50.7%
  • Cigar | 759 | 1.5 | 506 | 10.4% | 5.3% | 3.5% |
  • Smokeless | 482 | 1.5 | 321 | 9.7% | 5.0% | 3.2% |

a Assuming a 90% response rate to the survey. Sample sizes for race/ethnicity, sex, and tobacco product were estimated from the 2010 TUS-CPS.
b Deff = design effect, which measures the loss of efficiency resulting from the use of cluster sampling and unequal selection probabilities, instead of simple random sampling.
c Differences in percentage estimates will be detected at the 0.05 level of significance. The minimum power shown applies to comparisons among the subgroups listed under each domain heading.
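The exhibit's RSE and power columns can be approximated from the effective sample sizes with standard formulas: RSE(p) = sqrt((1 - p) / (n_eff * p)) for a proportion p, and a two-sample z-test for the power to detect a difference between two domain estimates. The exhibit does not state the exact test conventions, so the sketch below, which assumes a two-sided test at alpha = 0.05, reproduces the figures only to within rounding.

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

def rse(p, n_eff):
    """Relative standard error of a proportion with effective sample size n_eff."""
    return sqrt((1 - p) / (n_eff * p))

def power_two_sample(p, diff, n1_eff, n2_eff, alpha=0.05):
    """Approximate power of a two-sided z-test for a difference in proportions."""
    se = sqrt(p * (1 - p) * (1 / n1_eff + 1 / n2_eff))
    return norm.cdf(diff / se - norm.inv_cdf(1 - alpha / 2))

print(f"{rse(0.1, 1108):.1%}")                           # ~9.0% (Low SES, p = 0.1)
print(f"{power_two_sample(0.5, 0.07, 1108, 1662):.1%}")  # ~95% (Low vs Non-Low SES)
```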

B.1.5 Panel Replenishment

We recognize that some panel members will leave the panel because of nonresponse at each wave of Web surveys, and we have assumed a 35% yearly attrition rate. To maintain a panel with a constant number of members and the baseline distribution of age group and SES, we will implement quarterly sample replenishment. Extra CBGs will be selected in each PSU when the CBG samples for the main panel are drawn, and one reserve CBG will be used each year for sample replenishment. The yearly sample sizes for replenishment, assuming the same recruitment response rates as in Exhibit B.1-2, are provided in Exhibit B.1-4 and will be allocated equally across the quarterly replenishments. (A 35% attrition rate applied to the 4,000-member panel corresponds to the 1,400 yearly recruits shown in Exhibit B.1-4.)

Exhibit B.1-4. Sample Sizes for Yearly Sample Replenishment

Sample | Sample Size
Selected HUs | 12,737
Occupied HUs | 12,100
Screened HUs | 10,285
Eligible HUs | 1,555
Selected Tobacco Users | 1,555
Recruited Tobacco Users | 1,400(a)

a Will be allocated to the four design domains to maintain the same age group and SES status distribution as for the baseline panel.

B.2 Information Collection Procedures

This section describes the procedures for panel recruitment and maintenance, including the weighting plan, panel screening, enrollment, and retention strategies, and efforts to maximize response rates.

B.2.1 Weighting Plan

This section describes the weighting plan for the main panel sample and the individual experimental and observational studies, taking into account the complex sample design, panel replenishment efforts, nonresponse, and attrition from the panel.

B.2.1.1 Weighting the Main Panel Sample

Sample weights will be needed to adjust for the sampling approach and nonresponse. They will be developed for every member of the main panel, reflecting the varying probability of selection discussed in Section B.1, and adjustments for unit nonresponse, coverage error, and extreme weight values. The weights will account for the disproportionate sampling of various subgroups of interest resulting from the sample design, and the bias that can be introduced by screening and interview nonresponse. These weights for the main panel members will be used in all subsequent studies after adjusting them for nonresponse at each study.

B.2.1.2 Weighting the Sample of the First Study

For the first study, the weights for main panel members will be adjusted for nonresponse. In addition, a poststratification adjustment can be implemented to compensate for potential coverage error, and an adjustment of extreme weights can be performed if needed.

B.2.1.3 Weighting the Sample of Subsequent Studies

For each subsequent study, sample weights will be developed for both cross-sectional and longitudinal data analyses.

  1. Cross-Sectional Analysis Weights—In developing the cross-sectional analysis weights for a study, any recent sample replenishment must be accounted for. Design weights will be calculated for each new sample member in the same manner as for the main panel sample. The final weights from the first or previous study sample, combined with the design weights for the recent sample replenishment, will serve as the initial weights for post-survey weight adjustments. These weights will be adjusted for nonresponse and coverage error, with an extreme weight adjustment applied if required. The fully adjusted weights can be used independently of prior studies for cross-sectional analysis at each study.

  2. Longitudinal Analysis Weights—In addition to the cross-sectional weights for each experimental and observational study, longitudinal weights may be developed for longitudinal and trend analyses. Longitudinal weights differ from cross-sectional weights in that they account for the joint probabilities of response or study combinations. For example, the first and second study longitudinal weights adjust by the joint probability or propensity of responding to both studies. Separate longitudinal weights will be calculated for comparing any two studies. Longitudinal weights can also be computed for simultaneously analyzing all studies or any combination of those studies together. We will work with the contractor to determine the desired set of longitudinal analysis weights as the experimental and observational studies are implemented.

The most current version of NCHS’ National Health Interview Survey (currently the 2016 survey) will be used as the source of control totals for the poststratification adjustment, which reduces coverage error and the variance of survey estimates. The WTADJUST procedure in SUDAAN (RTI, 2010) can be used for the nonresponse, poststratification, and extreme weight adjustments.
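WTADJUST implements a model-based calibration; as a simpler illustration of the same pipeline, the sketch below applies a weighting-class nonresponse adjustment followed by a poststratification ratio adjustment. The adjustment cells, control totals, and file layout are hypothetical.

```python
import pandas as pd

# Hypothetical panel file: design weight, response flag, nonresponse
# adjustment cell, and one poststratification dimension (age group).
df = pd.DataFrame({
    "design_wt": [250.0, 250.0, 400.0, 400.0, 400.0, 150.0],
    "responded": [1, 0, 1, 1, 0, 1],
    "nr_cell":   ["A", "A", "B", "B", "B", "A"],
    "age_grp":   ["18-25", "18-25", "26+", "26+", "26+", "18-25"],
})

# 1. Weighting-class nonresponse adjustment: within each cell, transfer
#    the weight of nonrespondents to the respondents.
cell_wt = df.groupby("nr_cell")["design_wt"].transform("sum")
resp_wt = (df["design_wt"] * df["responded"]).groupby(df["nr_cell"]).transform("sum")
df["nr_adj_wt"] = df["design_wt"] * (cell_wt / resp_wt) * df["responded"]

# 2. Poststratification: ratio-adjust respondent weights to external
#    control totals (hypothetical NHIS-style totals).
controls = {"18-25": 700.0, "26+": 1_200.0}
ps_wt = df.groupby("age_grp")["nr_adj_wt"].transform("sum")
df["final_wt"] = df["nr_adj_wt"] * df["age_grp"].map(controls) / ps_wt
```

An extreme-weight adjustment would then trim any final weights beyond a chosen bound and redistribute the trimmed weight within cells.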

B.2.2 Initial Implementation of the Panel

A phased approach to panel recruitment and implementation will be followed. During the initial implementation period (approximately the first six weeks), we will conduct testing of panel procedures for process improvement. This includes evaluating the materials, procedures, and systems that will be used to conduct the CHUM; screen and recruit panel members; review participation requirements and obtain informed consent for Web or mail participation; instruct participants on accessing and completing the baseline survey and subsequent experimental and observational studies via the panel Website or mail; and initiate participation in the panel. The initial implementation period will also evaluate procedures for equipping and training select eligible adult tobacco users with loaned tablet computers to facilitate Web survey access while they are in the panel. During this initial implementation period, a portion of the national ABS sample will be fielded across two sites with approximately 100 original addresses in each. The goal of this initial implementation period is to recruit 25 adult tobacco users to serve in the first cohort of the panel. These panel members will be retained in the panel, and data obtained will be retained for use, unless testing experiences during this period necessitate a change in panel procedures.

During the 6-week initial implementation period, both the mail and field screening protocols will be implemented. For the in-person household visits, field interviewers will use panel recruitment materials and protocols to visit sampled addresses, determine whether they serve occupied residential dwelling units, conduct the CHUM procedure, administer the field screening interview to identify eligible adult household members, and, if found, invite the selected eligible household member to join the panel. As part of this process, interviewers will administer the enrollment questionnaire to consenting panel members and train them on procedures for logging in and completing panel studies via the Web, including the initial baseline survey and future experimental and observational studies. Protocols for identifying and enrolling panelists who require a mail mode or for equipping select panelists with a loaned tablet computer to facilitate Web participation will also be followed.

The objectives of the testing during this initial implementation period will be to improve panel recruitment and implementation processes. This includes:

  • Examining the effectiveness of the recruitment materials and protocols in gaining cooperation and addressing questions that prospective panel members may have about their participation.

  • Identifying any software or hardware problems interviewers experience during the recruitment process, including adding missed housing units through the CHUM, doorstep screening of households, and administration of the enrollment questionnaire (in both English and Spanish) to recruited panel members.

  • Gauging the ease or difficulty with which respondents access and complete the baseline survey online, if participating via Web, with particular attention paid to the effectiveness of the training delivered by the interviewer and any usability issues panel members experience in logging into the panel Website and navigating through the Web survey application.

  • Testing the procedures for ensuring that panel members are Web-enabled, including being able to receive panel emails and other information.

  • Identifying respondent concerns about the informed consent protocol, incentive protocol, or other aspects of the panel recruitment process that may hinder long-term commitment. This includes concerns about the tablet equipment agreement if the panel member is being offered the loan of a tablet computer to facilitate Web access while in the panel.

  • Launching the first self-administered survey (the baseline survey) and monitoring responsiveness.

  • Evaluating the effectiveness of initial nonresponse prompting protocols.

At the conclusion of the initial implementation period, a telephone debriefing will be conducted with interviewers to discuss lessons learned, problems experienced in the field, and ways to mitigate them during the remainder of the panel recruiting effort. Information gathered will inform any needed refinements to the English and Spanish recruiting and screening protocols. If there are any changes to the protocol or the materials or survey instruments provided to the potential study participants, FDA will submit a nonsubstantive change request to OMB. If informed that the package is coming, OMB will clear it within a few days. As noted above, participants recruited during this initial implementation period will be retained in the panel, and data obtained will be retained for use, unless experiences during this period necessitate a change in panel procedures.

As noted above, the sample for the initial implementation period will be drawn from the larger sample of addresses selected for the panel. As long as there are no major problems with the recruitment process, the 25 eligible adults recruited during the initial panel implementation will remain part of the panel and be subject to the same study requests as all other panel members.

However, because they are being recruited approximately 2 months earlier than other panel members, this cohort can also be used, if needed, to pretest an advance version of study questionnaires to ensure that panel members are able to easily access and respond to survey requests. As noted above, if there are any changes to the protocol or the materials or survey instruments provided to the potential study participants, FDA will submit a nonsubstantive change request to OMB. If informed that the package is coming, OMB will clear it within a few days.

B.2.3 Panel Recruitment and Maintenance

An array of respondent materials has been developed to aid in the panel screening and recruitment process, including lead letters, a study brochure, consent forms, nonresponse letters, and various reminder postcards and other forms. These are provided in Attachment 3 (English-language versions) and Attachment 4 (Spanish-language versions). A custom-designed panel logo has also been created for use on all respondent materials and the study Website to help panel members easily recognize study correspondence and materials through a form of “brand” recognition.

B.2.3.1 Panel Screening and Recruitment

As noted in Section A.2.3, eligibility screening of prospective households for the panel will be conducted in two phases. Sampled households will first receive a brief mail screener designed to determine whether any age-eligible adult tobacco users reside in the home. The mail screening operation is designed to reduce the number of sampled addresses that require an in-person screening visit, thereby reducing data collection costs. The mail screening instrument will include a cover letter explaining the purpose of the survey contact and requesting that the household complete and return the questionnaire in the enclosed postage-paid envelope. The letter and mail screener will be printed in both English and Spanish. As a token of appreciation for completing the mail screening survey, the mail screening package will include a $2 prepaid cash incentive. Following this initial mailing, a postcard reminder will be sent to all nonresponding households to serve as both a reminder and a thank-you for completing the survey. A second mail screener questionnaire will be sent to any households that remain nonrespondent after the postcard reminder; this additional survey mailing will not include the $2 prepaid cash incentive. We anticipate achieving a response rate of at least 35% for the mail screening questionnaire.

An in-person field screening visit will be made by an interviewer to all households that report one or more eligible adult tobacco users in their completed mail screener. Additionally, all nonresponding households will be visited in an effort to complete the screening in-person and collect the data needed to assess eligibility. Households that complete the mail screener but report no adult tobacco users will be eliminated from the field screening operation. However, as a quality control check of the mail screening results, a 10% sample of these households will be selected for an in-person visit in an effort to validate the mail screening data. Households with eligible sample members identified during the quality control check will be considered for the panel. Field screening will be conducted using the interviewer’s tablet computer.

Lead letters will be mailed to all sampled addresses that require in-person screening, including those that do not return the mail screener. When making in-person visits, field interviewers will provide a copy of the lead letter (if needed) and the study brochure to legitimize their visit and help answer questions posed by the household. The lead letter and study brochure will be available in English and Spanish. As needed, interviewers will also present their letter of authorization to verify that they are working legitimately for the contractor. Field interviewers will leave “Sorry I Missed You” (SIMY) cards when no one is home at the time of their visit.

If a household is found to include one or more eligible adult members, the field screening application will select one eligible adult to receive the panel invitation. The interviewer will then administer the enrollment interview to verify the demographic and tobacco use data collected in the screener; review the panel participation requirements, including length of commitment, frequency of contact, and incentives participants can expect to receive while in the panel; obtain informed consent to join the panel; and collect detailed contact information to facilitate subsequent contact while in the panel. Data from the enrollment interview, specifically information about access to and comfort level with computers and the availability of Internet access in the home or on a personal computing device, will inform decisions about the mode of participation (Web or mail) that should be offered to the sampled adult. Once received by the contractor, the enrollment data will also be used to identify and select the subset of eligible adults who are not Internet-capable and are uninterested in mail mode participation, but who may be successful Web panelists if provided with a reliable means of accessing the Internet and thus the panel Website. Appointment reminder cards will be provided to eligible adults who are not immediately available but instead request a future appointment for the panel enrollment interview. Appointment cards will be available in English and Spanish.

Once enrolled, the panel member will be instructed by the interviewer on the procedures for accessing the panel Website (if participating via Web) and completing the baseline survey independently. The baseline survey includes a brief tutorial that will allow the panel member to practice answering sample survey questions. For those panelists who are enrolled as mail participants (maximum of 800 panelists), the baseline survey will be administered by the field interviewer using the interviewer’s tablet computer. The interviewer may also administer the survey to those panelists offered the loan of a tablet, if needed. All screening, enrollment, and baseline instruments will be available in both English and Spanish.

In the event reliable Internet connectivity cannot be established during the enrollment visits to the home, interviewers will be equipped with paper back-up copies of the baseline survey to record the panel member’s answers. This will allow the interviewer to complete the enrollment process with the panel member. The interviewer will subsequently transfer the information from the paper questionnaire into the Web survey and return the paper form to the contractor for receipt and secure storage.

As noted in Section A.2.1, we anticipate offering the loan of a Web-enabled tablet computer to a subset of the eligible adult tobacco users who are likely to be successful Web participants but who do not have the means—that is, no access to a computer, data-plan-enabled cellular device, or the Internet in their home. Providing access to a tablet computer while in the panel will allow these panel members to participate online. This is an important step in mitigating coverage and nonresponse bias and will help maximize the number of panelists who can receive stimuli (e.g., media images) electronically for the experimental and observational studies. We expect 400 panel members, or approximately 10% of the panel, will participate using a tablet computer loaned by the project. These adults will be identified from screening and enrollment data collected by the field interviewer and subsampled by contractor statisticians. We will enroll a maximum of 800 mail mode participants if we find a higher percentage of panel members express a preference for this mode.

Those eligible to receive the tablet computer offer will be contacted again in person to discuss the tablet option and attempt to complete the enrollment process. As part of this effort, the interviewer will complete the panel consent process, deliver the tablet, provide a short training on the use of the device, and have the panel member review and complete the equipment agreement form governing the use and care of the device and the protocol for returning the tablet at the end of their panel participation. The interviewer will instruct the panelist on how to log into the panel Website with the tablet computer and assist with completion of the baseline survey, as needed. The interviewer will be available to answer any questions the panel member may have about navigating the Website or completing the self-administered survey. All panel members will receive a “cheat sheet” that includes tips for accessing the panel Website. Additionally, panel members who receive a tablet computer loan will be provided with a tablet user “cheat sheet” that contains general use guidance. Both of these documents will be available in English and Spanish.

As described in Section A.2.3, interviewers will complete a short observation questionnaire at the conclusion of the enrollment process and upon leaving the panel member’s home. About one week after enrollment, panel members will also be contacted by the contractor to thank them for their participation in the panel. The contact mode will vary based on the panel member’s participation mode. For example, Web participants will receive an email or text message from the contractor, while mail mode participants will receive a thank you letter. Panel members who are using a loaned tablet will be called by the recruiting interviewer to thank them for enrolling and to help address any problems they may have experienced with the device.

B.2.3.2 Informed Consent Procedures

Verbal consent for the field screening interview will be obtained from a knowledgeable adult household member who agrees to respond to housing unit eligibility screening questions. Adult tobacco users who are selected for and agree to enroll in the panel will undergo a more comprehensive 3-step consent process. This will include (1) obtaining verbal consent for the enrollment interview, (2) obtaining verbal consent for the use of computer audio recorded interviewing (CARI) during portions of the enrollment interview, and (3) obtaining written consent for the 3-year panel participation (Web or mail). For those adults offered the loan of a tablet computer while in the panel, the consent process will also include review and completion of the equipment agreement form. Consent forms will be available in both English and Spanish.

Consent will also be obtained for each of the experimental and observational studies conducted with the panel. The Web questionnaires will include an introductory question that requires panelists to actively consent (answer “yes” or “no”) to participate in each study. Consent will be implied for mail participants who complete and return the hardcopy survey forms.

B.2.3.3 Interview Content

Two questionnaires will be used in the eligibility screening of prospective households. The mail screener, estimated at 2 minutes in length, will collect high-level information about the number of adult household members and their current use of cigarettes, cigars or little cigars, and smokeless tobacco. Enumeration of the household and selection of an eligible tobacco user will be accomplished as part of the subsequent in-person field screening visit. The field screening questionnaire, estimated at 10 minutes in length, will be used to verify that the address serves an occupied housing unit, determine if there are any missed housing units within the structure, enumerate adult members of the household, and determine whether any of the rostered adults are current tobacco users. The questionnaire will collect data on adult household members’ current tobacco use (cigarettes, cigars or little cigars, and smokeless tobacco) for panel eligibility purposes, and basic demographic information about each adult household member to inform sample selection, including the oversampling of young adults 18-25 years of age. The screening information will determine whether an adult will be selected from the household and invited to join the panel.

The enrollment questionnaire, estimated at 10 minutes in length, will collect data to verify eligibility information collected during screening, establish the panel participation mode (Web, mail, Web via loaned tablet), obtain informed consent, and maintain contact with the panel member over time. Data from the survey will also be used to inform future support needs and to establish important benchmarks for subsequent analyses, including examination of demographic characteristics of survey nonrespondents and panel members who attrite over time.

The baseline questionnaire, estimated at 10 minutes in length, will collect more detailed information about the panel member’s tobacco use history, which will establish important tobacco use benchmarks for subsequent analyses. The questionnaire will also collect additional information to gauge panel members’ comfort level with computers. The baseline survey will provide important covariates for nonresponse adjustments, to correct for bias due to wave nonresponse.

The interviewer observation questionnaire will capture the interviewer’s observations about the panelist’s enrollment process and risk of attrition from the panel. The questionnaire will also capture any questions or issues reported by panel members using loaned tablets.

Panelists will be asked to confirm or update their contact information, including name, address, telephone number, and contact information for up to two people named in the baseline survey as being able to help locate them if they move. These requests for contact information will be folded into experimental and observational studies or other forms of planned, non-survey contacts (see Section B.2.4).



Up to 8 experimental and observational studies will be conducted with the panel. The study questionnaires, which are expected to average 15–20 minutes in length and vary in content, will assess tobacco consumers’ responses to new and existing warning statements and labels on product packaging and in advertisements; communication about harmful and potentially harmful constituents in tobacco products; and perceptions of tobacco products, advertising, and marketing. The first of these studies (Study 1) is included in this clearance request. Study 1 focuses on consumer purchasing behavior, tobacco brands, and use of coupons and price promotions for tobacco products. The purpose of this study is to collect information about panel members’ tobacco product brand loyalty and more accurate measures of their tobacco product consumption. The remaining studies will be included in future OMB clearance requests.

Several additional questionnaires will be used to support the data collection operations. These include a Tracing/Nonresponse Follow-up Questionnaire to be completed by field interviewers who conduct in-person tracing or nonresponse follow-up of panel members, and brief telephone verification surveys for use in verifying the quality of field interviewer performance during the panel screening and enrollment operations.

Attachment 1 includes copies of the English-language versions of the screening, enrollment, baseline, interviewer observation, and Study 1 questionnaires. The questionnaires to be used for in-person tracing/nonresponse follow-up and telephone verification of field interviewer performance are also included. Attachment 2 provides copies of the Spanish-language questionnaires.

B.2.3.4 Spanish Translation

All questionnaires and panel member materials (e.g., lead letters, brochures, consent forms, FAQs) will be available in both English and Spanish. The contractor’s translation professionals are native speakers from Mexico, Peru, Venezuela, and other countries who are skilled at producing Spanish translations that are grammatically and terminologically accurate. The goal in performing the translations is to produce materials that remain true to the intent of the English documents yet provide the information to non-English speakers in both a linguistically and culturally appropriate way. A multistep, forward translation procedure that involved a careful review of the source documents, examination of key terminology and research of any unfamiliar vocabulary, translation, editing by a second native-speaking translation professional, proofreading, and final quality control review was used for the translation of panel participant materials.

In addition to providing Spanish-language translation services, contractor language specialists will also conduct the training of bilingual field interviewers, conduct quality control reviews of Spanish-language interviews, and support calls to the panel’s toll-free number from Spanish-speaking panel members.

B.2.4 Panel Maintenance

Maintaining frequent contact and providing readily available support to panel members throughout their time in the panel is critical to minimizing attrition and achieving high response rates for each study. The literature on panel maintenance is growing, but there is still much to be learned about optimal strategies for maintaining a healthy and productive panel, especially one that is focused on a subpopulation such as tobacco users. A comprehensive, multipronged approach is planned to maintain the panel and minimize attrition throughout the study period.

Panel maintenance activities, conducted in non-study months, will involve email, text, mail, or telephone correspondence from FDA or its contractor to ensure contact information is accurate, provide study updates and findings, or announce upcoming study requests.

An extensive support network will be deployed for the data collection and panel maintenance operations to assure respondents that we are invested in them and to provide prompt responses to time-sensitive survey requests. This includes:

  • Ongoing sampling support to select survey samples, replace sample members who attrite, and refresh the sample as needed.

  • Ongoing programmer support to maintain the survey control and case management systems, send e-mail and text prompts and automatic survey notifications by telephone, and troubleshoot system issues in the field.

  • Ongoing triage support available through e-mail or a toll-free number that rings to a help desk operated during normal business hours, and in-house referral to project staff who can address questions about the survey content or process, or to technical support staff who can respond to hardware, connectivity, or other technical issues.

  • Follow-up by contractor technical support personnel for more challenging problems that require further investigation.

  • In-person follow-up by field interviewers to help troubleshoot technical problems in person, including providing retraining on procedures for accessing and completing the Web surveys.

Increased support will also be provided to panel members who experience technical difficulties during the initial weeks of the panel or who are perceived by interviewers as being at greater risk of attrition, in particular due to perceived discomfort with the Internet, computers, or the initial self-administered survey task (baseline survey). Increased support will also be provided to the subset of panelists who are loaned tablet computers to facilitate online survey completion. This may include a telephone call or visit from the field interviewer within 2–3 days after recruitment to confirm that the panel member is able to log in to the panel Website successfully on his/her own and to inquire about any technical or usability issues. Panel members will also be provided with answers to frequently asked questions (FAQs), a troubleshooting guide (“cheat sheet”) that will allow them to investigate and resolve more common technical problems on their own, and contact information for contractor support personnel during recruitment. Copies of these items are included in Attachments 3 and 4 with other panel member materials. Additionally, links on the panel Website will provide ready access to the FAQs online as well as a quick means of e-mailing contractor support staff with questions or technical support inquiries.

Panel conditioning is a well-known risk in panel surveys and is one of the many factors that will be subject to regular measurement and study. Assessment will include regular study of the relationship between panelists’ responses and their tenure on the panel, and comparison of the responses of “veteran” panelists with those of panelists who joined more recently. For example, tenure can be used as a stratification factor in the sample selection process to restrict a specific study subsample to the more recent panel members.

One additional strategy to reduce panel conditioning is spreading the survey-taking load over all panel members. Such a strategy can be implemented by randomly selecting each subsample while keeping track of each member’s survey-taking activity. As the number and frequency of surveys completed by a given member increases, their probability of selection can decrease, a strategy that can be implemented using probability proportional to size (PPS) sampling, as sketched below. This strategy leads to known and measurable selection probabilities for each specific subsample.
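A minimal sketch of that idea, with a hypothetical 1/(1 + k) size measure that decays with each member's completed-survey count k:

```python
import random

def burden_aware_subsample(members, completed_counts, n):
    """Systematic PPS draw in which a member's size measure, 1/(1 + k),
    shrinks as their count k of completed surveys grows."""
    sizes = [1.0 / (1 + completed_counts[m]) for m in members]
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    picks, cum, i = [], 0.0, 0
    for member, size in zip(members, sizes):
        cum += size
        while i < n and start + i * interval <= cum:
            picks.append(member)
            i += 1
    return picks

panel = [f"P{k:04d}" for k in range(1, 4_001)]
counts = {m: random.randint(0, 8) for m in panel}   # hypothetical burden history
study_sample = burden_aware_subsample(panel, counts, 1_000)
```

Each member's selection probability is then n times their share of the total size measure, which is known and measurable, as required.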

At an early point in the planning process, the question arose as to whether to retain or drop panelists who stop using tobacco. Because of recidivism rates, it was decided to retain all enrolled panel members regardless of changes in their tobacco use patterns. Subsampling of panelists may be implemented, however, for specific experimental and observational studies that are intended solely for current users of one or more specific tobacco products.

B.3 Methods to Maximize Response Rates and Assess Non-Response Bias

B.3.1 Response Rates

The proposed incentive strategy, described in detail in Section A.9 and Attachment 6, is a key component of our overall approach to maximizing response rates. We believe that incentives are critical to recruiting the desired number of panel members, obtaining their commitment for the full 3-year period, and maintaining their active involvement in the experimental and observational studies while in the panel. Moreover, providing older, less technically savvy adults with an alternative means to comfortably participate (mail mode) is also important to gaining and maintaining cooperation long term. Additionally, loaning a select group of eligible adults a Web-enabled tablet computer for use while in the panel is a practical, effective, and reliable means of minimizing bias while maximizing response via Web to the planned studies.

Several additional strategies are planned for reducing nonresponse, the primary one being in-person recruitment of panel members, which we believe will lead to significantly higher recruitment rates than contacting sample members via mail, telephone, or Web. Others include:

  • Training field interviewers thoroughly on panel recruitment methods and available resources and processes to (1) overcome respondent objections, (2) resolve restricted access problems, (3) safely and successfully work in dangerous neighborhoods, and (4) reach difficult-to-contact respondents such as those seldom at home.

  • Use of the study logo on all respondent materials and panel Website to maximize brand recognition.

  • Using lead letters, the study brochure, e-mails, and text messages to address frequently asked questions about the panel or individual studies.

  • Emphasizing privacy in all aspects of the panel experience.

  • Using tailored nonresponse letters addressing specific reasons for nonparticipation (see Attachments 3 and 4) at both the screening level as well as during the enrollment phase.

  • Implementing field supervisor review and approval of all noninterview cases.

  • Hiring sufficient numbers of bilingual interviewers so cases are rarely lost because of a Spanish-language barrier.

  • Designing study protocols and questionnaires that simplify the respondent task.

  • Providing easy access to project and information technology (IT) staff to address technical or other questions (see, for example, online technical support request form and password reset scripts in Attachments 3 and 4).

Tracking of movers is also critical to achieving high response rates and maintaining the panel. Detailed contact information will be collected and maintained for each panel member by the panel contractor, including name, address, date of birth, e-mail addresses, telephone numbers, and contact information for relatives or friends who will know how to reach the panel member in the event of a move. A unique 8-digit identification number will be assigned to each sample member and used for storage and retrieval (see A.10: Assurance of Privacy Provided to Respondents for more detail). The locator data will be updated periodically as part of the planned studies. Panel members will also be provided with a means to update their contact information on the panel Website at any time, and encouraged to notify the contractor about upcoming moves or name, address, or telephone number changes via the panel Website. Additionally, forwarding information and address corrections will be requested with any communications provided to panel members via the U.S. Postal Service.

The contractor will deploy both centralized tracing and in-person field tracing to maximize location rates and minimize sample attrition. Tracing professionals in the contractor’s call center will track hard-to-locate sample members using an extensive array of interactive tracing databases and other resources to generate new leads and contact panelists who have relocated. Field interviewers will be trained on in-person tracing techniques, including strategies for generating new contact leads from current residents and neighbors of the panelist’s last known address, as well as relatives and other contact persons, postal carriers, and other local, community sources. Field staff training sessions will include reviews of general tracing procedures and locating strategies that are tailored to specific populations, such as low-income and minority populations.

B.3.2 Nonresponse Bias Assessment

We propose to study and measure nonresponse bias at the original recruitment stage, during sample replenishment, and in at least several of the early experimental or observational studies. Extensive analysis of nonresponse cases and panel members who leave the panel early will be conducted to inform subsequent refusal conversion and panel replenishment activities. This includes development of propensity models predicting the likelihood of panel attrition as a function of demographic characteristics, interviewer observations of the recruitment experience and likelihood of attrition, and historic panel behavior, to identify cases that may need additional contacts and/or interviewer effort to remain in the panel.

We recognize that some panel members will request to end their participation in the panel early, before the end of their 3-year period. We will respect panel members’ decisions to leave the panel early and will provide them a formal disenrollment letter thanking them for their participation and will send any outstanding incentive payments they are owed at the time of their withdrawal. Other panel members may demonstrate their lack of continued interest through a pattern of nonresponse across multiple studies. We will assess each situation individually and make case-level decisions about whether or when to cease contact. For example, cases for panel members who do not respond to two consecutive experimental or observational studies will be reviewed to assess the level of responsiveness to panel maintenance or nonresponse follow-up contacts, and the feasibility of continued contact. If a decision is made to halt further contact efforts, the panel member will be sent a disenrollment letter along with any outstanding incentive payments they are owed. English and Spanish-language versions of the disenrollment letters are provided in Attachments 3-44, 3-45, 4-44, and 4-45.

Nonresponse bias has two contributing components: the nonresponse rate and the difference between the responses of respondents and nonrespondents (Kish, 1965). If both components are small, the bias will be negligible; for the bias to be appreciable, the nonresponse rate must be large, the respondent-nonrespondent difference must be large, or both. For example, the nonresponse bias would be large if older sample members tended not to respond and their tobacco use patterns differed from those of younger sample members.
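
To make the two components explicit, the decomposition can be written in the standard deterministic-model form (a textbook expression consistent with Kish, 1965, not a formula appearing elsewhere in this document):

```latex
\operatorname{Bias}(\bar{y}_r) \;=\; W_{nr}\left(\bar{Y}_r - \bar{Y}_{nr}\right)
```

where \bar{y}_r is the respondent mean, W_{nr} is the nonresponse rate, and \bar{Y}_r and \bar{Y}_{nr} are the population means for respondents and nonrespondents. The product structure shows that the bias is negligible whenever either component is small.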

Although response rates have been used as a key measure of data quality (Biemer & Lyberg, 2003), low response rates are not, by themselves, predictive of nonresponse bias (Groves & Peytcheva, 2008). Researchers have therefore explored alternative indicators for detecting nonresponse bias (Wagner, 2012). We propose using three of the standard methods for assessing bias due to unit nonresponse (Wagner, 2012): response rate subgroup analysis, indirect comparisons of survey outcomes, and comparison of sample survey outcomes with corresponding population benchmarks. We believe these three approaches will identify major sources of nonresponse bias and suggest corrective strategies. Several stages are involved in developing and maintaining the panel. The stage most at risk for nonresponse bias is the original recruitment, which is expected to experience the lowest response rate. Consequently, this is the stage on which we will focus most of our effort, especially since all subsequent panel surveys and estimates will be based on the original recruitment stage. However, we reiterate that a strictly representative panel is not required for the majority of the work currently planned.

B.3.2.1 Compare Response Rates for Subgroups

In this first method, we will calculate and compare response rates for key characteristics (e.g., household size, socioeconomic status, race/ethnicity, geographic location, urbanicity) that are available for both respondents and nonrespondents in the frame files. Because the contractor’s maintained frame is ABS-based with a considerable amount of appended data, we will have an ample supply of indicators for this analysis.

Response rate differences on those key characteristics provide insight into possible nonresponse bias to the extent the characteristics are correlated with the survey outcomes. We will also use those characteristics as independent variables, with the response indicator as the dependent variable, to fit a logistic regression model. Response propensities predicted from the model will be used to compute S(p), the design-weighted standard deviation of the estimated propensities. The R-indicator (Schouten et al., 2009) is then R(p) = 1 − 2S(p), where a value of 1 indicates good representativeness and 0 indicates poor representativeness.
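
A minimal sketch of this computation, assuming a frame-level data set with a response indicator, design weights, and numerically coded covariates (all column names hypothetical):

```python
# Sketch of the R-indicator R(p) = 1 - 2*S(p) from frame data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def r_indicator(frame: pd.DataFrame, covariates: list) -> float:
    X = sm.add_constant(frame[covariates])
    fit = sm.GLM(frame["responded"], X,
                 family=sm.families.Binomial(),
                 freq_weights=frame["wt"]).fit()   # design-based weights
    p_hat = fit.predict(X)                         # estimated response propensities

    # Weighted standard deviation S(p) of the propensities, then
    # R(p) = 1 - 2*S(p): 1 = good, 0 = poor representativeness.
    w = frame["wt"].to_numpy()
    p_bar = np.average(p_hat, weights=w)
    s_p = np.sqrt(np.average((p_hat - p_bar) ** 2, weights=w))
    return 1.0 - 2.0 * s_p
```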

B.3.2.2 Compare Differences of Survey Outcomes Indirectly

For the second method, we propose two approaches that assess nonresponse bias by comparing survey outcomes between respondents and nonrespondents indirectly. Some nonresponse models suggest that units that require more effort to respond—for example, more callbacks, incentives, or refusal conversion—are similar to units that do not respond (Lin & Schaeffer, 1995). The first approach will therefore categorize respondents by level of effort (LOE)—for example, number of contact attempts, whether they ever refused, and early versus late response—and compare survey estimates (weighted by design-based weights) across categories. Differences among LOE categories can give a reasonable indication of the magnitude and direction of nonresponse bias.
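
A minimal sketch of this comparison, with hypothetical column names and illustrative attempt-count cutpoints:

```python
# Sketch of a level-of-effort (LOE) comparison of weighted estimates.
import numpy as np
import pandas as pd

def loe_estimates(resp: pd.DataFrame) -> pd.Series:
    # Group respondents by the effort required to obtain their response:
    # early (1-2 attempts), moderate (3-5), late/hard-to-reach (6+).
    loe = pd.cut(resp["n_attempts"], bins=[0, 2, 5, np.inf],
                 labels=["early", "moderate", "late"])
    # Design-weighted mean of a key outcome within each LOE category;
    # a monotone trend across categories suggests nonresponse bias.
    return resp.groupby(loe).apply(
        lambda g: np.average(g["outcome"], weights=g["wt"]))
```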

The second approach is based on findings from stochastic nonresponse models showing that the nonresponse bias of a mean is a function of the correlation between the response propensity and the survey variables of interest (Bethlehem, 2002). We will use logistic regression to estimate response propensities for all respondents and examine the correlation between the predicted propensity and the survey outcome variables. Each respondent will have a propensity score as well as values for the major outcome variables; correlation between propensity and an outcome variable suggests the presence of nonresponse bias. A related approach is to divide respondents into groups according to their response propensities and compare survey estimates across propensity groups. Either a high correlation between survey outcomes and predicted propensities or differences in survey estimates among propensity groups may indicate that nonresponse bias exists in the panel data.
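
A minimal sketch of both checks, assuming propensities estimated as above have been attached to the respondent file (all names hypothetical):

```python
# Sketch of the propensity-based diagnostics.
import numpy as np
import pandas as pd

def propensity_diagnostics(resp: pd.DataFrame):
    # (1) Correlation between estimated response propensity and a key
    # outcome; a strong correlation suggests bias (Bethlehem, 2002).
    corr = resp["p_hat"].corr(resp["outcome"])

    # (2) Design-weighted outcome means across propensity quintiles;
    # large differences across groups also point to nonresponse bias.
    quintile = pd.qcut(resp["p_hat"], q=5, labels=False)
    group_means = resp.groupby(quintile).apply(
        lambda g: np.average(g["outcome"], weights=g["wt"]))
    return corr, group_means
```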

B.3.2.3 Compare Respondent and Population Benchmarks

We also propose to measure nonresponse bias directly by comparing our panel participants’ distributions with corresponding distributions for the target population. Because we are dealing with the specific population of tobacco users, we will obtain benchmark data from a major national survey such as the NHIS. This will be the source of our gold-standard distributions, and we will measure the extent to which our panel participants approximate those target distributions, using unweighted data for the comparisons. For example, we will compare the distribution of panel characteristics with the corresponding NHIS distribution of tobacco users, jointly evaluating gender, age, socioeconomic status, race/ethnicity, and region. Significant differences on any of these variables indicate the presence of nonresponse bias, which will be flagged and quantified. Furthermore, once we have identified differences in the joint characteristics of the two populations, we will be in a position to use those variables to calculate adjustment weights. A final comparison of weighted panel distributions with the benchmark targets will confirm that the weighting process has brought the sample data in line with the gold standards and thus removed the bias associated with the variables used in the weighting process.
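
As a simple illustration of such a comparison along a single dimension (the actual analysis will evaluate the characteristics jointly), a chi-square goodness-of-fit test can flag departures from a benchmark distribution. The counts and shares below are hypothetical placeholders, not actual panel or NHIS figures:

```python
# Sketch of a benchmark comparison via a chi-square goodness-of-fit test.
import numpy as np
from scipy import stats

panel_counts = np.array([912, 1488, 1600])       # unweighted panel counts by subgroup (hypothetical)
benchmark_shares = np.array([0.25, 0.35, 0.40])  # corresponding benchmark shares (hypothetical)

expected = benchmark_shares * panel_counts.sum()
chi2, p_value = stats.chisquare(panel_counts, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")   # a small p-value flags a distributional difference
```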

B.3.3 Weight Adjustment to Minimize Nonresponse Bias

The results of the nonresponse bias analyses will indicate whether nonresponse bias exists, the magnitude of any bias, and possible methods for reducing it. The design weights will be adjusted for nonresponse, and the nonresponse-adjusted weights will be further poststratified to ACS total population and housing unit counts for important characteristics. We will calculate weights using the contractor’s proprietary software SUDAAN, which uses generalized exponential modeling (Folsom & Singh, 2000) to adjust design weights for nonresponse and coverage imbalance, controlling all variables that show differential response rates or that relate to the survey outcome variables. We expect the nonresponse and poststratification adjustments to reduce nonresponse bias. However, we recognize that these adjustments cannot eliminate nonresponse bias completely and will take that limitation into consideration when analyzing the study data.
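
For illustration only, the sketch below implements simple raking (iterative proportional fitting) of design weights to external control totals; the actual adjustment will use SUDAAN’s generalized exponential model (Folsom & Singh, 2000), which generalizes this approach. All data structures and names are hypothetical:

```python
# Sketch of raking design weights to external control totals.
import pandas as pd

def rake(df: pd.DataFrame, margins: dict, wt_col: str = "wt",
         tol: float = 1e-6, max_iter: int = 100) -> pd.Series:
    w = df[wt_col].astype(float).copy()
    for _ in range(max_iter):
        max_shift = 0.0
        for var, targets in margins.items():        # targets: pd.Series of control totals
            current = w.groupby(df[var]).sum()      # weighted totals by category
            factors = targets / current             # per-category adjustment factors
            w = w * df[var].map(factors).astype(float)
            max_shift = max(max_shift, (factors - 1.0).abs().max())
        if max_shift < tol:                          # stop once all margins are matched
            break
    return w                                         # adjusted weights
```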

B.4 Tests of Procedures

Focus groups (OMB Control No. 0910-0497), involving 49 adult tobacco users with varying demographic characteristics, were used to develop and refine protocols for recruiting panel members and maintaining their interest and involvement during their tenure in the panel. These protocols addressed issues such as length of time in the panel, number and frequency of study requests, panel member incentive strategies, and various panel maintenance methods. Participants were asked to provide feedback on possible approaches and to complete several sample questionnaire items on the two tablet computers being considered for the panel. The focus group sessions explored the following topics:

  • General reactions to the creation of a panel of tobacco users, including willingness to participate and concerns participants may have

  • Willingness to commit to a 2- or 3-year panel period, and participants’ preferences between the two

  • Reaction to the planned monthly contacts to maintain participant interest in the panel

  • Information needed to make an informed decision to join the panel, and how the information should be delivered

  • Reaction to proposed incentives, including cash incentives, tablet computers, and other possible cash or non-cash incentives for study participation

  • Feedback on elements of the equipment agreement associated with the tablet computers

  • Additional methods and materials that could be used to maintain interest in the panel

Feedback from focus group participants (OMB Control No. 0910-0497), as well as discussions with an external consultant on Web panel data collection and with senior contractor methodology, survey, and IT personnel, informed the final design recommendations for the panel. Key recommendations adopted for the panel included:

  • Implementing a cash-based incentive protocol rather than a tablet-based one for most panelists;

  • Utilizing a mixed-mode design to provide an alternative data collection option for those sample members who are technology averse or who will not (or cannot) access the Internet; and

  • Subsampling of nonrespondents to address potential coverage and bias concerns through the limited offer of a study tablet computer (for use while in the panel).

More extensive testing of the panel procedures is planned through the initial panel implementation period that is described in Section B.2.2. The initial panel implementation period will provide an opportunity for testing all field interviewer training protocols, data collection systems, and panel screening and recruitment protocols. It will also provide a small group of panelists who can be used for initial implementation of future study questionnaires and panel maintenance protocols before they are launched for the remaining panel members. FDA and its contractor are committed to continuous improvement throughout the life of the panel.

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The sample design for the panel was developed by senior statisticians in the contractor’s organization, in consultation with FDA statisticians. Contact information for the statistical consultants and FDA statisticians is provided below.

Karol Krotki, PhD

Senior Research Statistician

RTI International

Division of Statistical and Data Sciences

701 13th St. NW, Suite 750

Washington, DC 20005-3967

Ph. 202-728-2485

Patrick Chen, PhD

Senior Research Statistician

RTI International

Division of Statistical and Data Sciences

3040 Cornwallis Rd

Research Triangle Park, NC 27709

Ph. 919-541-6309

Antonio Paredes

Statistician

Food and Drug Administration

Center for Tobacco Products

Office of Science

Division of Population Health Science

10903 New Hampshire Ave

Silver Spring, MD 20993

Ph. 301-796-3866

Nikolas Pharris-Ciuej

Statistician

Food and Drug Administration

Center for Tobacco Products

Office of Science

Division of Population Health Science

10903 New Hampshire Ave

Silver Spring, MD 20993

Ph. 301-796-8875



As discussed in Part A, to inform the design of the panel recruitment and retention strategies, the contractor also engaged the services of a Web survey panel expert in the research community. The consultant participated in discussions with the contractor to review focus group findings (OMB Control No. 0910-0497) discussed above and provided feedback on strategies for recruiting and engaging panel members long-term. Consultant contact information is provided below.

Scott Crawford

Founder, Chief Executive Officer

Survey Sciences Group, LLC

950 Victors Way, Suite 50

Ann Arbor, Michigan 48108

Ph. 734-527-2150




References

Armstrong, J. Scott. 1975. Monetary Incentives in Mail Surveys. Public Opinion Quarterly, 39, pp. 111–116.

Health System Measurement Project (2013). https://healthmeasures.aspe.hhs.gov/measure/268


Baker, R., Blumberg, S., Brick, M., Couper, M., Courtright, M., Dennis, J. M., Dillman, D., Frankel, M., Garland, P., Groves, R., Kennedy, C., Krosnick, J. and Lavrakas, P. 2010. AAPOR Report on Online Panels. Public Opinion Quarterly, 74 (4), pp.711–781.

Baumgartner, Robert and Pamela Rathbun (1997). Prepaid monetary incentives and mail survey response rates. Paper presented at AAPOR, Norfolk, VA.

Bethlehem, J. (2002). Weighting Nonresponse Adjustments Based on Auxiliary Information. In Survey Nonresponse. R.M. Groves, D.A. Dillman, J.L. Eltinge, & R.J.A. Little, eds. pp. 275-278. New York: John Wiley and Sons.

Biemer, P. P., & Lyberg, L. (2003). Introduction to Survey Quality. Hoboken, NJ: Wiley.

Biner, P. M. and Kidd, H. J., 1994. The Interactive Effects of Monetary Incentive Justification and Questionnaire Length on Mail Survey Response Rates. Psychology and Marketing 11:483–492.

Centers for Disease Control and Prevention. Current Cigarette Smoking Among Adults—United States, 2005–2014. Morbidity and Mortality Weekly Report 2015;64(44):1233–40 [accessed 2015 Dec 7].

Church, Allan H. 1993. Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinion Quarterly 57:62–79.

Clark, S. M. and Mack, S.P. 2009. SIPP 2008 Incentive Analysis. Paper Presented at the Federal Committee on Statistical Methodology Research Conference, Washington, D.C.

Cobb, L.C., Lawrence, M.S., and Gross, W., 2012. The Prevalence and Impact of Self-Selection Bias and Panel Conditioning on Smoker Studies Using Established Internet Panels. Presentation for Southern Association of Public Opinion Research (SAPOR).

Coen, T., Lorch, J. and Piekarski, L. 2005. The effects of survey frequency on panelists’ responses. Worldwide Panel Research: Developments and Progress. Amsterdam: ESOMAR.

Creighton, K., King, K. and Martin, E. 2007. The Use of Monetary Incentives in Census Bureau Longitudinal Surveys. Survey Methodology Research Report Series N2007-2. Washington, DC: U.S. Census Bureau.

Cunradi, C. B., Moore, R., Killoran, M., and Ames, G. 2005. Survey Nonresponse Bias among Young Adults: The Role of Alcohol, Tobacco, and Drugs. Subst Use Misuse 40(2): 171–85.

DeBell, M., Krosnick, J. and A. Lupia 2010. Methodology Report and User’s Guide for the 2008-2009 ANES Panel Study. Palo Alto, CA and Ann Arbor, MI: Stanford University and the University of Michigan.

Dillman, D. A. 2000. Mail and Internet Surveys: The Tailored Design Method, 2nd edition. New York: Wiley.

Dillman, D. A., 2007. Mail and Internet Surveys: The Tailored Design Method, 2nd edition. 2007 Update with New Internet, Visual and Mixed-mode Guide. New York: Wiley.

Folsom, R. E., & Singh, A. C. (2000). The generalized exponential model for sampling weight calibration for extreme values, nonresponse, and poststratification. In Proceedings of the American Statistical Association, Survey Research Methods Section, pp. 598-603. Alexandria, VA: American Statistical Association.

Fox, R.J., Crask, M.R., and Kim, J. 1988. Mail Survey Response Rate: A Meta-analysis of Selected Techniques for Inducing Response. Public Opinion Quarterly, 52, 467–491.

Groves, R. M., Couper, M. P., Presser, S., Singer, E., Tourangeau, R., Acosta, G. P., and Nelson, L. 2006. Experiments in Producing Nonresponse Bias. Public Opinion Quarterly 70(5): 720–736.

Groves, R. M., Presser, S., and Dipko, S. 2004. The Role of Topic Interest in Survey Participation Decisions. Public Opinion Quarterly 68(1): 2–31.

Groves, R. M., Singer, E., and Corning, A. 2000. Leverage-Saliency Theory of Survey Participation - Description and an Illustration. Public Opinion Quarterly 64(3): 299–308.

Groves, R., & Peytcheva E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167-189.

Heberlein, T. A. and Baumgartner, R. 1978. Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature. American Sociological Review 3:447-62.

HINTS 4, Cycle 4. 2014. Public Use Dataset updated June 2015: http://hints.cancer.gov/dataset.aspx


Iannacchione, V. G. (2011). The changing role of address-based sampling in survey research. Public Opinion Quarterly, 75(3), 556–575.

James, T. L. 1997. Results of Wave 1 Incentive Experiment in the 1996 Survey of Income and Program Participation. Proceedings of the Survey Research Methods Section of the American Statistical Association, pp.834–839.

Kish, L. (1965). Survey Sampling. New York: John Wiley and Sons.

Kruse, Y., Callegaro, M., Dennis, J. M., DiSogra, C., Subias, S., Lawrence, M., & Tompson, T. 2009. Panel conditioning and attrition in the AP-Yahoo! News Election Panel Study. Paper presented at the American Association for Public Opinion Research (AAPOR) 64th Annual Conference.

Lantz, P. M. 2003. Smoking on the Rise among Young Adults: Implications for Research and Policy. Tobacco Control 12 (Suppl I): i60–i70.

Lengacher, J., Sullivan, C., Couper, M. P., and Groves, R. 1995. Once Reluctant, Always Reluctant? Effects of Differential Incentives on Later Survey Participation in a Longitudinal Survey. Proceedings of the American Statistical Association, Survey Research Methods Section, pp. 1029–1034.

Levine, S. and Gordon, G. 1958. Maximizing Returns on Mail Questionnaires. Public Opinion Quarterly, 22:568-75.

Lin, I. F., & Schaeffer, N. (1995). Using survey participants to estimate the impact of nonparticipation. Public Opinion Quarterly, 59, 236-258.

Linsky, A. 1975. Stimulating Responses to Mailed Questionnaires: A Review. Public Opinion Quarterly, 39, pp. 82–101.

Mack, S., Huggins, V., Keathley, D. and Sundukchi, M. 1998. Do Monetary Incentives Improve Response Rates in the Survey of Income and Program Participation? Proceedings of the American Statistical Association, Survey Research Methods Section, 529–534.

McMichael, J., Ridenhour, J., & Shook-Sa, B. 2008. A robust procedure to supplement the coverage of address-based sampling frames for household surveys. Proceedings of the American Statistical Association, Section on Survey Research Methods, 4329–4335.

Nancarrow, C. & Cartwright, T. 2007. Online access panels and tracking research: The conditioning issue. International Journal of Market Research, 49(5), 435–447.

National Adult Tobacco Survey (2014). http://www.cdc.gov/tobacco/data_statistics/fact_sheets/adult_data/cig_smoking/index.htm.


NHIS. 2014. Public Use Dataset updated June 2015: http://www.cdc.gov/nchs/nhis/nhis_2014_data_release.htm.


Poynter, R. and P. Comley. 2003. Beyond Online Panels. Proceedings of the ESOMAR Technovate Conference. Amsterdam: ESOMAR.

Rodgers, W. 2002. Size of Incentive Effects in a Longitudinal Study. Proceedings of the American Association for Public Research 2002: Strengthening Our Community - Section on Survey Research Methods.

RTI International. 2010. SUDAAN Release 10.

Schiller, J. S., Lucas, J. W., Peregoy, J.A. 2012. Summary health statistics for U.S. adults: National Health Interview Survey, 2011. National Center for Health Statistics. Vital Health Stat 10(256).

Seltzer, C. C., R. Bosse and A. J. Garvey 1974. Mail Survey Response by Smoking Status. American Journal of Epidemiology 100(6): 453–457.

Singer, E., Van Hoewyk, J. and Maher, M. P. 1998. Does the Payment of Incentives Create Expectation Effects? Public Opinion Quarterly, 62: 152–64.

TUS-CPS. 2010–2011. Public Use Dataset updated May 2011: http://thedataweb.rm.census.gov/ftp/cps_ftp.html#cpssupps.


U.S. Census Bureau 2013. Computer and Internet Use in the United States. U.S. Census Bureau publication P20-569. http://www.census.gov/prod/2013pubs/p20-569.pdf

Vestbo, J. and Rasmussen, F. V. 1992. Baseline Characteristics Are Not Sufficient Indicators of Non-Response Bias Follow up Studies. Journal of Epidemiology and Community Health 46(6): 617–619.

Yu, J. and H. Cooper, 1983. A Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research 20: 36–44.

Zickuhr, K. 2013. Who’s not Online and Why. Washington, DC: Pew Research Center; http://www.pewinternet.org/2013/09/25/whos-not-online-and-why/

1 The number of tobacco users in each county is estimated using results from the predictive modeling described in Section 2.1.3.

2 The cutoff values of 2,000 and 31,000 tobacco users correspond to the 25th and 90th percentiles, respectively, of the distribution of county-level estimated numbers of tobacco users.

3 The yearly attrition rate assumes a 90% response rate for each experimental/observational study and a maximum of four studies per year. After the first study, 90% of panel members stay in the panel; after the second study, that drops to 81%; and so on. After four studies, the panel retains approximately 65% (0.9^4 ≈ 0.656) of its original members.
