Supporting Statement Part B Adequate Provision

Utilization of Adequate Provision Among Low to Non-Internet Users

OMB: 0910-0853

B. Statistical Methods

  1. Respondent Universe and Sampling Methods

Given that older adults (i.e., those aged 65 and older) are among the largest consumers of prescription drugs1 and that approximately 41 percent of older adults do not use the Internet,2 investigating use of adequate provision in this population is especially important. Also of concern, 34 percent of those with less than a high school education do not use the Internet, 23 percent of individuals with household incomes lower than $30,000 per year do not use the Internet, and 22 percent of individuals living in rural areas do not use the Internet.3 These estimates capture only non-Internet users; when low-Internet users are considered as well, the affected population is larger still. Consistent with these findings, the present research will utilize a nationally representative sample of low to non-Internet users from these and other relevant demographic groups.

Data collection will utilize a random digit dialing (RDD) sample of households pre-identified as non-Internet households or as having at least one non-Internet-using member. This sample solution is ideal because it relies on a dual-frame (landline and cell phone) probability sample, yet has the advantage of prior knowledge of those who are likely to be low to non-Internet users (re-screening will verify this). The Social Science Research Solutions (SSRS) Omnibus, within which this survey will be embedded, utilizes a sample designed to represent the entire adult U.S. population, including Hawaii and Alaska, and including bilingual (Spanish-speaking) respondents. Reflecting the overall population of low to non-Internet users, we intend to collect a small sample of Spanish-speaking individuals, who comprise a subsample of the regular landline and cell phone RDD sampling frames. We may also screen for past and present prescription drug use in order to ensure a motivated sample.

  2. Procedures for Collection of Information

This survey will be conducted by telephone on landline and cell phones, with an expected 50 to 60 percent of interviews conducted on cell phones. Interviewing for the pretest and main study will be conducted via SSRS's computer-assisted telephone interviewing (CATI) system. We expect to achieve a roughly 40 percent survey completion rate from the pre-identified respondents to be sampled in this study, given an 8-week field period and a maximum of 10 attempts to reach respondents. The original SSRS Omnibus from which this sample is derived receives an approximately 8 to 12 percent response rate. These are not uncommon response rates for high-quality surveys and have been found to yield accurate estimates.4,5

As noted earlier, the interview questions focus primarily on the ability and willingness of low to non-Internet users to utilize the various components of adequate provision, particularly the toll-free number and print ad components. In addition to these questions, experimental manipulations will be embedded in the survey as an exploratory test of the impact of opening statements that could be used to introduce risks in DTC prescription drug broadcast ads, a related concept. To form the experimental manipulations, participants will be presented with a statement of major risks and side effects ("the major statement") drawn from a real prescription drug product, but modified to include only serious and actionable risks. Preceding this description of major risks will be one of three opening statements: (1) "[Drug] can cause severe, life threatening reactions. These include…"; (2) "[Drug] can cause serious reactions. These include…"; or (3) "[Drug] can cause reactions. These include…" All risk statements will conclude with the following language: "This is not a full list of risks and side effects. Talk to your doctor and read the patient labeling for more information." Participants will be randomly assigned to experimental condition, and all manipulations will be pre-recorded to allow for consistent administration. Following exposure to these manipulations, participants will respond to several questions designed to assess risk perceptions.

Before the main study, we will execute a pretest with a sample of 25 participants from the same sampling frame as outlined in this document. The pretest questionnaire will take approximately 15 minutes to complete. The goal of the pretest will be to assess the questionnaire's format and the general protocol to ensure that the main study is ready for execution. To test the protocol among the target groups, we will seek to recruit a mix of participants based on demographic and other characteristics of interest. We do not plan to use incentives for the pretest or main study portions of this survey. However, upon request, cell phone respondents may be offered $5 to cover the cost of their cell phone minutes.

Questionnaire development is an iterative process, and so the main study questionnaire will incorporate any changes arising from pretesting, as well as from other sources such as OMB review, public comments, and cognitive interviewing. Like the pretest, the main study questionnaire should take approximately 15 minutes to complete. Based on a power analysis, the main study sample will include approximately 1,996 participants. This sample size will allow us to draw statistical comparisons between the various demographic groups in the sample.

Analysis Plan

Primary research questions of interest are as follows:

  1. What is the ability and willingness of low-/non-internet users to access adequate provision by using the various sources currently available?

  2. Are low-/non-internet users able and willing to seek information online if they have a specific information-seeking goal and if the information is only available online?

  3. Are the print component and toll-free number both necessary to adequate provision, or are low-/non-internet users able and willing to access information in other ways?

  4. What sources are low-/non-internet users using to access prescription-drug product information other than those sources used in adequate provision?

  5. What are low-/non-internet users’ concerns about privacy related to accessing prescription-drug product information, if any?

To address these questions, we will conduct descriptive analyses of sample characteristics and item responses. When appropriate based on distribution, we will perform follow-up univariate and multivariate analyses (e.g., t-tests, correlations, ANOVA, chi-square, and regression) on survey items relating to the key research questions. For the experimental conditions specifically, we will conduct an omnibus analysis of variance (ANOVA) to measure differences between the conditions on follow-up questions relating to perceived risk magnitude, perceived risk likelihood, and perceived risk duration. If significant differences are detected (p < .05), we will follow up the omnibus ANOVA with post hoc comparisons to identify where differences between groups and subgroups exist.
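The two-stage logic above (omnibus ANOVA, then post hoc comparisons only when the omnibus test is significant) can be sketched as follows. This is an illustrative fragment only: the simulated perceived-risk ratings, the 1-to-7 scale, and the three-condition structure are assumptions for demonstration, not the study data or the contractor's analysis code.

```python
# Illustrative sketch: omnibus one-way ANOVA across hypothetical experimental
# conditions, followed by post hoc pairwise comparisons when the omnibus
# test is significant (p < .05). Simulated ratings stand in for survey data.
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)

# Simulated perceived-risk ratings (1-7 scale) for three opening-statement
# conditions; the group means differ so the example has something to detect.
severe = rng.normal(5.5, 1.0, 500).clip(1, 7)
serious = rng.normal(5.0, 1.0, 500).clip(1, 7)
plain = rng.normal(4.5, 1.0, 500).clip(1, 7)

# Omnibus ANOVA across the three conditions.
f_stat, p_value = f_oneway(severe, serious, plain)
print(f"omnibus F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc Tukey HSD comparisons, run only if the omnibus test is significant.
if p_value < 0.05:
    print(tukey_hsd(severe, serious, plain))
```

Tukey HSD is one common choice of post hoc procedure; the document does not specify which correction will be used.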


In order to calculate the minimum sample size required for this research effort, we conducted a power analysis. We assumed a power of .90, an alpha of .05, and a small effect size. In addition, we assumed that the majority of tests will involve t-tests (d = .20) that compare the responses of different subgroups (e.g., older adults versus other offline adults) against each other in order to determine whether any differences exist in their risk perceptions, beliefs, and information seeking behaviors. Given this, if the survey were a simple random sample (SRS) with a 100% response rate, we would recommend a minimum sample size of 1,054 in the main study. This sample size would provide enough power to detect small effects for correlation tests (r = .10). However, nearly every sample survey suffers from nonresponse, and thus team members will conduct survey weighting to account for the sampling design and mitigate various sources of survey error. Although a well-designed weighting strategy will reduce the risk of biased estimates, it will typically increase the variance of estimates. Further, given the potential for four experimental groups, we assume that the majority of tests will involve a series of analysis of variance (ANOVA) tests (f = .10). With this in mind, we recommend obtaining a minimum sample size of 1,996 in the main study, which corresponds to approximately 499 participants per experimental condition and allows for weighted estimates. This estimate is based on a power analysis that indicated a requirement of 1,424 observations, which has been increased by 40% to allow for a design effect of up to 1.4.
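The sample-size arithmetic above can be reproduced with standard power-analysis tools. The sketch below assumes the statsmodels library and the parameters stated in this section (power = .90, alpha = .05, d = .20 for t-tests, f = .10 for a four-group ANOVA, design effect of 1.4); minor rounding differences from the figures reported above are expected.

```python
# Illustrative reproduction of the sample-size calculations described above,
# using the stated assumptions: power = .90, alpha = .05, two-sided tests,
# Cohen's d = .20 for t-tests, Cohen's f = .10 for a four-group ANOVA,
# and a design effect of up to 1.4 to allow for weighting.
import math

from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

# Two-sample t-test (d = .20): n per group, then total for two groups.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.20, alpha=0.05, power=0.90, alternative="two-sided"
)
t_test_total = 2 * math.ceil(n_per_group)  # close to the 1,054 reported above

# Four-group one-way ANOVA (f = .10): total observations required
# (close to the 1,424 reported above).
anova_total = FTestAnovaPower().solve_power(
    effect_size=0.10, alpha=0.05, power=0.90, k_groups=4
)

# Inflate by the design effect to offset the variance increase from weighting.
design_effect = 1.4
weighted_total = math.ceil(anova_total * design_effect)  # near 1,996

print(t_test_total, math.ceil(anova_total), weighted_total)
```

The design-effect inflation is the last step: the SRS-based ANOVA requirement is multiplied by 1.4, which is what brings the target from roughly 1,424 to roughly 1,996 (about 499 per condition).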

  3. Methods to Maximize Response Rates and Deal with Non-response

We expect to achieve a roughly 40% survey completion rate from the pre-identified Omnibus respondents to be sampled in this study, given an eight-week field period and a maximum of 10 attempts to reach respondents. The original SSRS Omnibus from which this sample is derived receives an approximately 8% to 12% response rate. These are not uncommon response rates for high-quality surveys and have been found to yield accurate estimates.6,7 Further, the use of these pre-identified sample members will increase the availability of demographic characteristics for the full survey sample, which will increase the options for assessing the risk of nonresponse bias and/or applying nonresponse weighting adjustments.4

SSRS has developed several techniques to alleviate the problem of nonresponse in order to increase response rates, including:

  • Increasing the number of callbacks placed before considering a sampling unit exhausted.

  • Varying the times of day and the days that callbacks are placed (differential call rule).

  • Explaining the purpose of the survey and stating as accurately as possible the expected length of the interview.

  • Permitting respondents to set the schedule for a callback; allowing them to phone back to the toll-free number.

  • Stating clearly and early that the call is not a sales call.

  • Informing respondents about how they will be well served by the survey results.

In an effort to maximize the response rate in the interview phase, respondents are given every opportunity to complete the interview at their convenience. For instance, those refusing to continue at the initiation of or during the course of the interview will be offered the opportunity to be re-contacted at a more convenient time to complete the interview. Nonresponsive numbers, such as no answers, answering machines, and busy signals, receive six call attempts.

  4. Test of Procedures or Methods to be Undertaken

Two types of pretesting (qualitative and quantitative) are employed as a test of procedures and methods.8 The first type, already conducted, is qualitative: cognitive testing with nine individuals was used to refine study stimuli and questions. Additionally, as described in this package, one round of quantitative pretesting will be employed. The pretest, which will have the same design as the main study, will be used to evaluate the questionnaire's format, the data collection protocol, statistical measures, and any other considerations that may arise. Based on pretest findings, we will refine the survey questions and data collection process, as necessary, to optimize the full-scale study conditions.

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The contractor, Fors Marsh Group, will collect and analyze data on behalf of FDA as a task order under Contract HHSF223201510003B. Brian Griepentrog, Ph.D., is the Project Director, (571) 858-3798. Review of contractor deliverables and supplemental analyses will be provided by the Research Team, Office of Prescription Drug Promotion (OPDP), Office of Medical Policy, CDER, FDA, and coordinated by Kevin R. Betts, Ph.D., (240) 402-5090, and Kathryn Aikin, Ph.D., (301) 796-0569.

1 National Center for Health Statistics. "Health, United States, 2015: With Special Feature on Racial and Ethnic Health Disparities." Hyattsville, MD. 2016.

2 Anderson, M. and A. Perrin (2016). "13% of Americans Don't Use the Internet: Who Are They?" Pew Research Center. Available at

3 Anderson, M. and A. Perrin (2016). "13% of Americans Don't Use the Internet: Who Are They?" Pew Research Center. Available at

4 Brick, J. M. and D. Williams (2013). "Explaining Rising Nonresponse Rates in Cross-Sectional Surveys." The Annals of the American Academy of Political and Social Science, 645: pp. 36-59.

5 Groves, R. M. (2006). "Nonresponse Rates and Nonresponse Bias in Household Surveys." Public Opinion Quarterly, 70: pp. 646-675.

6 Brick, J. M. and D. Williams (2013). "Explaining Rising Nonresponse Rates in Cross-Sectional Surveys." The Annals of the American Academy of Political and Social Science, 645: pp. 36-59.

7 Groves, R. M. (2006). "Nonresponse Rates and Nonresponse Bias in Household Surveys." Public Opinion Quarterly, 70: pp. 646-675.

8 Pretesting is suggested by OMB as a method to test procedures. See Office of Management and Budget Standards and Guidelines for Statistical Surveys (September, 2006). Available at Last accessed January 12, 2012.

