
United States Food and Drug Administration


A Survey on Quantitative Claims in Direct-to-Consumer Prescription Drug Advertising

OMB Control No. 0910-NEW


SUPPORTING STATEMENT

Part A: Justification

1. Circumstances Making the Collection of Information Necessary

Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes the Food and Drug Administration (FDA) to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.


The mission of the Office of Prescription Drug Promotion (OPDP) is to protect the public health by helping to ensure that prescription drug promotion is truthful, balanced, and accurately communicated so that patients and healthcare providers can make informed decisions about treatment options. OPDP’s research program provides scientific evidence to help ensure that our policies related to prescription drug promotion will have the greatest benefit to public health. Toward that end, we have consistently conducted research to evaluate the aspects of prescription drug promotion that are most central to our mission, focusing in particular on three main topic areas: advertising features, including content and format; target populations; and research quality. Through the evaluation of advertising features, we assess how elements such as graphics, format, and the characteristics of the disease and product impact the communication and understanding of prescription drug risks and benefits. Focusing on target populations allows us to evaluate how understanding of prescription drug risks and benefits may vary as a function of audience. Our focus on research quality aims at maximizing the quality of our research data through analytical methodology development and investigation of sampling and response issues. This study will inform the first topic area, advertising features.


Because we recognize that the strength of data and the confidence in the robust nature of the findings are improved through the results of multiple converging studies, we continue to develop evidence to inform our thinking. We evaluate the results from our studies within the broader context of research and findings from other sources, and this larger body of knowledge collectively informs our policies as well as our research program. Our research is documented on our homepage at https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/office-prescription-drug-promotion-opdp-research, which includes links to the latest Federal Register notices and peer-reviewed publications produced by our office.



Direct-to-consumer (DTC) prescription drug advertising may make quantitative claims about the drug’s efficacy or risks (Ref. 1). We conducted a literature review and found that some types of quantitative information are well-studied (e.g., relative frequencies, see also Section 4). In addition, the FDA guidance (“Presenting Quantitative Efficacy and Risk Information in Direct-to-Consumer Promotional Labeling and Advertisements,” available at https://www.fda.gov/media/117573/download) provides general guidelines for presentation of quantitative information about prescription drugs in a consumer-friendly manner. However, many questions remain on how best to communicate certain quantitative information about prescription drugs, including how consumers will interpret specific quantitative claims. For example, we do not have sufficient information about how consumers interpret different claims describing medians (e.g., “People treated with Drug X lived for a median of 8 months” alone or in combination with a definition such as “In people receiving Drug X, this means that about half lived more than 8 months and about half lived less than 8 months” or “A median is the middle number in a group of numbers ordered from smallest to largest”). This study aims to survey U.S. adults about their interpretation of specific quantitative claims, including those regarding medians, that are seen in the marketplace.


2. Purpose and Use of the Information Collection

The purpose of this survey is to collect insights on consumer understanding of quantitative claims in DTC prescription drug advertising. This study will build on previous qualitative research by recruiting a wider range of respondents and weighting the data to make it nationally representative. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the benefits and risks of prescription drugs to consumers as clearly and usefully as possible.
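To illustrate what weighting survey data to be nationally representative can involve, the sketch below shows one common approach, iterative proportional fitting (raking), in Python. The variables, population targets, and sample values here are hypothetical assumptions for illustration only; the study's actual weighting procedure is described in Part B.

    # Illustrative sketch only: raking (iterative proportional fitting) to align
    # sample margins with assumed population benchmarks. Variable names, margins,
    # and library choice are hypothetical, not the study's specified procedure.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "sex": rng.choice(["F", "M"], size=1100, p=[0.6, 0.4]),   # women overrepresented
        "age": rng.choice(["18-39", "40-59", "60+"], size=1100),
    })

    # Assumed population targets (proportions) for each weighting variable.
    targets = {
        "sex": {"F": 0.51, "M": 0.49},
        "age": {"18-39": 0.37, "40-59": 0.33, "60+": 0.30},
    }

    w = np.ones(len(df))
    for _ in range(50):                        # iterate until the margins converge
        for var, margin in targets.items():
            for level, share in margin.items():
                mask = (df[var] == level).to_numpy()
                current = w[mask].sum() / w.sum()
                w[mask] *= share / current     # rescale this cell toward its target

    df["weight"] = w * len(df) / w.sum()       # normalize weights to sum to the sample size
    print(df.groupby("sex")["weight"].sum() / df["weight"].sum())   # ~0.51 / 0.49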

3. Use of Improved Information Technology and Burden Reduction

Automated information technology will be used in the collection of information for this study. We plan to use an address-based, mixed-mode methodology that will direct one randomly chosen member of sampled households to complete a 30-minute online survey, with nonrespondents receiving a paper questionnaire. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant and by keeping survey durations to less than 30 minutes.
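As a simple illustration of directing one randomly chosen member of each sampled household to the survey, a minimal sketch is below. The household rosters and the use of simple random selection are assumptions for illustration, not the study's specified within-household selection procedure.

    # Illustrative sketch only: selecting one adult at random from each sampled
    # household. The rosters and the simple-random-selection rule are hypothetical.
    import random

    households = {
        "HH001": ["adult_1", "adult_2"],
        "HH002": ["adult_1"],
        "HH003": ["adult_1", "adult_2", "adult_3"],
    }

    random.seed(42)
    selected = {hh: random.choice(adults) for hh, adults in households.items()}
    for hh, person in selected.items():
        print(f"{hh}: invite {person} to the 30-minute web survey")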

4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information. This literature review investigated laypersons’ interpretations of statistical terms, expanding on past reviews and including terms that may be used in direct-to-consumer prescription drug promotion. We searched six databases. Articles were included if they were in English and examined comprehension of quantitative or statistical terms by general or lay audiences. We identified 25 eligible articles. There is evidence regarding likelihood ratios, odds ratios, probabilities, numbers needed to treat/harm, confidence intervals, frequencies, percentages, absolute risk reduction, and relative risk reduction. We found no studies examining interpretations of minimum, maximum, central tendency, power, statistical significance, or hazard ratio. The available literature yields little information on the types of claims examined in this survey. We are unaware of duplicative information collection.

5. Impact on Small Businesses or Other Small Entities

There will be no impact on small businesses or other small entities. The collection of information involves individuals, not small businesses.

6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.

7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.

8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In the Federal Register of April 25, 2023 (88 FR 24997), FDA published a 60-day notice requesting public comment on the proposed collection of information. FDA received two submissions that were Paperwork Reduction Act (PRA) related. Within the submissions, FDA received multiple comments that the Agency has addressed in this notice. For brevity, some public comments are paraphrased and, therefore, may not state the exact language used by the commenter. All comments were considered even if not fully captured by our paraphrasing in this document. Comments and responses are numbered here for organizational purposes only.

(Comment 1) One comment suggested testing claims that are addressed by the guidance for industry entitled “Medical Product Communications That Are Consistent With the FDA-Required Labeling - Questions and Answers.”

(Response 1) The focus of this study is not to test such claims. In addition, because the drugs in the survey are fictional, there is no FDA-required labeling with which to compare the claims, so that guidance does not apply to this study. The study results could apply to any quantitative claims similar to those we will test.

(Comment 2) One comment suggested that individuals with a health condition or their caregivers may be more familiar with specific types of quantitative claims. The comment recommended ensuring that an adequate number of such individuals is recruited.

(Response 2) Our intent is to survey a nationally representative sample to get a broad sense of how the public interprets quantitative claims that appear in prescription drug ads across drug classes. In response to this comment, we have added an item to the survey to assess whether participants have the conditions mentioned in the survey (i.e., colon cancer, arthritis, seizures, migraine, lung cancer, heart attack or stroke, eczema) or have cared for someone with these conditions. This will allow us to explore associations between survey responses and experiences with these medical conditions.

(Comment 3) One comment recommended determining participants’ comprehension of information regarding relative risk, absolute risk, relative benefit, and absolute benefit.

(Response 3) There is a body of research on many of these topics; see, for example, the references section in the guidance for industry entitled “Presenting Quantitative Efficacy and Risk Information in Direct-to-Consumer (DTC) Promotional Labeling and Advertisements,” available at https://www.fda.gov/media/169803/download. In this survey, we will examine participants’ interpretations of relative benefit.

(Comment 4) One comment requested information on the number of and demographic diversity of the one-on-one interviews.

(Response 4) Since the 60-day Federal Register notice was published, we conducted 24 interviews (approved under OMB control number 0910-0847). We recruited with demographic diversity in mind. Half (50 percent) of the participants had some college education or more, and half (50 percent) had less education. Overall, 58 percent of the participants were women, 29 percent were non-Hispanic White, 29 percent were non-Hispanic Black, and 25 percent were Hispanic. We also recruited participants of different ages: 42 percent were between the ages of 18 and 39, 29 percent were between the ages of 40 and 59, and 29 percent were 60 and older.

External Reviewers


In addition to public comment, OPDP sent materials for external peer review in 2023 and received comments from:


Ellen Peters, Ph.D.

Philip H. Knight Chair

Director, Center for Science Communication Research

Professor, School of Journalism and Communication and Department of Psychology

University of Oregon


La’Marcus Wingate, PharmD, Ph.D.

Associate Professor

Howard University College of Pharmacy


Daniella Zipkin, M.D.

Professor of Medicine

Duke University School of Medicine


9. Explanation of Any Payment or Gift to Respondents


A large body of literature supports the inclusion of prepaid incentives in a survey mailing to increase response rates (Ref. 2) and, up to a point, larger incentives result in larger effects (Ref. 3). Mercer et al. (Ref. 4) found that a $5 prepaid incentive led to a response rate about 10 percentage points higher than a $1 prepaid incentive. Given the strong evidence for including a prepaid incentive, all sampled addresses will receive the $5 prepaid incentive.


Following OMB’s “Guidance on Agency Survey and Statistical Information Collections,” we offer the following justification for our use of this incentive:


Burden on the respondent: Because participants often have competing demands for their time, and in recent years even more limited time due to pandemic-related challenges (e.g., lack of or limited childcare, increased home or caregiving responsibilities), incentives are used to encourage participation in research. When applied in a reasonable manner, incentives are not an unjust inducement; rather, they are an approach that recognizes the time burden placed on participants, encourages their cooperation, and conveys appreciation for contributing to this important study. The use of incentives treats participants justly and with respect by recognizing and acknowledging the effort that they expend to participate (Refs. 5 and 6). Incentives must be high enough to offset the burden placed on respondents in terms of their time and cost of participation, as well as to provide enough motivation for them to participate in the study rather than another activity.


Data quality/Improved coverage of specialized respondents, rare groups, or minority populations: There is some evidence that using incentives can reduce nonresponse bias in some situations by bringing in a more representative set of respondents. High nonresponse can jeopardize our ability to achieve the target number of completed surveys for the study. Therefore, it is critical to maximize the number of people who respond in order to ensure sufficient statistical power. The use of modest incentives is expected to enhance survey response rates and reduce nonresponse bias. In fact, monetary incentives have been found to increase initial response rates, convert refusals, and reduce subsequent attrition (Refs. 7 to 9).
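As an illustration of how the number of completed surveys relates to statistical power, the sketch below applies a standard two-proportion sample-size formula. The effect size, significance level, and power target are assumptions chosen for illustration, not the study's planned analysis parameters.

    # Illustrative sketch only: a standard two-proportion sample-size calculation
    # showing how completed surveys map to statistical power. The effect size,
    # alpha, and power targets below are assumptions, not study parameters.
    from scipy.stats import norm

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        """Approximate respondents needed per group to detect p1 vs. p2."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

    # E.g., detecting a 10-point difference in correct interpretation (50% vs. 60%)
    print(round(n_per_group(0.50, 0.60)))   # about 385 completes per group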


Reduced survey costs: If the incentive is not adequate, participants may not agree to participate in the study, or they may agree to participate but then drop out early, resulting in an incomplete survey. Low participation may result in inadequate data collection and can cause a difficult and lengthy recruitment process that, in turn, can cause delays in launching the research, both of which lead to increased costs.


Research into the effects of incentives: We will conduct an experiment to assess the efficacy of using a $10 promised incentive. A prepaid incentive helps legitimize the survey and increases the effectiveness of the postpaid incentive; promised incentives alone generally are not that effective at increasing response rates (Ref. 10). Mercer et al. (Ref. 4) found that a promised or postpaid incentive led to increased response rates, but there were diminishing returns on the amount promised. They found that, for phone surveys, offering a $5 promised incentive increased response rates by 4 percentage points, and offering a $10 promised incentive increased them by 5 percentage points. However, offering a $20 promised incentive increased response rates by only one percentage point compared with a $10 incentive and would double the cost per complete. In a longitudinal study, Yu et al. (Ref. 11) found that a promised $10 incentive increased the likelihood of returning a survey by 50 percent. More recently, Ellis et al. (Ref. 12) tested no promised incentive against $5, $10, and $20 promised incentives with an address-based sample and found that the $10 promised incentive obtained a 7-percentage-point increase in response rate over the $5 incentive and an 8-percentage-point increase over no promised incentive. Therefore, we propose to conduct an experiment in which 75 percent of the sample will receive the $10 promised incentive sent upon completion of the survey. The promised $10 incentive will either be mailed to the respondent or, in the case of web surveys, offered as an e-gift card available immediately upon completion. The remaining 25 percent of the sample will not be notified of or provided any promised incentive. We opted to split the sample 75-25 rather than 50-50 because the initial evidence shows the benefits of including a promised incentive, and we aimed to maximize response rates.
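A minimal sketch of how sampled addresses might be randomly assigned to the 75-25 promised-incentive experiment is below. The address identifiers and the specific randomization routine are assumptions for illustration, not the contractor's actual procedure; only the 75-25 split and the sample size from Table 1 are taken from this document.

    # Illustrative sketch only: assigning 75 percent of sampled addresses to the
    # $10 promised-incentive condition and 25 percent to no promised incentive.
    # Address IDs and the randomization routine are hypothetical.
    import random

    random.seed(2024)
    addresses = [f"ADDR{i:04d}" for i in range(1, 2994)]   # 2,993 sampled addresses

    random.shuffle(addresses)
    cutoff = round(len(addresses) * 0.75)
    promised = set(addresses[:cutoff])        # offered $10 upon survey completion
    control = set(addresses[cutoff:])         # not notified of any promised incentive

    print(f"{len(promised)} addresses in the $10 condition, {len(control)} in the control condition")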


The use of incentives treats participants justly and with respect by recognizing and acknowledging the total effort that they expend to respond to the survey and, for paper surveys, to mail it back. In this survey effort, we are asking participants to provide feedback on concepts that require a high level of engagement. The Bureau of Labor Statistics calculated that the average hourly compensation for civilian workers in March 2023, including benefits, was $43.07. At that hourly rate, compensation for 35 minutes, which includes 30 minutes to take the survey and approximately 5 minutes to review the accompanying letter, is approximately $25. Although the incentive is a token of appreciation rather than a wage, this estimate represents the amount participants would earn if they spent the same amount of time working at a job, which helps identify an appropriate minimum incentive for completing the survey. By offering a total of $15 for completing and submitting the survey, we hope to reduce nonresponse early in the data collection protocol and thereby reduce the overall expense of repeat mailings.
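For reference, the compensation arithmetic above works out as follows (a simple check using only the figures already stated in this section):

    # Simple check of the compensation estimate described above.
    hourly_compensation = 43.07      # BLS average hourly compensation, March 2023
    minutes = 30 + 5                 # 30-minute survey plus about 5 minutes to review the letter
    print(round(hourly_compensation * minutes / 60, 2))   # 25.12, i.e., approximately $25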

10. Assurance of Confidentiality Provided to Respondents

In preparing this Supporting Statement, we consulted our Privacy Office to ensure appropriate identification and handling of information collected.


This ICR will collect personally identifiable information (PII). The PII collected typically consists of contact information. PII is collected on behalf of the FDA by a contractor or vendor who conducts surveys. PII is collected to gather information on consumer understanding of quantitative claims in DTC prescription drug advertising. Information collected by the vendor or contractor will be summarized into aggregate form, sent in aggregate to FDA (no PII will be included), and destroyed after the study or interview has been completed. Collected PII is used to notify potential respondents of their selection and includes contact information (address). All information collected will be kept secure by the vendor or contractor. FDA and any vendor or contractor will disclose identifiable information only to the extent authorized by the individual or required by law. Contractors or vendors maintaining information will destroy it in accordance with applicable records retention and other requirements per contract terms after the aggregate information has been provided to FDA and the survey has been completed. In keeping with IRB/Human Subjects Research protocols, the FDA clearance process ensures that study data is appropriately secured (e.g., housed on the Contractor’s servers, password protected, separate storage areas for each study, access controlled).


FDA has determined that, although PII is collected, it is not subject to the Privacy Act of 1974, and the particular notice and other requirements of the Act do not apply. Specifically, the contractor does not use name or any other personal identifier to retrieve records from the information collected.


11. Justification for Sensitive Questions


The collection of information does not involve sensitive questions. The survey is in Appendix A.


12. Estimates of Annualized Burden Hours and Costs


12a. Annualized Hour Burden Estimate

FDA estimates the burden of this collection of information as follows:

Table 1.--Estimated Annual Reporting Burden¹

| Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours |
| --- | --- | --- | --- | --- | --- |
| Read prenotification letter | 2,993 | 1 | 2,993 | 0.08 (5 min.) | 239 |
| Read web survey invitation letter² | 2,843 | 1 | 2,843 | 0.08 (5 min.) | 227 |
| Read reminder postcard | 2,585 | 1 | 2,585 | 0.03 (2 min.) | 78 |
| Respond to survey (web and paper) | 1,100 | 1 | 1,100 | 0.50 (30 min.) | 550 |
| Total | | | | | 1,094 |

¹ There are no capital costs or operating and maintenance costs associated with this collection of information.

² The numbers assume approximately 5 percent postal non-deliverables from the prenotification letter and estimate nonrespondents for the subsequent mailings.

These estimates are based on FDA’s and the contractor’s experiences with previous nationally representative surveys.
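The sketch below simply reproduces the arithmetic behind Table 1 (total hours = number of respondents x responses per respondent x average burden per response), using only the figures shown in the table.

    # Reproducing the burden-hour arithmetic in Table 1 from its own figures.
    activities = [
        ("Read prenotification letter",        2993, 1, 0.08),
        ("Read web survey invitation letter",  2843, 1, 0.08),
        ("Read reminder postcard",             2585, 1, 0.03),
        ("Respond to survey (web and paper)",  1100, 1, 0.50),
    ]

    total = 0
    for name, respondents, per_respondent, burden in activities:
        hours = round(respondents * per_respondent * burden)
        total += hours
        print(f"{name}: {hours} hours")
    print(f"Total: {total} hours")   # 239 + 227 + 78 + 550 = 1,094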


12b. Annualized Cost Burden Estimate


There are no capital costs or operating and maintenance costs associated with this collection of information. As a voluntary collection being administered at FDA’s expense, we estimate no annualized cost to respondents.

13. Estimates of Other Total Annual Costs to Respondents/Recordkeepers or Capital Costs

There are no capital, start-up, operating, or maintenance costs associated with this information collection.

14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the collection of data is $520,000. This includes the costs paid to the contractors to develop the survey, draw the sample, collect the data, analyze the data, and draft a report. The cost also includes the time for FDA staff to design and manage the study. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information.


15. Explanation for Program Changes or Adjustments


This is a new information collection.

16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques for survey data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Part B for detailed information on the design and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined but may include presentations at trade and academic conferences, publications, articles, and internet posting.
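As an illustration of the kinds of conventional techniques mentioned above, the sketch below computes weighted descriptive statistics and a weighted least-squares regression on simulated data. The variable names, weights, and simulated outcome are hypothetical assumptions; the actual design and analysis plan are described in Part B.

    # Illustrative sketch only: weighted descriptive statistics and a weighted
    # least-squares regression on simulated survey data. All variables and values
    # here are hypothetical; see Part B for the actual design and analysis plan.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1100                                     # planned number of completed surveys
    condition = rng.integers(0, 2, n)            # e.g., which claim wording was shown (0/1)
    weights = rng.uniform(0.5, 2.0, n)           # survey weights from the weighting step
    accuracy = 0.5 + 0.1 * condition + rng.normal(0, 0.25, n)   # simulated outcome

    # Weighted mean of the outcome within each experimental condition
    for c in (0, 1):
        m = np.average(accuracy[condition == c], weights=weights[condition == c])
        print(f"condition {c}: weighted mean = {m:.3f}")

    # Weighted least squares via the normal equations: (X'WX) b = X'Wy
    X = np.column_stack([np.ones(n), condition])
    Xw = X * weights[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ accuracy)
    print(f"intercept = {beta[0]:.3f}, estimated condition effect = {beta[1]:.3f}")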


Table 2.--Project Time Schedule

| Task | Estimated Number of Weeks after OMB Approval |
| --- | --- |
| Main study data collected | 24 weeks |
| Data analysis and report completed | 48 weeks |

17. Reason(s) Display of OMB Expiration Date is Inappropriate

No exemption is requested.

18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.

References

1. Sullivan, H.W., K.J. Aikin, and L.B. Squiers, “Quantitative Information on Oncology Prescription Drug Websites,” Journal of Cancer Education, vol. 33, Issue 2, pp. 371–374, 2018. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5334459)

2. Sun, H., J. Newsome, J. McNulty, et al., “What Works, What Doesn’t? Three Studies Designed to Improve Survey Response,” Field Methods, vol. 32, Issue 3, pp. 235-252, 2020. (https://doi.org/10.1177/1525822X20915464)

3. Brick, J.M., D. Williams, and J.M. Montaquila, “Address-Based Sampling for Subpopulation Surveys,” Public Opinion Quarterly, vol. 75, Issue 3, pp. 409-428, 2011.

4. Mercer, A., A. Caporaso, D. Cantor, et al., “How Much Gets You How Much? Monetary Incentives and Response Rates in Household Surveys,” Public Opinion Quarterly, vol. 79, Issue 1, pp. 105-129, 2015.

5. Halpern, S.D., J.H. Karlawish, D. Casarett, et al., “Empirical Assessment of Whether Moderate Payments Are Undue or Unjust Inducements for Participation in Clinical Trials,” Archives of Internal Medicine, vol. 164, Issue 7, pp. 801–803, 2004.

6. Russell, M.L., D.G. Moralejo, and E.D. Burgess, “Paying Research Subjects: Participants’ Perspectives,” Journal of Medical Ethics, vol. 26, Issue 2, pp. 126–130, 2000.

7. Guyll, M., R. Spoth, and C. Redmond, “The Effects of Incentives and Research Requirements on Participation Rates for a Community-Based Preventive Intervention Research Study,” Journal of Primary Prevention, vol. 24, Issue 1, pp. 25-41, 2003.

8. Pit, S.W., T. Vo, and S. Pyakurel, “The Effectiveness of Recruitment Strategies on General Practitioner’s Survey Response Rates—A Systematic Review,” BMC Medical Research Methodology, vol. 14, Issue 1, p. 76, 2014.

9. Singer, E., and R.A. Kulka. “Paying Respondents for Survey Participation,” In M. Ver Ploeg, R.A. Moffitt, and C.F. Citro (Eds.), Studies of Welfare Populations: Data Collection and Research Issues, Washington, D.C.: National Academy Press, 2002.

10. Cheung, Y.T.D., X. Weng, M.P. Wang, et al., “Effect of Prepaid and Promised Financial Incentive on Follow-Up Survey Response in Cigarette Smokers: A Randomized Controlled Trial,” BMC Medical Research Methodology, vol. 19, Article 138, 2019. (https://link.springer.com/article/10.1186/s12874-019-0786-9)

11. Yu, S., H.E. Alper, A.M. Nguyen, et al., “The Effectiveness of a Monetary Incentive Offer on Survey Response Rates and Response Completeness in a Longitudinal Study,” BMC Medical Research Methodology, vol. 17, Article 77, 2017. (https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-017-0353-1)

12. Ellis, J., J. Charbonnier, C. Lowenstein, et al., “Assessing the Impacts of Different Incentives and Use of Postal Mail on Response Rates,” presented at the American Association for Public Opinion Research (AAPOR) Conference, Chicago, IL, May 2022.






