United States Food and Drug Administration


Study of Multiple Indications in Direct-to-Consumer Television Advertisements

OMB Control No. 0910-NEW


SUPPORTING STATEMENT

Part A. Justification

1. Circumstances Making the Collection of Information Necessary

Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes the FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA regulated products in carrying out the provisions of the FD&C Act.


The Office of Prescription Drug Promotion’s (OPDP) mission is to protect the public health by helping to ensure that prescription drug promotional material is truthful, balanced, and accurately communicated. OPDP’s research program provides scientific evidence to help ensure that our policies related to prescription drug promotion will have the greatest benefit to public health.


Toward that end, we have consistently conducted research to evaluate the aspects of prescription drug promotion that are most central to our mission, focusing in particular on three main topic areas: advertising features, including content and format; target populations; and research quality. Through the evaluation of advertising features, we assess how elements such as graphics, format, and disease and product characteristics impact the communication and understanding of prescription drug risks and benefits. Focusing on target populations allows us to evaluate how understanding of prescription drug risks and benefits may vary as a function of audience, and our focus on research quality aims at maximizing the quality of research data through analytical methodology development and investigation of sampling and response issues. This study will inform the first topic area, advertising features, including content and format.


Because the strength of data and confidence in the robustness of findings are improved through the results of multiple converging studies, we continue to develop evidence to inform our thinking. We evaluate the results from our studies within the broader context of research and findings from other sources, and this larger body of knowledge collectively informs our policies as well as our research program. Our research is documented on our homepage: https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/office-prescription-drug-promotion-opdp-research. The website includes links to the latest Federal Register notices and peer-reviewed publications produced by our office, and it maintains information on studies we have conducted, dating back to a direct-to-consumer (DTC) survey conducted in 1999.


A number of prescription drugs are approved for multiple indications. These indications can be similar in certain respects (e.g., diabetic peripheral neuropathy and fibromyalgia, which are both conditions that manifest in pain) or very different from one another (e.g., diabetic peripheral neuropathy and generalized anxiety disorder). If a drug is approved for multiple indications, sponsors choose whether to promote only one of those indications in DTC television advertising or multiple indications in the same television ad. We are unaware of any quantitative research that addresses how presenting multiple indications in one ad affects consumers’ processing of drug information. Some research suggests that presenting more than one indication in a television ad, regardless of the similarity of the indications, may increase the cognitive load on consumers, thus decreasing their understanding of the drug’s indications (Refs. 1-3).


When more than one indication is presented, the similarity or dissimilarity of the indications may affect participants’ ability to remember and understand the indications. If this is the case, it is not clear whether similarity would have a positive or negative effect in the multimodal context of a television ad (e.g., Refs. 4 and 5).


This study will provide preliminary information on whether consumers face challenges when multiple indications are promoted in a single television ad. The study will also explore whether the similarity of the indications affects participants’ ability to recall and understand the indications, and whether any such effect is positive or negative.


2. Purpose and Use of the Information Collection

Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the benefits and risks of prescription drugs to consumers as clearly and usefully as possible. We propose to test three types of fictional DTC television ads – one that promotes a single indication, one that promotes an indication plus a similar indication, and one that promotes an indication plus a dissimilar indication – using two different medical conditions (Studies 1 and 2). Pretesting will be used to refine the study materials and procedures.

3. Use of Improved Information Technology and Burden Reduction

Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants in the pretests and main studies will self-administer the survey via the Internet, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping surveys to less than 20 minutes.


4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information. The available literature yields little information on this topic.

5. Impact on Small Businesses or Other Small Entities

There will be no impact on small businesses or other small entities. The collection of information involves individuals, not small businesses.

6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.

7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.

8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In accordance with 5 CFR 1320.8(d), FDA published a 60-day notice for public comment in the Federal Register of July 6, 2020 (85 FR 40296). FDA received four submissions that contained PRA-related comments.


Within the four submissions, FDA received multiple comments that the Agency has addressed below. For brevity, some public comments are paraphrased and therefore may not reflect the exact language used by the commenter. We assure commenters that the entirety of their comments was considered even if not fully captured by our paraphrasing in this document.


(Comment) One comment suggested several ideas for other study designs, including: (1) studying consumer reactions to actual ad campaigns; (2) studying consumer reactions to watching a DTC television ad and then viewing a related website; and (3) studying ads for multiple indications with different risk profiles. Another comment suggested another study idea: studying a drug with multiple indications for the same disease.


(Response) We appreciate these alternate study ideas. As this is the first study on this topic, we acknowledge our study cannot answer every research question. We believe these alternate study ideas could be candidates for future research, and we encourage stakeholders to conduct research in this area.


(Comment) One comment recommended using Crohn’s disease or ulcerative colitis rather than leukemia as the dissimilar indication in Study 2 to avoid confusion with adverse effects of common rheumatoid arthritis medications.


(Response) Based on this comment, we plan to use ulcerative colitis rather than leukemia as the dissimilar indication in Study 2.


(Comment) Three comments noted that care should be taken to reduce confounding variables in the study stimuli in terms of length, order and presentation of indications, background and actor profiles, ad quality, and audio and visual effects.


(Response) We can confirm that care has been taken to ensure that we do not have any unintentional confounds across the study conditions. The ads use the same actors, scenes, audio and visual effects and all other design and content features to ensure that all elements are consistent across experimental conditions. We also used the same setting, actors, and ad concept across Study 1 and Study 2 to minimize differences across the two studies. The only aspect that will change is the manipulated content (i.e., script and superimposed text relaying the indications).


(Comment) One comment requested that we clarify how we are defining similar versus dissimilar indications.

(Response) The similar indications have similar clinical manifestations: In Study 1, nerve-related pain for diabetic peripheral neuropathy and fibromyalgia, and in Study 2, joint pain for rheumatoid arthritis and psoriatic arthritis. The dissimilar indications have dissimilar clinical manifestations: In Study 1, nerve-related pain for diabetic peripheral neuropathy and anxiety for generalized anxiety disorder, and in Study 2, joint pain for rheumatoid arthritis and abdominal pain and diarrhea for ulcerative colitis.


(Comment) One comment recommended stratification across conditions for demographics and several health characteristics.


(Response) Typically, stratified randomization is used if there are prognostic variables that correlate with outcome measures and researchers are concerned about such factors not being evenly distributed across groups (Ref. 6). We have no reason to expect that the aforementioned factors would have a strong association with the outcome measures, nor do we have reason to believe that we will not achieve adequate balance of prognostic variables given the large sample size proposed for this study (Ref. 6). Random assignment will help to produce groups which are, on average, probabilistically similar to each other. Because randomization eliminates most other sources of systematic variation, we can be reasonably confident that any effect that is found is the result of the intervention and not some preexisting differences between the groups (Ref. 7). However, we have included questions about demographics and health characteristics, which will enable us to assess their association with our outcomes and statistically control for them if necessary.
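

As a rough illustration of this reasoning, the sketch below simulates simple (unstratified) random assignment of roughly 402 participants to the three ad conditions and then checks covariate balance afterward. The covariate names and values are hypothetical placeholders for the demographic and health characteristics described above, not study data.

```python
# Minimal sketch, assuming a hypothetical participant pool: simple random
# assignment to three conditions followed by a covariate balance check.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2021)

# Hypothetical participants with a few demographic/health covariates.
n = 402
participants = pd.DataFrame({
    "age": rng.integers(18, 80, size=n),
    "education_years": rng.integers(8, 20, size=n),
    "diagnosed_condition": rng.integers(0, 2, size=n),
})

# Simple (unstratified) random assignment to the three experimental conditions.
conditions = ["single_indication", "similar_indication", "dissimilar_indication"]
participants["condition"] = rng.permutation(np.repeat(conditions, n // len(conditions)))

# With roughly 134 participants per cell, covariate means should be similar
# across groups on average; any residual imbalance can be controlled for
# statistically, as noted in the response above.
print(participants.groupby("condition")[["age", "education_years", "diagnosed_condition"]].mean())
```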


(Comment) One comment noted that the sample size per cell should be at least 75 participants.


(Response) We conducted power analyses to determine sample size. We plan to have 134 participants per cell in each study, for a total of 402 participants per study.
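

For illustration only, the sketch below shows how a per-cell sample size in this range can be reproduced with a standard one-way ANOVA power calculation. The effect size, alpha, and power values are assumptions chosen for the example, not the parameters used in FDA’s actual power analysis.

```python
# Minimal sketch, assuming a three-group between-subjects design, a
# small-to-medium effect size (Cohen's f = 0.157), alpha = 0.05, and 80% power.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Solve for the total N needed for a one-way, three-group ANOVA.
total_n = analysis.solve_power(effect_size=0.157, alpha=0.05, power=0.80, k_groups=3)
print(f"Total N required: {total_n:.0f} (about {total_n / 3:.0f} per cell)")
```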


(Comment) One comment noted that recruiting participants with only the primary indication could bias results because participants will be more familiar with their own medical condition. Instead, it suggested that for each study condition we recruit a sample that matches that study condition (e.g., recruiting participants with diabetic peripheral neuropathy or fibromyalgia for the second study condition in Study 1).


(Response) We agree that participants may know more about their own medical condition than the other medical conditions advertised. However, we believe the alternate design offered in the comment would make results difficult to interpret as it would be unclear whether differences were due to the ad manipulations or to the different samples. Instead, we plan to keep the original design. We do not plan to compare participants’ recall, recognition, or comprehension of the primary indication to the second indication (which may lead to the bias noted in the comment). Rather, we plan to compare understanding across the experimental conditions. For instance, we are testing the hypothesis that participants (with diabetes in Study 1 and rheumatoid arthritis in Study 2) who see the first indication alone will be more likely to recall, recognize, and comprehend the first indication compared with participants (with diabetes in Study 1 and rheumatoid arthritis in Study 2) who see the first indication and a second (similar or dissimilar) indication. As another example, we would expect that recall, recognition, and comprehension of the second indication would be higher when the second indication is mentioned in the ad compared with when it is not (e.g., participants are more likely to know the drug is also indicated for fibromyalgia when the ad mentions the fibromyalgia indication). We will measure participants’ familiarity with treatments for each medical condition and assess whether they have been diagnosed with each medical condition. We can use these variables to explore differences among participants. A future study could examine how individuals suffering from fibromyalgia or generalized anxiety, or from psoriatic arthritis or ulcerative colitis (which are secondary indications in the current study) may interpret these ads.


(Comment) One comment suggested recruiting participants with diabetic peripheral neuropathy specifically rather than diabetes in Study 1, while another comment noted that diabetic peripheral neuropathy is underdiagnosed and therefore may present recruitment challenges.


(Response) We plan to retain the diabetes sample for Study 1 to aid recruitment. We will ask participants if they experience diabetes-related pain and whether they have been diagnosed with diabetic peripheral neuropathy.


(Comment) One comment noted concern about the chosen indications because medical conditions can differ from one another in several ways (e.g., prevalence, treatment options) and suggested considering public awareness of the medical conditions.


(Response) We agree that medical conditions vary; this is unavoidable in a study of this kind. To account for this, we plan to conduct two studies using different medical conditions to determine whether the effects replicate across studies. We will measure participants’ familiarity with treatments for the medical conditions in each study.


(Comment) One comment suggested asking participants if they were familiar with the fictitious drug and terminating participants who say yes.


(Response) It is unlikely that many participants will claim to be familiar with the fictional brand name. However, past research has noted the human tendency to falsely recognize content (Ref. 8). While theoretically interesting, the fact that people may falsely recognize our brand should not threaten the internal validity of the current study. Random assignment should guard against systematic differences among groups in terms of false recognition tendency. Nonetheless, we appreciate this concern and in response, we have added a question to the survey to measure familiarity with the brand, which we can then explore in auxiliary analyses, but we do not think participants with false brand familiarity should be removed from the study. Our study sample includes those with rheumatoid arthritis for one of the studies (a condition with lower prevalence in the U.S., about 0.6% of the population). Excluding those with false recognition would impose additional burden on recruitment.


(Comment) One comment suggested that the questionnaire should include the statement “Based on the ad you just saw…” before each question.


(Response) We include this statement and similar language throughout the questionnaire.


(Comment) One comment suggested we measure unaided awareness of the indications, aided awareness of the indications, likelihood to go to the branded drug website to learn more about the drug, and likelihood to ask their doctor about the drug.


(Response) We measure unaided awareness of the indications (benefit recall) in Question 2, aided awareness of the indications (benefit recognition) in Question 3, and likelihood to look for more information about the drug and ask their doctor about the drug in Questions 16 and 17.


(Comment) One comment suggested deleting Questions 2 and 13 in favor of Questions 3 and 14 because these open-ended questions may be difficult for respondents to answer.


(Response) Questions 2 and 13 measure unaided recall of drug benefits and risks whereas Questions 3 and 14 measure recognition of drug benefits and risks. We agree that recall is more difficult than recognition. We plan to retain Questions 2 and 13 but will assess their utility in cognitive interviews and pretesting.


(Comment) One comment suggested using consistent scales on the questionnaire.


(Response) Most questionnaire items have true/false/don’t know or yes/no/don’t know response options. Some items are validated measures with Likert-type scales; for these, we have used the response options from the validated measures.


(Comment) Two comments suggested removing or revising questions 7-10 because participants do not have the medical expertise to say whether someone is a good candidate for a drug. Instead, the comments suggested asking whether the drug is appropriate for them.


(Response) These questions are intended to measure participants’ comprehension of the indications as communicated in the ads. DTC ads can drive consumers to ask their doctors about a drug, so it is important to know whether the drug indication is accurately communicated to consumers. We used similar questions about being a “good candidate” in another study (OMB control number 0910-0885). In cognitive interviews, participants were able to answer the questions and understood that the questions were asking about the drug information in the ad. We also tested alternative language, such as whether it would be appropriate for the person to ask their doctor about the drug, but participants found this language wordy and unnecessary. We do not plan to change these questions at this time, but we will assess participants’ ability to answer them in cognitive interviews and pretesting.


(Comment) Two comments suggested deleting or revising several items (questions 16, 17, 21-24, 26, 27 in one comment, questions 18-27 in the other) because responses to these items may be influenced by the particular stimuli used and by factors other than those being studied.


(Response) These items measure intentions, attitudes, and perceptions. We agree that several factors can influence these outcomes. However, random assignment to conditions allows us to determine whether the experimental manipulation is responsible for differences in these outcomes across conditions. We will retain these items and assess their utility in cognitive interviews and pretesting.


(Comment) One comment suggested combining Questions 30 through 33 into one item and asking it at the beginning of the questionnaire.


(Response) We combined Questions 31 and 32 into one item and moved it to the screener.


(Comment) One comment suggested we ask participants if they have been diagnosed with the indicated medical conditions (diabetic neuropathy, fibromyalgia, etc.).


(Response) These questions are included on the questionnaire.





External Reviewers


In addition to the public comments, OPDP sent the study materials to two individuals for external peer review in 2020 and received their comments. These individuals are:


1. Dominick Frosch, Ph.D., Executive Director and Senior Scientist, Palo Alto Medical Foundation Research Institute; Co-Director, Sutter Health Center for Health Systems Research


2. Susan Mello, Ph.D., Assistant Professor, Communication Studies, Northeastern University


9. Explanation of Any Payment or Gift to Respondents


For completing the pretests and main studies, participants will receive approximately $4.00 in points. Internet panelists are compensated for taking part in surveys using a structured incentive scheme that reflects the length of the survey and the nature of the sample.

Following OMB’s “Guidance on Agency Survey and Statistical Information Collections,” we offer the following justification for our use of these incentives.

Data quality: Providing a market-rate incentive should increase response rates, which in turn should improve the validity and reliability of the data beyond what is possible through other means. Previous research suggests that providing incentives may help reduce sampling bias by increasing participation rates among individuals who are typically less likely to take part in research (such as those with lower education (Ref. 4)). Furthermore, there is some evidence that incentives can reduce nonresponse bias in some situations by bringing in a more representative set of respondents (Refs. 5-6). This may be particularly effective in reducing nonresponse bias due to topic saliency (Ref. 7).

Past experience: The Internet vendor for this study has conducted hundreds of health-related surveys in the past year. The Internet vendor offers incentives to its panel members for completing surveys, with the amount of incentive for consumer surveys determined by the length of the survey and the nature of the sample. Their experience indicates that the requested amount is reasonable for a 20-minute survey.

Reduced survey costs: Recruiting with market-rate incentives is cost-effective. Without such incentives, lower participation rates would likely lengthen the project timeline because participant recruitment would take longer, making data collection slower and more costly.

10. Assurance of Confidentiality Provided to Respondents

No personally identifiable information will be sent to FDA. Data from completed surveys will be compiled into an SPSS or Excel data set by the vendor (Dynata) and sent to the contractor (RTI), with no personally identifiable information (PII), for analysis. All information that could identify individual respondents will be maintained by the subcontractor in a form that is separate from the data provided to FDA. The information will be kept in a secured fashion that will not permit unauthorized access. Confidentiality of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)), and by part 20 of the agency’s regulations (21 CFR part 20). These methods will be reviewed by FDA’s Institutional Review Board prior to collecting any information.


All participants will be assured that the information they provide will be used only for research purposes and will be kept private to the extent allowable by law. In addition, the informed consent (Appendix A) will explain to participants that their information will be kept confidential, that their answers to screener and survey questions will not be shared with anyone outside the research team, and that their names will not be reported with the responses they provide. Participants will be assured that the information obtained from the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.


The Internet panel includes a privacy policy that is easily accessible from any page on the site. A link to the privacy policy will be included on all survey invitations. The panel complies with established industry guidelines and states that members’ personally identifiable information will never be rented, sold, or revealed to third parties except where required by law. These standards and codes of conduct comply with those set forth by the American Marketing Association, the Council of American Survey Research Organizations, and others. All Dynata employees and contractors are required to take yearly security awareness and ethics training based on these standards.

All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products). Following final delivery of the data files to RTI and completion of the project, Dynata will destroy all study records, including data files, upon request.


11. Justification for Sensitive Questions


This data collection will not include sensitive questions. The complete list of questions is available in Appendices B and C.


12. Estimates of Annualized Burden Hours and Costs


12a. Annualized Hour Burden Estimate

FDA estimates the burden of this collection of information as follows:



Table 1. Estimated Annual Reporting Burden¹

Activity                     No. of        No. of Responses   Total Annual   Average Burden        Total
                             Respondents   per Respondent     Responses      per Response          Hours
Pretest 1 & 2 screener       264           1                  264            0.083 (5 minutes)     22
Pretest 1 & 2                132           1                  132            0.333 (20 minutes)    44
Main Study 1 & 2 screener    1,770         1                  1,770          0.083 (5 minutes)     147
Main Study 1 & 2             885           1                  885            0.333 (20 minutes)    295
Total                                                                                              508

¹ There are no capital costs or operating and maintenance costs associated with this collection of information.


These estimates are based on FDA’s and the contractor’s experience with previous consumer studies.
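

As a cross-check on the figures above, the short sketch below reproduces the arithmetic behind Table 1: for each activity, total hours are the number of respondents times responses per respondent times the hourly burden per response, rounded as in the table.

```python
# Minimal sketch reproducing the burden-hour arithmetic in Table 1.
activities = [
    # (activity, respondents, responses per respondent, hours per response)
    ("Pretest 1 & 2 screener", 264, 1, 0.083),
    ("Pretest 1 & 2", 132, 1, 0.333),
    ("Main Study 1 & 2 screener", 1770, 1, 0.083),
    ("Main Study 1 & 2", 885, 1, 0.333),
]

total = 0
for name, respondents, responses, hours in activities:
    line_hours = round(respondents * responses * hours)  # 22, 44, 147, 295
    total += line_hours
    print(f"{name}: {line_hours} hours")
print(f"Total: {total} hours")  # 508, matching Table 1
```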

13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs

There are no capital, start-up, operating or maintenance costs associated with this information collection.

14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the collection of data is $517,941 ($172,647 per year for 3 years). This includes the costs paid to the contractors to program the study, draw the sample, collect the data, and create a database of the results. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the data, and to draft a report ($45,000 over 3 years).


15. Explanation for Program Changes or Adjustments


This is a new data collection.

16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Section B for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
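

As a non-authoritative sketch of what such analyses could look like in practice, the example below runs a one-way analysis of variance and a covariate-adjusted regression on simulated data. The variable names and simulated values are hypothetical and do not reflect the study’s actual analysis code or measures; see Section B for the analysis plan.

```python
# Minimal sketch, assuming a simulated data set with a three-level "condition"
# factor, an "age" covariate, and a "benefit_recall" outcome (all hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 134
conditions = ["single", "similar", "dissimilar"]

df = pd.DataFrame({
    "condition": np.repeat(conditions, n_per_cell),
    "age": rng.integers(18, 80, size=n_per_cell * 3),
    "benefit_recall": rng.normal(loc=0.7, scale=0.15, size=n_per_cell * 3),
})

# One-way ANOVA: does the outcome differ across the three ad conditions?
model = smf.ols("benefit_recall ~ C(condition)", data=df).fit()
print(anova_lm(model, typ=2))

# Regression model adding a covariate, to statistically control for it if needed.
adjusted = smf.ols("benefit_recall ~ C(condition) + age", data=df).fit()
print(adjusted.params)
```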


Table 2. Project Time Schedule

Task                          Estimated Number of Weeks after OMB Approval
Pretest data collected        30 weeks
Main study data collected     90 weeks
Data analysis completed       120 weeks

17. Reason(s) Display of OMB Expiration Date is Inappropriate

No exemption is requested.

18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.























References

1. Mayer, R.E., & Moreno, R. (2003). Nine Ways to Reduce Cognitive Load in Multimedia Learning. Educational Psychologist, 38(1), 43-52.


2. Mutlu-Bayraktar, D., Cosgun, V., & Altan, T. (2019). Cognitive Load in Multimedia Learning Environments: A Systematic Review. Computers & Education, 141, 103618.


3. Betts, K.R., Boudewyns, V., Aikin, K.J., Squire, C., Dolina, S., Hayes, J.J., & Southwell, B.G. (2018). Serious and Actionable Risks, Plus Disclosure: Investigating an Alternative Approach for Presenting Risk Information in Prescription Drug Television Advertisements. Research in Social and Administrative Pharmacy, 14(10), 951-963.


4. Jiang, Y.V., Lee, H.J., Asaad, A., & Remington, R. (2016). Similarity Effects in Visual Working Memory. Psychonomic Bulletin & Review, 23(2), 476-482.


5. Oberauer, K., & Lange, E.B. (2008). Interference in Verbal Working Memory: Distinguishing Similarity-based Confusion, Feature Overwriting, and Feature Migration. Journal of Memory and Language, 58(3), 730-745.


6. Friedman, L.M., Furberg, C.D., & DeMets, D.L. (1998). Fundamentals of Clinical Trials. New York, NY: Springer Science+Business Media.


7. Fisher, R.A. (1937). The Design of Experiments. Edinburgh, United Kingdom: Oliver and Boyd.


8. Southwell, B.G., & Langteau, R. (2008). Age, Memory Changes, and the Varying Utility of Recognition as a Media Effects Pathway. Communication Methods and Measures, 2, 100-114.


