
United States Food and Drug Administration

Examination of Secondary Claim Disclosures and Biosimilar Disclosures

in Prescription Drug Promotional Materials

OMB Control No. 0910-NEW

SUPPORTING STATEMENT

Part A. Justification

  1. Circumstances Making the Collection of Information Necessary

Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA regulated products in carrying out the provisions of the FD&C Act.


The Office of Prescription Drug Promotion’s (OPDP) mission is to protect the public health by helping to ensure that prescription drug promotion is truthful, balanced, and accurately communicated. OPDP’s research program provides scientific evidence to help ensure that our policies related to prescription drug promotion will have the greatest benefit to public health. Toward that end, we have consistently conducted research to evaluate the aspects of prescription drug promotion that are most central to our mission. Our research focuses in particular on three main topic areas: advertising features, including content and format; target populations; and research quality. Through the evaluation of advertising features we assess how elements such as graphics, format, and disease and product characteristics impact the communication and understanding of prescription drug risks and benefits; focusing on target populations allows us to evaluate how understanding of prescription drug risks and benefits may vary as a function of audience; and our focus on research quality aims at maximizing the quality of research data through analytical methodology development and investigation of sampling and response issues. This study will inform the first two areas: advertising features and target populations.


Because we recognize that the strength of data and the confidence in the robustness of the findings are improved by utilizing the results of multiple converging studies, we continue to develop evidence to inform our thinking. We evaluate the results from our studies within the broader context of research and findings from other sources, and this larger body of knowledge collectively informs our policies as well as our research program. Our research is documented on our homepage, which can be found at: https://www.fda.gov/aboutfda/centersoffices/officeofmedicalproductsandtobacco/cder/ucm090276.htm. The website includes links to the latest Federal Register notices and peer-reviewed publications produced by our office. The website maintains information on studies we have conducted, dating back to a survey on direct-to-consumer (DTC) advertisements conducted in 1999.


The purpose of this research is to build on prior FDA research on the topic of disclosures by examining the impact of disclosures of two different types of information, detailed later in this notice. The literature on disclosures suggests their effectiveness is subject to format, design, and audience factors, among other things (Ref. 1). For example, research on consumer attitudes has found some people believe that FDA evaluates certain dietary supplement claims despite the presence and consumer awareness of language required by the Dietary Supplement Health and Education Act (DSHEA), which clearly states that FDA has not evaluated those claims (Refs. 2-3). In the context of prescription drug promotion, there is initial evidence that—when noticed—disclosures may effectively convey important information (Refs. 4-6); however, what role disclosures may play in educating or correcting misunderstanding warrants further investigation. We will study the role of disclosures through two concurrent studies referred to in this document as Phase 1 and Phase 2. We describe each below.


Phase 1: In the new study proposed here, the first type of disclosed information we will examine is clinical benefit information based on a secondary endpoint reported in a product’s approved labeling (a secondary claim). In some cases, truthful and non-misleading presentations about secondary endpoints in well-designed clinical studies can provide reliable information about treatment effects that may be distinct from the treatment effects described in the product’s indication statement. For example, a product may be indicated to treat a specific type of cancer based on a primary endpoint of survival. However, a secondary endpoint in the study of that product may provide data about an additional distinct benefit, such as functional status.


Phase 1 of the proposed research will examine the impact of adding a disclosure about a secondary claim in DTC and healthcare provider (HCP)-directed promotion in the context of a prescription drug website. We will also examine the effect of the presence of a comparative claim about the secondary claim. Our proposed main outcome measures are perceptions of and attitudes toward the product, the secondary claim, and the disclosure. We will examine four levels of secondary claim disclosure (a statement that the secondary benefit is not one of the indicated uses of the product, i.e., not a treatment for [the secondary benefit claim]; quantitative information about the claim; both the statement and the quantitative information; or no disclosure) and two levels (presence or absence) of a comparative element regarding the secondary claim, for a total of eight experimental conditions (see Figure 1). Participants will be randomly assigned to one of these conditions and will view one version of a website. This 4 x 2 design will be replicated across two target populations (HCPs and consumers).



Figure 1. Phase 1 Study Design

Phase 1: Secondary Claim Disclosure by Comparative Secondary Claim in Online Prescription Drug Websites

Secondary claim disclosure (four levels):

  1. "Drug X is not a treatment for [claim]"

  2. "In a clinical trial, participants [quantitative information] on Drug X"

  3. "Drug X is not a treatment for [claim]" AND "In a clinical trial, participants [quantitative information] on Drug X"

  4. None (no secondary claim disclosure)

Comparative secondary claim (two levels, crossed with each disclosure level):

  Present: "Compared to [xx] on Drug Y"

  Absent

The resulting eight cells (4 x 2) are replicated in each of the two samples: HCPs and consumers.


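For illustration only, the following Python sketch enumerates the eight Phase 1 cells shown in Figure 1 and randomly assigns a participant to one of them. The condition labels are paraphrases created for this sketch; they are not the study stimuli, and this is not the assignment software actually used for the study.

import itertools
import random

# Paraphrased levels from Figure 1 (illustrative labels only, not the actual stimuli text)
DISCLOSURE_LEVELS = [
    "not_a_treatment",             # "Drug X is not a treatment for [claim]"
    "quantitative_info",           # "In a clinical trial, participants [quantitative information] on Drug X"
    "not_a_treatment_plus_quant",  # both statements
    "none",                        # no secondary claim disclosure
]
COMPARATIVE_LEVELS = ["present", "absent"]  # "Compared to [xx] on Drug Y" vs. no comparison

# Fully crossed 4 x 2 design yields eight experimental cells
CONDITIONS = list(itertools.product(DISCLOSURE_LEVELS, COMPARATIVE_LEVELS))
assert len(CONDITIONS) == 8

def assign_condition(rng):
    """Simple random assignment of one participant to one of the eight cells."""
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    rng = random.Random(2021)  # fixed seed so the illustration is reproducible
    # The same eight-cell design is replicated in each sample (HCPs and consumers).
    for sample in ("HCP", "consumer"):
        disclosure, comparative = assign_condition(rng)
        print(f"{sample}: disclosure={disclosure}, comparative={comparative}")
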
Phase 2: The second, independent phase of the proposed research will examine disclosures about a biosimilar product. In both consumer and HCP audiences, we will assess the impact of a disclosure designating the product as a biosimilar as well as varying basic factual statements about biosimilars. Phase 2 will examine the impact of: (1) adding a disclosure designating the product as a biosimilar; (2) adding general informational statements about biosimilars; and (3) naming a reference product. This approach allows us to examine the effect of disclosing biosimilar status, the additive effect of including one, two, or three additional basic statements of information about biosimilars, and the effect of naming the reference product. Our proposed main outcome measures are perceptions of and attitudes toward the biosimilar product and the disclosure. We propose to examine seven different disclosure conditions plus a control with no disclosure for a total of eight test conditions.


The studies will be conducted online. Following exposure to the stimuli, participants will be asked to complete a 20-minute questionnaire that assesses comprehension, perceptions, intentions, and demographics. In the pretest, participants will also answer questions about the study design and questionnaire.





  2. Purpose and Use of the Information Collection

The purpose of this research is to build on prior FDA research on the topic of disclosures by examining: (1) the impact of disclosures of clinical benefit information based on a secondary endpoint reported in a product’s approved labeling (a secondary claim), with or without a comparative claim; and (2) the impact of a disclosure designating the product as a biosimilar, along with varying basic factual statements about biosimilars. We will use the results of this research to better understand whether disclosures related to secondary endpoints and comparative claims help consumers and HCPs better understand those claims, and whether disclosures about the properties of a biosimilar affect understanding of what a biosimilar is. The findings of this study will also help inform FDA’s understanding about when disclosures such as these might be useful and what types of information should be included.

  3. Use of Improved Information Technology and Burden Reduction

Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants in the pretests and main studies will self-administer the survey via the Internet, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping surveys to less than 20 minutes.

  4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information. The available literature yields little information on this topic.

  5. Impact on Small Businesses or Other Small Entities

There will be no impact on small businesses or other small entities. The collection of information involves individuals, not small businesses.

  6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.

  7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.



  8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In the Federal Register of July 7, 2020 (85 FR 40659), FDA published a 60-day notice requesting public comment on the proposed collection of information. FDA received eight submissions. Three submissions (regulations.gov tracking numbers 1k4-9hoh-uskf, 1k4-9itu-fj33, and 1k4-9its-ko9f) were outside the scope of the research and are not addressed further. Within the remaining five submissions, FDA received multiple comments that the Agency has addressed below. For brevity, some public comments are paraphrased and therefore may not reflect the exact language used by the commenter. We assure commenters that the entirety of their comments was considered, even if not fully captured by our paraphrasing in this document. The following acronyms are used here: HCP = healthcare provider; FDA and “The Agency” = Food and Drug Administration; DTC = direct-to-consumer; OPDP = FDA’s Office of Prescription Drug Promotion.


(Comment 1) Two comments were supportive of the study, and one comment was supportive of the study’s inclusion of both HCP and consumer samples.


(Response 1) We thank the commenters for their support of the research.


(Comment 2) One comment asserted that FDA has not made the stimuli available for public comment.


(Response 2) Our full stimuli are under development during the PRA process. We do not make draft stimuli public during this time because of concerns that this may contaminate our participant pool and compromise our research. In our research proposals, we describe the purpose of the study, the design, the population of interest, and the estimated burden.


(Comment 3) Two comments recommended FDA ensure the wording of the stimuli in both phases is appropriate to each audience (HCP and consumer), and one comment suggested FDA partner with a health literacy organization.


(Response 3) We assessed understanding of both the consumer and provider versions of statements through in-depth cognitive interviews and will also do so in our survey. Findings from our cognitive interviews suggest that most consumers understood the gist of this information, although they were not always familiar with some terminology. The stimuli in both phases use language appropriate to each sample and, where possible, use plain language in the consumer versions for greater clarity. We crafted the statements about biosimilars using terminology from FDA’s Biosimilar Basics Patient Materials (https://www.fda.gov/drugs/biosimilars/patient-materials). However, when examining perceptions around complex concepts, such as biosimilars, plain language substitutes for certain terms are not always available.


(Comment 4) One comment suggested we measure diabetes and obesity comorbidities of the Phase 1 consumer sample. One comment suggested we restrict the Phase 2 sample to consumers who have rheumatoid arthritis (RA), half of whom are being treated with a biologic for that condition, and one comment suggested we only sample rheumatologists.


(Response 4) In Phase 1 we are measuring participants’ self-reported diagnosis of type 2 diabetes, knowledge about the disease and treatments for type 2 diabetes and weight loss, and prior experience with type 2 diabetes and weight loss treatment. These will be used as covariates in the analyses, where appropriate.


With respect to the suggestion to limit the sample to diagnosed consumers and rheumatologists in Phase 2, there are several factors to consider. Diagnosed sample participants are likely to be more motivated to read the ad because it is relevant to their medical condition. On the other hand, participants in that sample are also more likely to be familiar with treatments for their condition and bring with them prior knowledge that may influence their responses. As in Phase 1, we will assess treatment familiarity and diagnosis among our general population sample and control for those variables. While we understand that the Phase 2 topic may be relevant for specialists, and we do often include specialists in our research, we chose not to limit our HCP sample. Recruiting from a wider HCP sample is more reflective of the reality of the healthcare environment, where patients interact with HCPs across multiple specialties and areas of expertise. Further, specialists make up a small proportion of HCPs, which makes them harder to recruit. In 2020, for example, the proportion of specialists representing each specialty area ranged from 3 percent (endocrinologists) to 17 percent (emergency medicine specialists) (Ref. 7). These data demonstrate that the pool of potentially eligible specialists is limited.


(Comment 5) One comment suggested we focus the study on patients rather than HCPs, as patients' knowledge levels are low, or perhaps conduct separate but parallel studies of both HCPs and patients.


(Response 5) The study will be conducted among two separate populations, consumers from the general population and HCPs. As shown in Figure 1, the study design incorporates parallel arms for consumers and HCPs.


(Comment 6) One comment suggested FDA ensure a sufficient sample size to conduct rigorous statistical analysis.


(Response 6) We conducted power analyses to determine the sample size per study arm and will have a sufficient sample to rigorously test our research questions.

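For illustration only, a power calculation of the kind referenced above could be run as in the following sketch. The effect size, alpha, power, and number of groups shown are assumed values chosen for the example; they are not the planning parameters actually used for this study.

# Illustrative power calculation; effect size (Cohen's f), alpha, power, and k_groups
# are assumed values for this sketch, not the study's actual planning parameters.
import math
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(
    effect_size=0.25,  # assumed "medium" effect (Cohen's f)
    alpha=0.05,
    power=0.80,
    k_groups=8,        # e.g., 4 disclosure levels x 2 comparative levels in Phase 1
)
per_cell = math.ceil(total_n / 8)
print(f"Total N of approximately {math.ceil(total_n)}; about {per_cell} participants per cell")
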

(Comment 7) Two comments suggested studying comparative claims in a separate study to reduce participant burden and confusion.


(Response 7) Our proposed design examines the impact of adding comparative and quantitative information to the disclosure of interest (see Figure 1). Each participant will see only one claim. Because these variables are fully crossed in the design, we will be able to examine the impact of comparative information and quantitative information separately.


(Comment 8) One comment asked FDA to explain the added value and appropriateness of including disclosures in biosimilar product promotional materials. The comment cautioned that disclosures must not be couched in cautionary or negative terms or include statements that are ambiguous or of minimal relevance to patients.


(Response 8) Currently, FDA neither requires nor prohibits biosimilar-related disclosures in biosimilar product promotion, and this research does not presuppose or reflect any established FDA position on their value. FDA is using this research to gather information to assess how certain biosimilar product disclosures, if they are used in promotion, could impact perceptions. Our study seeks to test several variations of biosimilar statements. We specifically examined potential negative reactions during in-depth cognitive interviews. Participants in our interviews expressed that the language was neutrally worded, and participants did not perceive the statements to be negative or cautionary.


(Comment 9) One comment questioned whether there was a control group in the Phase 2 questionnaire and suggested a control group that will not identify the product as a biosimilar be included.


(Response 9) The Phase 2 study includes a control condition where the promotional material does not identify the product as biosimilar.


(Comment 10) One comment noted that the prescribing information for a biosimilar does not include a named reference product and questioned why FDA is mandating inclusion of a named reference product in biosimilar promotional materials.


(Response 10) Sponsors may choose to disseminate promotion in which a comparator product is named. These comparative promotions exist in the marketplace. One purpose of Phase 2 is to examine the difference between a disclosure statement that includes a named comparator and one that refers to a comparator generally. The fact that FDA is conducting research that includes specific disclosures does not create a requirement that sponsors use any of those disclosures, nor does it create any other requirement.


(Comment 11) Two comments suggested concepts that should be conveyed in the biosimilar disclosures. One comment stressed the importance of the tone of the disclosure statement about biosimilars. The following key messages were proposed for inclusion in the study:


  1. Patients can expect that biosimilars will provide the same safety and effectiveness as the reference product.

  2. FDA has a rigorous review and approval process, applying the same high-quality standards to both biosimilars and reference products.

  3. Patients have been benefitting from the use of biosimilars for many years.




The second comment suggested the study should also examine the impact of adding information about the list of extrapolated indications and the rationale for extrapolating indications to a biosimilar product, in order to assess the impact on HCP perceptions.


(Response 11) This study seeks to test several variations on potential biosimilar statements but does not attempt to test all possible statements. We decline to expand this study to test additional content like that suggested by the comments, but other content may be considered in future research. With regard to the comment about tone, for the disclosure variations that we will test, we examined potential negative reactions during in-depth cognitive interviews. Participants in our interviews expressed that the language was neutrally worded, and participants did not perceive the statements to be negative or cautionary. An examination of how HCPs perceive a biosimilar based on extrapolated indications is beyond the scope of this research. It may be considered in future research.


(Comment 12) One comment suggested Phase 1 and Phase 2 be converted to separate studies.


(Response 12) Phase 1 and Phase 2 are intended to be two separate studies that are being examined concurrently for efficiency. We will make this distinction clear in any discussion of results.


(Comment 13) One comment recommended FDA narrow the scope of the research to questions within its jurisdiction and eliminate overlap with other ongoing research.


(Response 13) As explained earlier, the Public Health Service Act authorizes FDA to conduct research relating to health information, and the FD&C Act authorizes FDA to conduct research relating to drugs and other FDA regulated products in carrying out the provisions of the FD&C Act. The study is within FDA’s authority, and it will help to inform OPDP’s work to help ensure that prescription drug information is truthful, balanced, and accurately communicated, so that HCPs and consumers can make informed decisions. While the comment did not identify any specific ongoing research as overlapping, we note that in general, OPDP may conduct concurrent or overlapping studies on similar topics to serve these goals.


(Comment 14) One comment suggested participants be permitted to refer back to the stimuli while answering questions.


(Response 14) For this study we will instruct participants to read the material carefully and alert them that they will be answering several questions about the content that they just saw. The goal of this study is not to assess participants’ comprehension of detailed, verbatim information in the stimuli, for which repeated exposures to study stimuli may be more appropriate. Rather, our study will determine whether experimental manipulation of the disclosure language influences “gist” understanding of the information, attitudes, and perceptions (Ref. 8). Allowing for multiple exposures to the stimuli could potentially influence these outcomes. A large body of literature supports the presence of a “mere exposure effect” in social science research, whereby more exposure enhances processing and increases positive affect toward stimuli (Refs. 9 and 10).


(Comment 15) One comment stated the research lacks practical utility because it treats the secondary benefit claim as not related to the product’s indicated uses. The comment recommended that FDA revise Phase 1 of the study to reflect that secondary endpoints are not inherently unapproved uses and to focus instead on comprehension of what is a primary versus a secondary endpoint in the data supporting a drug’s approval.


(Response 15) In this study, we are not making a generalization about the approval status of secondary endpoints. We are examining the specific case of a disclosure about a secondary endpoint that, while it may be related to the product’s primary indication, is not in itself an indication for the product and was not evaluated in such a way as to support drawing conclusions about the product’s effect on that endpoint. In this scenario, a disclosure about the secondary claim may help the audience interpret the secondary claim and provide context. The purpose of this study is to evaluate such disclosures about this specific type of secondary claim and measure the impact on perceptions of and attitudes toward the product, the secondary claim, and the disclosure. For instance, we will vary such elements as the presence of quantitative information about the secondary claim and the presence of comparative information (see Figure 1 for the full design). We note that there are examples of prescription drug ads currently in use that contain language similar to what we are evaluating in order to qualify secondary endpoints, thus highlighting the practical utility of this research.


(Comment 16) One comment suggested changes to the instructions for Phase 2 to state that the study is intended to “assess your understanding of and reactions to biosimilar biologic drug disclosures.”


(Response 16) The control condition does not identify the product as biosimilar. To maintain the internal validity of the study and avoid potentially biasing participants’ responses, we will keep the instructions as they are.


(Comment 17) One comment suggested changing the dosage route and strength of the reference product to be consistent with currently marketed biologics.


(Response 17) We have made this change.


(Comment 18) Two comments asked that the name of the reference product be changed to one that is fictitious.


(Response 18) We have made this change and will use a fictitious reference product name.


(Comment 19) One comment suggested stratifying the sample on several variables. The comment suggested that obesity and diabetes diagnosis be considered specifically for Phase 1, as well as variables like disease severity, treatment history (e.g., patients who have never received a biologic versus biologic-experienced patients), and knowledge of the studied condition for both phases.


(Response 19) Typically, stratified randomization is used if there are prognostic variables that correlate with outcome measures and researchers are concerned about such factors not being evenly distributed across groups (Ref. 11). We have no reason to believe that we will not achieve adequate balance of prognostic variables given the large sample size proposed for this study (Ref. 11). Random assignment will help to produce groups that are, on average, probabilistically similar to each other. Because randomization eliminates most other sources of systematic variation, we can be reasonably confident that any effect that is found is the result of the intervention and not some preexisting differences between the groups (Ref. 12). Our survey includes several questions about health and medical demographics that will enable us to assess their association with our outcomes and statistically control for them if necessary.


(Comment 20) One comment suggested using consistent scales throughout the study and adding “based on the ad you just saw” to many of the questions.


(Response 20) As suggested, we have added statements in the instructions for respondents to answer based on the promotion they “just saw” for clarification. Where possible, we have used validated measures and have retained the scale endpoints of those measures. We do not believe that these varied types of questions will pose difficulties for respondents as we did not find evidence of difficulties in cognitive testing.


(Comment 21) One comment suggested deleting or revising Phase 1 Questions 4 to 7 to focus on whether the participant understands that the secondary use is linked to the approved primary indication.


(Response 21) Our collection of constructs and measures, grounded in behavioral theory (Refs. 1 to 3), assesses perceptions, attitudes, understanding, and intentions around prescription drug disclosures. Based on cognitive testing, we have removed these questions.


(Comment 22) One comment suggested deleting Phase 1 Questions 9, 15, and 16 because they deal with the practice of medicine.


(Response 22) The intent of Question 9 is to assess understanding of the secondary claim disclosure, which explains that even though the drug is not indicated for weight loss, it can help some people lose weight. Based on cognitive testing, we have revised the question to more specifically assess potential misperceptions of the claim (“[drug name] is for weight loss”). Questions 15 and 16 are intended to assess perceptions about the magnitude of the drug’s benefit--with regard to both the indication (reduction in A1C levels) and the secondary claim (weight loss)--based on the information in the website. Based on cognitive testing, we have revised these questions to read “How much do you think [drug name] would lower A1C levels for patients with type 2 diabetes?” and “How much do you think [drug name] would help with weight loss for patients with type 2 diabetes?” It is a proper subject for FDA research to study whether particular framing of statements contributes to HCPs’ accurate understanding of, or misunderstanding about, drugs as they make prescribing decisions in the course of their practice of medicine.


(Comment 23) One comment suggested deleting or clarifying Phase 1 Question 11 to refer to “type 1 or type 2 diabetes” rather than “other health conditions.” This comment also suggested revising Phase 1 Questions 12 to 16 to indicate they are focused on diabetic patients.


(Response 23) We have deleted Question 11 and have revised the other items to refer specifically to type 2 diabetes to improve question clarity.


(Comment 24) One comment suggested deleting or revising Phase 1 Question 10 to read “[Drug X] is approved for helping people without diabetes lose weight.”


(Response 24) We have deleted this question.


(Comment 25) One comment recommended deleting Phase 1 Questions 17 to 23 and Questions 35 to 38 because responses could be influenced by many factors and it is unclear how these questions relate to the study objectives.


(Response 25) These items measure perceived efficacy and attitude toward the drug. Attitude toward the drug and perceived efficacy can influence other outcomes such as the intention to take the drug or mention it to the doctor. Thus, we believe it is important to assess these variables. Given that we are randomizing participants to experimental conditions, we expect that differences between experimental conditions are due to the experimental manipulations rather than to participants’ backgrounds and experiences. Additionally, we included several variables to measure participants’ experience with diabetes and weight loss, as well as with medications for these conditions. If these variables are related to perceived efficacy and attitude toward the drug, we plan to include them as covariates in analyses.


(Comment 26) One comment suggested deleting Phase 1 Questions 32 to 34 because these questions ask about perceived risks and side effects that are not within the stated study objectives.


(Response 26) The goal of the study is to examine the impact of the presence of the comparative claim and type of disclosures; it is possible for participants to form different (and potentially distorted) risk perceptions based on the presence or absence of the comparative claim or type of disclosure. Assessing this outcome will allow us to determine whether risk perceptions vary based on exposure to study manipulations.


(Comment 27) One comment suggested deleting or revising Phase 2 Questions 4 to 11 and Questions 14 to 18 because participants will not be able to evaluate the safety and efficacy of, or make decisions about, their intended course of action related to the fictitious drug.


(Response 27) The promotional material will include information on primary and secondary endpoints as well as an important safety information section. We acknowledge that in a clinical setting patients and HCPs may use additional information. However, the intent of these items is to understand whether exposure to different types of information related to the comparative claim and disclosure results in different comprehension or behavioral intention. All participants will have the same level of information regarding the fictitious drug with the only difference being the manipulated content. So, we would expect that all participants will be equally informed about the fictitious drug and differences between conditions could be attributed to the manipulations. Items 4 to 11 assess participant comprehension of promotional material.


(Comment 28) One comment suggested deleting all Phase 2 questions about the advertising statement, questions assessing participants’ understanding of how prescription drugs and biologic products work, familiarity with similar treatments, and attitudes about pharmaceutical companies; in particular, Questions 3, 27 to 30, and 36.


(Response 28) The answers to these questions may help contextualize differences between the experimental conditions. There is some evidence that prior attitudes toward prescription drugs and pharmaceutical companies have an impact on attitudes and perceptions of particular prescription drugs and DTC ads (Ref. 13). Question 3 assesses attitudes toward the disclosure. For instance, it is possible that participants exposed to a certain disclosure may have more favorable attitudes towards the drug because they viewed the disclosure as trustworthy. Questions 27 to 30 and 36 will also help us contextualize the findings by understanding participants’ prior beliefs about prescription drugs, biosimilars, and pharmaceutical companies that may influence their responses and how they process the disclosure, in which case we would include them as controls in our analyses.


(Comment 29) One comment suggested moving Phase 2 Questions 27 to 38 to the beginning of the questionnaire, before the participant views the stimuli.


(Response 29) These questions are included to contextualize the findings and obtain an understanding of participants’ prior beliefs and perceptions about biosimilars and more broadly prescription drug promotion. We ask these questions after the main study outcomes are assessed so that we do not contaminate participants’ thoughts and perceptions of the promotional material. In addition, we do not want to prime the participants in the control condition (who are not told the drug is a biosimilar) to think the drug is a biosimilar, which would be equivalent to one of the other study conditions.


(Comment 30) One comment suggested adding a response option to capture a neutral or “no reaction” response to questions.


(Response 30) There are benefits and drawbacks to including a neutral or “no reaction” response in survey research, and the decision to use a neutral mid-point depends on the goal of the measures. For items assessing comprehension of disclosure language, we include a “do not know” option as this response would indicate some level of uncertainty about the meaning of the disclosure, which is meaningful and actionable information. However, when assessing perceptions and attitudes towards disclosures, our objective is to force a selection and have participants choose a leaning towards agreement or disagreement with the statement. Inclusion of a neutral response option in these instances could potentially encourage “satisficing”--cuing participants to select a neutral response under uncertainty because it is offered (Ref. 14).


(Comment 31) One comment suggested clarifying Phase 2 Question 28 to make clear it refers to the approved uses of biosimilars, not health conditions generally.


(Response 31) We have removed this item from our survey.


(Comment 32) One comment suggested revising Phase 2 Question 18 to ask about safety and efficacy separately because they may introduce bias if located in the same items.


(Response 32) We acknowledge safety and efficacy are separate issues, and we assess beliefs about safety and efficacy separately in Questions 5 to 8. However, because biosimilars have no clinically meaningful differences in safety, purity, or potency (safety and effectiveness) from their reference product, we are also interested in the impact of the disclosure statement on participants’ perceptions of safety and efficacy as a whole. Given this, we do not believe this question will introduce bias.


(Comment 33) One comment suggested either deleting or revising questions about the biosimilar disclosure to make clear what “same types of sources” means.


(Response 33) The wording of the biosimilar disclosure statement was crafted using terminology from FDA’s Biosimilar Basics Patient Materials (https://www.fda.gov/drugs/biosimilars/patient-materials), and we tested its meaning during our in-depth cognitive interviews. Both the consumer and provider groups sufficiently understood this statement.


(Comment 34) One comment suggested only asking Phase 2 Question 17 of participants who are currently receiving a biologic.


(Response 34) The intent of the question is to understand whether participants would ask their doctor to switch their medication after viewing the ad. We provided a hypothetical scenario and asked participants to answer this question as if they were taking the reference medication or another prescription medication to treat RA. This question would not be feasible among only those with RA who are receiving a biologic, given the prevalence of RA in the population (i.e., 0.6 percent) as we only expect to have a few individuals diagnosed with RA, if any.





External Reviewers


In addition to public comment, OPDP sent materials to, and received comments from, two external peer reviewers in 2020. These individuals are:


1. Jisu Huh, Ph.D., Professor, University of Minnesota, Hubbard School of Journalism and Mass Communication.


2. Kim Sheehan, Ph.D., Professor, Advertising and Brand Responsibility, University of Oregon, School of Journalism and Communication. 


  9. Explanation of Any Payment or Gift to Respondents


For the pretest and main study, HCPs will be provided an honorarium of $30.00, and consumers will be provided points valued at approximately $2.00. The incentives for HCPs are lower than average for this population. The incentive is an effective method of drawing attention to the study and gaining cooperation for completing it. It is not intended as a payment for respondents’ time but rather as a means of increasing response rates.

Following OMB’s “Guidance on Agency Survey and Statistical Information Collections,” we offer the following justification for our use of these incentives.

Data quality: Historically, physicians are one of the most difficult populations to survey, partly because of the demands on their professional time. Consequently, incentives assume an even greater importance with this group. Several studies have discussed the challenges of conducting research with HCPs and have concluded that offering substantial incentives is necessary to attain high response rates (see Refs. 15-17).


Recruiting physicians to participate in research has been shown to be difficult for reasons related primarily to the time burden (Ref. 18). Physicians’ time is limited and, thus, quite valuable. Cash incentives, rather than nonmonetary gifts or lottery entries, can help improve response rates and survey completion rates (Refs. 19-22). A meta-analysis on methodologies for improving response rates in physician surveys examined 21 studies published between 1981 and 2006 that investigated the effect of monetary incentives on response rates in surveys of physicians. The authors found that the odds of responding to a survey with an incentive were 2.13 times greater than the odds of responding to a survey without an incentive (Ref. 23). Martins and colleagues conducted a review of published oncology-focused studies to investigate methods for improving response rates. Their meta-analysis also showed that monetary incentives were effective at increasing response rates (Ref. 24). Previous research also suggests that providing incentives may help reduce sampling bias by increasing participation rates among individuals who are typically less likely to participate in research (such as primary care physicians or physician specialists; e.g., Refs. 25-26) and by ensuring participation from a cross section of physicians, which will improve data quality by improving validity and reliability.



  10. Assurance of Confidentiality Provided to Respondents

No personally identifiable information will be sent to FDA. Data from completed surveys will be compiled into an SPSS or Excel data set by the vendor and sent to RTI for analysis with no personally identifiable information (PII). All information that can identify individual respondents will be maintained by the subcontractor in a form that is separate from the data provided to FDA. The information will be kept in a secured fashion that will not permit unauthorized access. Confidentiality of the information submitted is protected from disclosure under sections 552(a) and (b) of the Freedom of Information Act (FOIA) (5 U.S.C. 552(a) and (b)) and by part 20 of the Agency’s regulations (21 CFR part 20). These methods will all be reviewed by FDA’s Institutional Review Board prior to collecting any information.


All participants will be assured that the information they provide will be used only for research purposes and their responses will be kept private to the extent allowable by law. In addition, the informed consent (Appendix A) includes information explaining to participants that their information will be kept confidential, their answers to screener and survey questions will not be shared with anyone outside the research team and their names will not be linked to their responses. Participants will be assured that the information obtained from the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.


The panel includes a privacy policy that is easily accessible from any page on the site. A link to the privacy policy will be included on all survey invitations. The panel complies with established industry guidelines and states that members’ personally identifiable information will never be rented, sold, or revealed to third parties except in cases where required by law. These standards and codes of conduct comply with those set forth by the American Marketing Association, the Council of American Survey Research Organizations, and others. All Dynata employees and contractors are required to take yearly security awareness and ethics training, which is based on these standards.

All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products). Upon final delivery of the data files to RTI and completion of the project, Dynata will, upon request, destroy all study records, including data files.


11. Justification for Sensitive Questions


This data collection will not include sensitive questions. The complete list of questions is available in Appendix B.


12. Estimates of Annualized Burden Hours and Costs


12a. Annualized Hour Burden Estimate

FDA estimates the burden of this collection of information as follows:

Table 1. Estimated Annual Reporting Burden1

Activity | No. of respondents | No. of responses per respondent | Total annual responses | Average burden per response | Total hours
Phase 1 Pretest screener (HCPs) | 278 | 1 | 278 | 0.08 (5 min) | 22.24
Phase 1 Pretest screener (consumers) | 278 | 1 | 278 | 0.08 (5 min) | 22.24
Phase 1 Pretest completes (HCPs) | 139 | 1 | 139 | 0.33 (20 min) | 45.87
Phase 1 Pretest completes (consumers) | 139 | 1 | 139 | 0.33 (20 min) | 45.87
Phase 2 Pretest screener (HCPs) | 476 | 1 | 476 | 0.08 (5 min) | 38.08
Phase 2 Pretest screener (consumers) | 476 | 1 | 476 | 0.08 (5 min) | 38.08
Phase 2 Pretest completes (HCPs) | 238 | 1 | 238 | 0.33 (20 min) | 78.54
Phase 2 Pretest completes (consumers) | 238 | 1 | 238 | 0.33 (20 min) | 78.54
Phase 1 Main study screener (HCPs) | 990 | 1 | 990 | 0.08 (5 min) | 79.20
Phase 1 Main study screener (consumers) | 990 | 1 | 990 | 0.08 (5 min) | 79.20
Phase 1 Main study completes (HCPs) | 495 | 1 | 495 | 0.33 (20 min) | 163.35
Phase 1 Main study completes (consumers) | 495 | 1 | 495 | 0.33 (20 min) | 163.35
Phase 2 Main study screener (HCPs) | 792 | 1 | 792 | 0.08 (5 min) | 63.36
Phase 2 Main study screener (consumers) | 792 | 1 | 792 | 0.08 (5 min) | 63.36
Phase 2 Main study completes (HCPs) | 396 | 1 | 396 | 0.33 (20 min) | 130.68
Phase 2 Main study completes (consumers) | 396 | 1 | 396 | 0.33 (20 min) | 130.68
Total | 7,608 | | 7,608 | | 1,243

1 There are no capital costs or operating and maintenance costs associated with this collection of information.



Note: These estimates are based on FDA’s and the contractor’s experience with previous consumer studies. With online surveys, several participants may be in the process of completing the survey at the time that the total target sample is reached. Those participants will be allowed to complete the survey, which can result in the number of valid completes exceeding the target number. With this in mind, we have included an additional 10% over our target number of valid completes to account for some overage.
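
For illustration only, the following sketch reproduces the arithmetic behind Table 1 (total hours = number of respondents x responses per respondent x average burden per response), using the figures reported in the table.

# Recompute the Table 1 burden totals; all figures are taken from Table 1.
# 5 minutes is rounded to 0.08 hours and 20 minutes to 0.33 hours, as in the table.
ROWS = [
    # (activity, respondents, burden hours per response)
    ("Phase 1 pretest screeners (HCPs + consumers)", 278 + 278, 0.08),
    ("Phase 1 pretest completes (HCPs + consumers)", 139 + 139, 0.33),
    ("Phase 2 pretest screeners (HCPs + consumers)", 476 + 476, 0.08),
    ("Phase 2 pretest completes (HCPs + consumers)", 238 + 238, 0.33),
    ("Phase 1 main study screeners (HCPs + consumers)", 990 + 990, 0.08),
    ("Phase 1 main study completes (HCPs + consumers)", 495 + 495, 0.33),
    ("Phase 2 main study screeners (HCPs + consumers)", 792 + 792, 0.08),
    ("Phase 2 main study completes (HCPs + consumers)", 396 + 396, 0.33),
]

total_responses = sum(n for _, n, _ in ROWS)
total_hours = sum(n * burden for _, n, burden in ROWS)
print(f"Total annual responses: {total_responses}")   # 7,608
print(f"Total burden hours: {round(total_hours)}")     # 1,242.64, reported as 1,243
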

13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs

There are no capital, start-up, operating or maintenance costs associated with this information collection.

14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the collection of data is $650,830 ($130,166 per year for 5 years). This includes the costs paid to the contractors to program the study, draw the sample, collect the data, and create a database of the results. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the data, and to draft a report ($75,000 over 5 years).


15. Explanation for Program Changes or Adjustments


This is a new data collection.

16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Part B for detailed information on the design, research questions, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
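
For illustration only, the sketch below shows how a conventional 4 x 2 factorial analysis of variance of a Phase 1 outcome could be specified. The variable names and data file are hypothetical placeholders; the actual design, research questions, and analysis plan are described in Part B.

# Illustrative 4 x 2 factorial ANOVA for a Phase 1 outcome.
# The column names ("perceived_efficacy", "disclosure", "comparative", "diabetes_diagnosis")
# and the CSV file are hypothetical placeholders, not the study's actual analysis files.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("phase1_responses.csv")  # hypothetical analysis file

# Main effects of disclosure (4 levels) and comparative claim (2 levels), plus their interaction
model = smf.ols("perceived_efficacy ~ C(disclosure) * C(comparative)", data=df).fit()
print(anova_lm(model, typ=2))

# Covariate-adjusted version (e.g., controlling for self-reported diagnosis), consistent with
# the plan to include health and medical covariates where appropriate
adjusted = smf.ols(
    "perceived_efficacy ~ C(disclosure) * C(comparative) + C(diabetes_diagnosis)", data=df
).fit()
print(adjusted.summary())
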

Table 2. Project Time Schedule

Task | Estimated Number of Weeks after OMB Approval
Pretest data collected | 13 weeks
Main study data collected | 42 weeks
Final results report completed | 54 weeks
Manuscript submitted for internal review | 71 weeks
Manuscript submitted for peer-review journal review | 91 weeks

17. Reason(s) Display of OMB Expiration Date is Inappropriate

No exemption is requested.

18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.

References

  1. Andrews, J.C. (2011). “Warnings and Disclosures.” In: Communicating Risks and Benefits: An Evidence-Based User’s Guide. Fischhoff, B., N.T. Brewer, and J.S. Downs (Eds.). FDA: Silver Spring, MD, pp. 149-161.

  2. Russo France, K. and P. Fitzgerald Bone (2005). “Policy Makers’ Paradigms and Evidence from Consumer Interpretations of Dietary Supplement Labels.” Journal of Consumer Affairs, 39(1), 27-51.

  3. Mason, M.J. and D.L. Scammon (2011). “Unintended Consequences of Health Supplement Information Regulations: The Importance of Recognizing Consumer Motivations.” Journal of Consumer Affairs, 45(2), 201-223.

  4. Betts, K.R., K.J. Aikin, V. Boudewyns, et al. (2017). “Physician Response to Contextualized Price-Comparison Claims in Prescription Drug Advertising.” Journal of Communication in Healthcare, 10(3), 195-204.

  5. Betts, K.R., V. Boudewyns, K.J. Aikin, et al. (2018). “Serious and Actionable Risks, Plus Disclosure: Investigating an Alternative Approach for Presenting Risk Information in Prescription Drug Television Advertisements.” Research in Social and Administrative Pharmacy, 14(10), 951-963.

  6. Sullivan, H.W., A.C. O’Donoghue, K.T. David, et al. (2018). “Disclosing Accelerated Approval on Direct-to-Consumer Prescription Drug Websites.” Pharmacoepidemiology and Drug Safety, 27(11), 1277-1280.

  7. Kaiser Family Foundation (2020). Professionally Active Specialist Physicians by Field. Retrieved from https://www.kff.org/other/state-indicator/physicians-by-specialty-area.

  8. Corbin, J.C., V.F. Reyna, R.B. Weldon, et al. (2015). “How Reasoning, Judgment, and Decision Making Are Colored by Gist-Based Intuition: A Fuzzy-Trace Theory Approach.” Journal of Applied Research in Memory and Cognition, 4(4), 344-355. https://doi.org/10.1016/j.jarmac.2015.09.001.

  9. Bornstein, R.F. (1989). “Exposure and Affect: Overview and Meta-Analysis of Research, 1968-1987.” Psychological Bulletin, 106(2), 265.

  10. Bornstein, R.F. and P.R. D’Agostino (1994). “The Attribution and Discounting of Perceptual Fluency: Preliminary Tests of a Perceptual Fluency/Attributional Model of the Mere Exposure Effect.” Social Cognition, 12(2), 103-128.

  11. Friedman, L.M., C.D. Furberg, and D.L. DeMets (1998). Fundamentals of Clinical Trials. New York, NY: Springer Science+Business Media.

  12. Fisher, R.A. (1937). The Design of Experiments. Edinburgh, United Kingdom: Oliver and Boyd.

  13. Hausman, A. (2008). “Direct-to-Consumer Advertising and Its Effect on Prescription Requests.” Journal of Advertising Research, 48(1), 42-56.

  14. Krosnick, J.A. (2018). “Questionnaire Design.” In: The Palgrave Handbook of Survey Research (pp. 439-455). Palgrave Macmillan, Cham.

  15. Keating, N.L., Zaslavsky, A.M., Goldstein, J., West, D.W., and Ayanian, J.Z. (2008). Randomized trial of $20 versus $50 incentives to increase physician survey response rates. Medical Care, 46(8), 878-881.

  16. Dykema, J., Stevenson, J., Day, B., Sellers, S.L., and Bonham, V.L. (2011). Effects of incentives and prenotification on response rates and costs in a national web survey of physicians. Evaluation & the Health Professions, 34(4), 434-447.

  17. Ziegenfuss, J.Y., Burmeister, K., James, K.M., Haas, L., Tilburt, J.C., and Beebe, T.J. (2012). Getting physicians to open the survey: Little evidence that an envelope teaser increases response rates. BMC Medical Research Methodology, 12(41).

  18. Asch, S., Connor, S.E., Hamilton, E.G., and Fox, S.A. (2000). Problems in recruiting community-based physicians for health services research. Journal of General Internal Medicine, 15(8), 591-599.

  19. Epley, N. and Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17(4), 311-318.

  20. Höhne, J.K. and Krebs, D. (2017). Scale direction effects in agree/disagree and item-specific questions: A comparison of question formats. International Journal of Social Research Methodology, 21(1), 91-103.

  21. Krosnick, J.A. and Presser, S. (2010). Question and questionnaire design. In: Handbook of Survey Research (pp. 263-314). Emerald Group Publishing Limited.

  22. Saris, W.E., Revilla, M., Krosnick, J.A., and Shaeffer, E.M. (2010). Comparing questions with agree/disagree response options to questions with item-specific response options. Survey Research Methods, 4, 61-79.

  23. VanGeest, J., Johnson, T., and Welch, V. (2007). Methodologies for improving response rates in surveys of physicians: A systematic review. Evaluation and the Health Professions, 30, 303-321.

  24. Martins, Y., Lederman, R., Lowenstein, C., et al. (2012). Increasing response rates from physicians in oncology research: A structured literature review and data from a recent physician survey. British Journal of Cancer, 106(6), 1021-1026.

  25. Converse, J.M. and Presser, S. (1986). Survey Questions: Handcrafting the Standardized Questionnaire (No. 63). SAGE Publications.

  26. DeVellis, R.F. (2016). Scale Development: Theory and Applications (Vol. 26). SAGE Publications.







