United States Food and Drug Administration
Disclosures of Descriptive Presentations in Professional Oncology Prescription Drug Promotion
OMB Control No. 0910-NEW
SUPPORTING STATEMENT Part A:
Justification
1. Circumstances Making the Collection of Information Necessary
Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.
Under the FD&C Act and implementing regulations, promotional labeling and advertising about prescription drugs are generally required to be truthful and non-misleading and to reveal facts material to the presentations made about the product being promoted (see sections 502(a) and (n), and 201(n) of the FD&C Act (21 U.S.C. 352(a) and (n), and 321(n)); see also 21 CFR 202.1). As part of the ongoing evaluation of FDA's regulations in this area, FDA is proposing to study the impact of disclosures as they relate to presentations of preliminary and/or descriptive scientific and clinical data in promotional labeling and advertising for oncology products. The use of disclosures is one method of communicating information to health care professionals about scientific and clinical data, the limitations of those data, and the practical utility of that information for use in treatment. These disclosures may influence prescriber comprehension and decision making, and may affect what treatments prescribers choose for their patients and how they prescribe them.
Pharmaceutical companies market directly to physicians through means that include publishing advertisements in medical journals, staffing exhibit booths at physician meetings and events, sending unsolicited promotional materials to doctors' offices, and having pharmaceutical representatives make presentations ("detailing") (Ref. 1). Research suggests that detail aids sometimes contain carefully extracted data from clinical studies that, taken out of context, can exaggerate the benefits of a drug (Ref. 2) or contribute to physicians prescribing the drug for an inappropriate patient population.
Promotional labeling and advertising for cancer drugs deserve specific attention. Oncology drugs represented 26 percent of the 649 compounds under clinical trial investigation from 2006 to 2011 (Ref. 3). The past decade has seen a dramatic rise in the number of oncology drugs brought to market. In the past 18 months, over 22% of new drug approvals at FDA were new cancer drugs. In that time period, FDA approved 16 cancer drugs as new molecular entities or new therapeutic biologics out of a total of 72 (this does not include approvals of benign hematology products or biological license application approvals of blood reagents, or assays and anti-globulin products used in testing kits) (Refs. 4-5). Although overall survival remains the gold standard for demonstrating the clinical benefit of a cancer drug, several additional endpoints, including progression-free survival, disease-free or recurrence-free survival, and durable response rate (including hematologic response endpoints), are accepted for either regular or accelerated approval depending on the magnitude of effect, safety profile, and disease context (Ref. 6). In addition to the endpoints upon which FDA approval of these products may be based, pharmaceutical companies typically assess many other endpoints to further explore the effects of their products. Some trials are designed to allow for formal statistical analyses of these additional endpoints; however, in many cases these endpoints are strictly exploratory and support only the reporting of descriptive results. For clinicians who are not specifically trained in clinical trial design, interpreting these endpoints may be challenging. Pharmaceutical companies invest heavily in the development and distribution of promotional materials to make oncologists aware of favorable clinical trial results.
When communicating scientific and clinical data, a specific statement that modifies or qualifies a claim (referred to for the purposes of this document as a disclosure) could be used to convey the limitations of the data and practical utility of the information for treatment. Much of the prior research on disclosures in this topic area has been limited to the dietary supplement arena with consumers (Refs. 7-10). Disclosures in professional pieces could influence prescriber comprehension as well as subsequent decision making; however, no published data exist regarding how prescribers use and understand scientific claims in conjunction with qualifying disclosures.
The proposed study seeks to address the following research questions:
Do disclosures mitigate potentially misleading presentations of preliminary and/or descriptive data in oncology drug product promotion?
Does the language (technical, non-technical) of the disclosure influence the effectiveness of the disclosure?
Does the presence of a general statement about the clinical utility of the data in addition to a specific disclosure influence processing of claims and disclosures?
Do primary care physicians (PCPs) and oncologists differ in their processing of claims and disclosures about preliminary and/or descriptive data?
Which disclosures do physicians prefer?
2. Purpose and Use of the Information Collection
The purpose of this project is to investigate how healthcare professionals (HCPs) understand the presentation of data in oncological promotional pieces and whether disclosures containing additional context assist them in understanding the information. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the risks and benefits of prescription drugs to HCPs as clearly and usefully as possible. This study will inform FDA about the usefulness of disclosures as a way to present important contextual information in the oncology field.
3. Use of Improved Information Technology and Burden Reduction
Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants will self-administer the survey via a computer, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping the written parts of surveys to less than 25 minutes in both the pretests and main study.
4. Efforts to Identify Duplication and Use of Similar Information
Although the literature revealed a rich background on which to base the current research, we found no studies that have examined the issues we propose to study.
OPDP is also currently proposing research that will investigate the use of disclosures over a wider range of medical conditions and with both HCPs and consumers (Docket No. FDA-2017-N-0558). Although both studies investigate disclosures as a method for conveying important contextual information, they examine different types of disclosures in different populations. We believe the two studies will collectively provide FDA with valuable information about the use of disclosures as a method for ensuring the safe and effective use of prescription drugs.
5. Impact on Small Businesses or Other Small Entities
No small businesses will be involved in this data collection.
6. Consequences of Collecting the Information Less Frequently
The proposed data collection is one-time only. There are no plans for successive data collections.
7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for this collection of information.
8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
In accordance with 5 CFR 1320.8(d), FDA published a 60-day notice for public comment in the FEDERAL REGISTER of June 19, 2017 (82 FR 27845). Comments received along with our responses to the comments are provided below. Comments that are not PRA-relevant or do not relate to the proposed study are not included. For brevity, some public comments are paraphrased and therefore may not reflect the exact language used by the commenter. We assure commenters that the entirety of their comments was considered even if not fully captured by our paraphrasing. The following acronyms are used here: FRN = Federal Register Notice; DTC = direct-to-consumer; HCP = healthcare professional; PCP = primary care physicians; FDA = Food and Drug Administration; OPDP = FDA’s Office of Prescription Drug Promotion.
The first public comment responder (regulations.gov tracking number 1k1-8xz7-mwcd) included eight individual comments, to which we have responded.
Comment 1: “It is unclear why FDA has chosen to conduct a study focused on oncology therapeutics and those medical specialists who prescribe such products.” [verbatim] The commenter added that all prescription drug products are treated the same under the regulations and that therapeutic intent and prescriber type do not invoke alternate regulatory approaches.
Response: As we described in the 60-day Federal Register notice, promotional activities for oncology drugs are frequent and pervasive, and promotional labeling and advertising for cancer drugs deserve specific attention. Oncology drugs represented 26% of the 649 compounds under clinical-trial investigation from 2006 to 2011 (Ref. 3). The past decade has seen a dramatic rise in the number of oncology drugs brought to market. In the past 18 months, over 22% of new drug approvals at FDA were new cancer drugs. In that time period, FDA approved 16 cancer drugs as new molecular entities or new therapeutic biologics out of a total of 72 (this does not include approvals of benign hematology products or biological license application approvals of blood reagents, or assays and anti-globulin products used in testing kits) (Refs. 4-5). Although overall survival remains the gold standard for demonstrating the clinical benefit of a cancer drug, several additional endpoints, including progression-free survival, disease-free or recurrence-free survival, and response rate (including hematologic response endpoints), are accepted for either regular or accelerated approval depending on the magnitude of effect, safety profile, and disease context (Ref. 6). In addition to the endpoints upon which FDA approval may be based, pharmaceutical companies typically assess many other endpoints to further explore the effects of their products. Some trials are designed to allow for formal statistical analyses of these additional endpoints; however, in many cases these endpoints are strictly exploratory and support only the reporting of descriptive results. For clinicians who are not specifically trained in clinical trial design, interpreting these endpoints can be challenging. Pharmaceutical companies invest heavily in the development and distribution of promotional materials to educate oncologists about favorable clinical trial results.
As another public comment responder (regulations.gov tracking number 1k1-8y3p-o6qb) notes, “We agree with the FDA’s assessment that dedicated research is necessary regarding oncology drug promotion, particularly given that a significant proportion of the drug development pipeline is comprised of oncology products…”
Comment 2: FDA should use a more targeted approach, including a monadic design with 100 oncologists split into two experimental conditions.
Response: To clarify the study design, we are testing two variations of the specific disclosure (technical, non-technical) and two variations of the general statement (present, absent), plus a control condition with no specific disclosure. Participants will be healthcare professionals drawn from one of two medical populations and will be randomly assigned to one condition. Because we are examining the effects of multiple variables and their interactions, power analyses indicate that the necessary sample sizes are larger than those suggested in this comment. We have, however, changed the study design based on multiple comments and will now examine only oncologists and primary care physicians. One plausible reading of this condition structure is sketched below.
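For illustration only, the following Python sketch enumerates one plausible reading of the condition structure described above. The treatment of the general statement within the control cell is an assumption, and the authoritative design is described in Part B.

    # Illustrative sketch of the condition structure; not the official Part B design.
    from itertools import product

    specific_disclosure = ["technical", "non-technical"]
    general_statement = ["present", "absent"]

    conditions = list(product(specific_disclosure, general_statement))
    conditions.append(("no specific disclosure (control)", "absent"))  # assumed control cell

    populations = ["oncologists", "primary care physicians"]
    cells = [(pop, *cond) for pop, cond in product(populations, conditions)]

    print(len(conditions), "experimental conditions")   # 5 under this reading
    print(len(cells), "population-by-condition cells")  # 10 under this reading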
Comment 3: The survey appears long—at 17 pages, it looks like it will take approximately 30-40 minutes to complete.
Response: We have tested the survey in-house with individuals unfamiliar with the research project, and it appears that this survey will take approximately 15 minutes to complete.
Comment 4: Instead of using recall as a measure, respondents should be allowed to have access to the materials while answering questions to better approximate their actual experiences.
Response: It is an open question as to whether having the materials in front of them better approximates actual HCP experiences. In past discussions with HCPs, some have reported that they do refer back to materials that sales representatives leave, and others report that they do not receive leave-behind materials or do not refer to them again. In any case, we have a mixture of recall and comprehension questions in our questionnaire. For the recall questions, respondents will not be able to access the materials. They will, however, be able to review the materials while answering the comprehension questions.
Comment 5: Why is FDA examining non-oncologists at all? Why is oncology being screened out for specialists in question SPECIALTY2?
Response: HCPs of all types are exposed to prescription drug promotion. Depending on location (e.g., rural areas) and type of clinical setting, some non-oncologists may have a need to consider oncologic prescription drugs to treat their patients. We agree that oncologists are the most relevant population to study in this research. However, we also want to know whether specific education and experience influence the processing of claims, data, and disclosures. Upon further review, we agree that nurse practitioners and physician assistants without oncology experience are not a necessary group to investigate to answer our particular research questions. We intend to use PCPs as a control group to understand whether specific advanced training influences the understanding of preliminary and/or descriptive oncology data. Some PCPs may have experience with oncology prescriptions, particularly in rural areas. We will not eliminate PCPs without oncology experience, but we will measure oncology prescribing experience and use this variable as a covariate in our studies.
Comment 6: FDA should screen for the prescribing of oncologic products.
Response: Although we do not intend to screen out physicians without oncology prescribing experience, we will measure this variable and use this information to determine whether it plays a role in the responses of PCPs.
Comment 7: From this point (ENDPOINT) responses may be based on the ability of respondents to recall information vs. the absence/presence of disclosures. If FDA continues with this design, the Agency should be prepared to control for this in the study design.
Response: Because this is an experimental design with random assignment to condition, any fatigue with questions that may affect the recall of information should be distributed evenly across conditions. Therefore, any differences would be the result of our manipulations, in this case, the presence and form of disclosures. We have given thought to the ordering of the questions so that the most important questions are asked at the beginning of the survey rather than toward the end.
Comment 8: The answer to this question (CAUTIOUS) may be influenced more by personal and subjective opinion vs. the content of the disclosure.
Response: Because of the experimental design with random assignment to condition, personal and subjective opinions should be evenly and randomly spread across experimental conditions. However, upon further review, we have determined that this question has limited utility and we will delete it.
The second public comment responder (regulations.gov tracking number 1k1-8y3p-o6qb) included one individual comment. They reported that they support the study specifically and OPDP’s overall research efforts generally, and they agree that oncology deserves special attention. We thank this commenter for taking the time to provide this comment to us.
The third public comment responder (regulations.gov tracking number 1k1-8y5u-5vp0) included eight individual comments, to which we have responded.
Comments 1 and 2: The commenter supports FDA social science research and this specific project, as well as the Disclosures study (Docket No. FDA-2017-N-0558). “FDA’s collective research indicates a considered, objective updating of the FDA’s advertising regulations, including the use of disclosures to prevent misleading claims in advertisements for oncology products, is timely….Enabling disclaimers would be one way to enable innovators to advertise new oncology therapeutics for their approved uses in ways which would be non-misleading.”
Response: Thank you for your support.
Comment 3: The commenter suggests making sure that primary care physicians and advanced practitioners have experience in the oncology field – otherwise, it seems useless to include less knowledgeable respondents whose answers are more speculative. Overall, they question whether advanced practitioners are appropriate for this study at all.
Response: We have removed advanced practitioners from the design. We will measure the oncology prescribing experience of the PCPs in our sample, but we will not eliminate those who do not have specific oncology training. One of our research questions is whether specific training and experience in oncology influences the understanding of preliminary oncology data. To do that, we need to include a group of practitioners who may not have specific training and experience in oncology, but who are licensed practitioners permitted by law to prescribe oncology drugs, and who, in some cases, may do so.
Comment 4: If the only data being presented for BENEFICIAL, EVIDENCE1 and EVIDENCE2 are the endpoints for the disclosure without presenting overall survival or more clinically validated data, we suggest removing these questions.
Response: The pieces include other clinically validated data as would be typical in an existing piece for an oncology indication.
Comment 5: Remove CONFUSING2 because it asks physicians to speculate.
Response: As this item is a perception measure, as opposed to an accuracy measure, it is reasonable to consider some level of speculation. Moreover, in cognitive testing, HCPs responded without difficulty.
Comment 6: For SCRIPT4, add an “I don’t know” option instead of instructing respondents to “make your best guess.”
Response: This item was cognitively tested and participants expressed no difficulty answering it.
Comment 7: Those who respond “not at all familiar” to FAMILIAR should skip BTKNOW1, BTKNOW2, and ACCEL.
Response: We agree with this comment. Those who respond “not at all familiar” to FAMILIAR will skip the three items mentioned above.
Comment 8: BTDV1 and BTDV2 present incomplete data and therefore it is unclear how this will be a useful question. The commenter suggests either adding an “I need more information” option or removing the question.
Response: These items present incomplete data, but we have provided enough data that HCPs should be able to make a choice. HCPs in cognitive testing exhibited no difficulty with the question. No existing data are available on perceptions of FDA’s “breakthrough” designation, and this item will provide at least rudimentary information. Please note that each respondent will see only one of the items. These items are carefully crafted to avoid order effects and alphabetical effects.
The fourth public commenter (regulations.gov tracking number 1k1-8y5u-koc0) included 15 individual comments, to which we have responded.
Comment 1 (summarized): The commenter is concerned with the Agency’s recent approaches to studies in this area. FDA has proposed to undertake projects in a variety of disparate topics without articulating a clear, overarching research agenda or adequate rationales on how the proposed research related to the goal of further protecting public health. Within the last year, the Agency has increased such efforts at an exponential pace. At times, FDA proposes new studies seemingly without fully appreciating its own previous research published on the Office of Prescription Drug Promotion (OPDP) website. Proposed studies are often unnecessary in light of existing data. The commenter suggests that the Agency publish a comprehensive list of its prescription drug advertising and promotion studies from the past five years and articulate a clear vision for its research priorities for the near future. Going forward, FDA should use such priorities to explain the necessity and utility of its proposed research and should provide a reasonable rationale for the proposed research.
Response: OPDP’s mission is to protect the public health by helping to ensure that prescription drug information is truthful, balanced, and accurately communicated, so that patients and health care providers can make informed decisions about treatment options. OPDP’s research program supports this mission by providing scientific evidence to help ensure that our policies related to prescription drug promotion will have the greatest benefit to public health. Toward that end, we have consistently conducted research to evaluate the aspects of prescription drug promotion that we believe are most central to our mission, focusing in particular on three main topic areas: advertising features, including content and format; target populations; and research quality. Through the evaluation of advertising features we assess how elements such as graphics, format, and disease and product characteristics impact the communication and understanding of prescription drug risks and benefits; focusing on target populations allows us to evaluate how understanding of prescription drug risks and benefits may vary as a function of audience; and our focus on research quality aims at maximizing the quality of research data through analytical methodology development and investigation of sampling and response issues.
Because we recognize that the strength of the data, and confidence in the robustness of the findings, are improved by results from multiple converging studies, we continue to develop evidence to inform our thinking. We evaluate the results from our studies within the broader context of research and findings from other sources, and this larger body of knowledge collectively informs our policies as well as our research program. Our research is documented on our homepage, which can be found at: https://www.fda.gov/aboutfda/centersoffices/officeofmedicalproductsandtobacco/cder/ucm090276.htm. The website includes links to the latest Federal Register Notices and peer-reviewed publications produced by our office. The website maintains information on studies we have conducted, dating back to a survey of DTC attitudes and behaviors conducted in 1999.
Comment 2: FDA should provide more detail about the study to stakeholders. “It is not clear from this description whether the study will yield useful information to evaluate whether disclosures provide appropriate contextual information in certain communications, whether such disclosures can be made more effective, and where the disclosures are necessary to ensure communications are truthful and non-misleading.”
Response: We have described the purpose of the study, the design, the population of interest, and have provided the questionnaire to numerous individuals upon request. These materials have proven sufficient for others to comment publicly, and for academic experts to peer-review the study successfully. Our full stimuli are under development during the PRA process. We do not make draft stimuli public during this time because of concerns that this may contaminate our participant pool and compromise the research.
Comment 3: The Agency should wait until it has completed its broader study on disclosures more generally. This study is duplicative of other studies.
Response: As we discussed in the 60-day Federal Register Notice, oncological products deserve specific attention as they account for nearly a quarter of new drug approvals and can involve the assessment of complicated endpoints. Moreover, they have specific disclosures that are unique to their products and deserve particular study. The other disclosures study (Docket No. FDA-2017-N-0558) will provide important information about a variety of disclosures in different medical conditions. One research study cannot answer all questions or study all aspects of an issue. These two studies will be complementary but not redundant. Please also refer to our response to comment 1 from the first commenter above.
Comment 4: Given that FDA grants approval based on certain preliminary and descriptive data, and that various limitations as to the underlying data must already be communicated to prescribers, there appears to be limited utility in researching disclosures regarding such data.
Response: We disagree that FDA grants approval on preliminary or descriptive data. The evidentiary standard is substantial evidence. While we recognize that no single development program can answer all questions about a particular drug in all populations, it is not accurate to describe the evidence supporting approval as descriptive or preliminary. What is potentially unique about oncology products is that many are approved under accelerated approval, in which the substantial evidence of benefit is on a surrogate endpoint that is reasonably likely to predict a clinical outcome. There remains some residual uncertainty regarding whether the effect on a surrogate endpoint will directly correlate with a clinical benefit; however, there is a requirement that confirmatory evidence of clinical benefit be obtained after approval. This residual uncertainty about the relationship of the surrogate endpoint to the clinical benefit is communicated to prescribers through the FDA-required labeling (e.g., inclusion of a limitation of use in the Indications and Usage section of the FDA-required labeling). In addition, reliance on a surrogate endpoint under accelerated approval is only done for serious diseases when the evidence indicates that the product provides a meaningful therapeutic benefit to patients over existing treatments (21 CFR §314.500).
However, this study does not focus on endpoints that formed the basis for approval. This study focuses on promotional displays of preliminary and/or descriptive data. It has not been established whether and how current disclosure-type additions to promotion are adequately communicating the limitations around this type of data, and that is the purpose of the current study. Given the importance of these limitations, it is crucial to make sure that promotional materials directed to prescribers convey limitations appropriately. Past research has shown that simply including a statement somewhere in a promotional piece does not make that statement automatically useful (Refs. 7-10).
Comment 5: FDA notes that, “[a]lthough overall survival remains the gold standard for demonstrating clinical benefit of a drug, several additional endpoints are accepted as surrogates . . . [including] disease-free survival, objective response rate, complete response rate, progression-free survival, and time to progression.” The Agency further states that “[f]or clinicians who are not specifically trained in clinical trial design, interpreting these endpoints may be challenging.” FDA does not cite any sources for this claim, and there is no basis for thinking that clinicians do not have a thorough understanding of the data limitations described in presentations of preliminary or descriptive scientific and clinical data. This is especially true of oncologists.
Response: This statement was not intended to be a claim, but rather a statement of concern. Studies report that physicians lack sufficient critical knowledge and skills to evaluate evidence-based medicine (EBM) and may be influenced by the way study results are presented (Refs. 11-13). FDA recently conducted a systematic review of research on prescribers’ training and critical appraisal skills related to clinical trials (Ref. 14). The study found that extant physician knowledge and skills regarding certain statistical concepts and trial designs were in the middle of the possible outcome score range, at levels below those considered mastery, even after interventions designed to increase knowledge and skills. Evidence suggested that clinical credentials affect understanding and use of clinical data. Physicians with formal training in biostatistics, epidemiology, clinical research, or EBM demonstrated higher levels of knowledge and appraisal skills than those with usual medical education and training.
Comment 6: The specific disclosures outlined by FDA include “clinical or statistical information related to the trial design, the statistical analysis plan of the trial, or any other material statistical or clinical information necessary for evaluation or interpretation of the data.” The breadth of the proposed specific disclosures appears burdensome, unnecessary, and overwhelming for the purposes of the proposed survey.
Response: These concepts were provided as examples of the types of information that may be necessary for the accurate evaluation or interpretation of the data. This statement was not meant to imply that all of these concepts would be included in disclosures used in this study.
Comment 7: PCPs and non-oncology mid-level practitioners will provide much less utility in their survey responses regarding such disclosures.
Response: We have changed the design. See previous comments and responses.
Comment 8: The Agency proposes to conduct its survey via electronic media. FDA should consider testing non-electronic media, including printed sales aids, as these forms are often reviewed by the proposed study subjects.
Response: To clarify, the stimuli presented will consist of mock print materials in .pdf format, administered via the Internet. Conducting the study in person would require a greater expenditure of resources without appreciable benefits.
Comment 9: The Agency should consider using a consistent sliding scale format for all survey responses. Just within pages 7-9 of the survey, FDA proposes numerous different schemes for survey responses: (1) “Not at all beneficial – Extremely beneficial;” (2) “Completely agree – Completely disagree;” (3) “No evidence – Strong (or conclusive) evidence;” (4) “Not at all complex – Extremely complex;” (5) “Not at all confusing – Extremely confusing;” and (6) additional responses in which subjects are asked to agree with certain statements. The variety in response options is confusing in format and could potentially introduce error. To the extent possible, FDA should make the response format consistent throughout the survey. Further, the Agency should ensure the sliding scale format consistently provides an odd number of responses to permit a “neutral” response. Certain questions (e.g., the IMPROVE question on page 7) provide six choices, not permitting a neutral response.
Response: Although one scale throughout would be easier for respondents, it will not necessarily provide better data. When a series of adjacent questions have the same response options, respondents may use a response mechanism known as anchoring and adjusting when reporting (Ref. 15). Respondents use their response to the initial survey question on a topic as the “cognitive anchor,” and then adjust up or down based on subsequent questions (Ref. 16). Anchoring and adjusting is more likely to occur when respondents have some level of uncertainty in their answer (Ref. 17), which would be expected in this study. Epley and Gilovich (Ref. 17) found that when respondents use an anchoring and adjusting strategy, they often adjust insufficiently: respondents start with the response they used for the first item and then search for the next value that is “close enough.” This can result in responses to adjacent items being more similar than they would be if respondents used item-specific scales (Not at all beneficial to Extremely beneficial; Not at all complex to Extremely complex). Using the same scale across all survey questions would artificially increase the correlations between questions, making it more difficult to identify differences based on the stimuli or respondent characteristics. Furthermore, use of item-specific scales compared with agree-disagree scales reduces primacy effects (the tendency of respondents to select options at the beginning of the list) (Ref. 18) and increases reliability and validity (Ref. 19). Careful consideration was given to using agree-disagree scales only when item-specific scales would not be appropriate (e.g., presenting patient vignettes) or would be unnecessarily complex (e.g., asking about “complex terminology, statistical terms, or jargon” or inquiring about “strong” evidence). An illustrative simulation of this inflation of correlations appears below.
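For illustration only, the following simulation sketch (with assumed noise levels and an assumed degree of insufficient adjustment) shows how an anchoring-and-adjusting response style can inflate the correlation between two otherwise unrelated items relative to item-specific responding. It is not part of the study's analysis plan.

    # Illustrative simulation; all parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    true_item1 = rng.normal(0, 1, n)   # respondent's true position on item 1
    true_item2 = rng.normal(0, 1, n)   # unrelated true position on item 2

    # Item-specific responding: each answer reflects only that item (plus noise).
    resp1 = true_item1 + rng.normal(0, 0.5, n)
    resp2_item_specific = true_item2 + rng.normal(0, 0.5, n)

    # Anchoring and adjusting: the second answer starts from the first answer
    # and adjusts only partway toward the respondent's true position on item 2.
    adjustment = 0.5  # assumed (insufficient) adjustment
    resp2_anchored = (1 - adjustment) * resp1 + adjustment * true_item2 + rng.normal(0, 0.5, n)

    print(round(np.corrcoef(resp1, resp2_item_specific)[0, 1], 2))  # near 0
    print(round(np.corrcoef(resp1, resp2_anchored)[0, 1], 2))       # substantially larger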
In terms of neutral points, given the focus of the questions, we believe that offering a neutral response option is not necessary to measure opinions and attitudes accurately. Consequently, our objective is to force a selection and have participants make at least a weak commitment in either a positive or negative direction. Of concern is that offering a neutral midpoint could potentially encourage “satisficing”—cuing participants to choose a neutral response because it is offered (Ref. 20). Additionally, providing a midpoint leads to the loss of information regarding the direction in which people lean (Ref. 21). Research has found that neither format (either with or without a neutral point) is necessarily better or produces more valid or reliable results (Ref. 22). Instead, it should be left to the researcher to determine the goals of the study. During cognitive testing, a majority of participants were satisfied with the response options and all participants felt comfortable choosing a response in the absence of a midpoint.
Use of a midpoint is an issue we have examined in previous studies and we determined that we achieve valid and reliable responses without a midpoint. To increase consistency with measures used in previous studies, and in support of the arguments presented above, we are opting to exclude a midpoint. Finally, if a participant does not feel that they can choose a response because of a lack of a neutral option, they will be able to skip the question.
Comment 10: In the BENEFICIAL question on page 7 of the survey, it is unclear what relevance the subject’s perception of clinical benefit of a drug has in studying FDA’s proposed research purpose.
Response: For prescription drug products, advertisers must ensure that both the benefits and limitations are appropriately conveyed. If limitations are not appropriately conveyed, viewers may have an inflated view of the benefits of the product, relative to its risks. This question investigates this issue.
Comment 11: In a study setting, subjects may be prone to read and pay attention to more or all of the information presented. Subjects also are more aware of the importance of their responses. The Agency should address what efforts it will take to avoid response bias by study subjects.
Response: We initially had this concern many years ago when OPDP began conducting research. However, since that time, we have seen no evidence of this bias. In fact, we often deal with the opposite problem—ensuring that respondents spend a minimum amount of time looking at mock materials. Moreover, cognitive testing participants have told us that they would not spend extra time on materials if they were answering questions without an interviewer in the room. Individuals, especially HCPs, are busy, and we believe our experiments do not overestimate the amount of time participants spend on actual materials.
Comment 12: Although the draft survey did not contain Informed Consent text, the Agency should ensure that this text does not state or imply that the survey is being conducted on behalf of the U.S. Food and Drug Administration. Such a statement could potentially influence subjects’ responses to study questions. Instead, this information might be provided at the conclusion of the study.
Response: We will ensure that all materials reference the U.S. Department of Health and Human Services rather than FDA.
Comment 13: The CAUTIOUS question on page 8 should be rephrased or omitted. Subjects may be biased to respond that they interpret all data with caution, regardless of the underlying scientific evidence presented in study stimuli.
Response: We agree with this comment and will delete this item.
Comment 14: The DECISIONS question on page 8 should be omitted. How survey participants “feel about the data presented” will be highly dependent on their external experience in making prescribing decisions. This question thus may lead to highly variable results.
Response: Because this is an experimental design with random assignment to conditions, external experiences in making prescribing decisions should be randomly scattered across experimental conditions. Thus, we will be able to infer causation to our manipulations of disclosures if we find any differences across experimental conditions. We believe the presence and form of the disclosure may influence this dependent variable and believe it will reveal important information about how HCPs process the data.
Comment 15: The PREFERENCE and PREFERWHY questions on page 16 should be moved to the beginning of the survey or omitted altogether. Subjects’ responses regarding their preference in sales aid disclosure statements will be heavily influenced by earlier portions of the survey.
Response: We have given careful thought to the ordering of the questions in the questionnaire. Because preference is of secondary interest to us, we have included it after our primary outcome variables, so that it does not influence them. We recognize that prior questions may influence these measures and will interpret them with that caveat in mind.
External Reviewers
In addition to public comment, OPDP solicited peer-review comments from researchers in fields relevant to the communication of DTC prescription drug information. We received responses and incorporated the thoughts of the following individuals:
Daniel J. Becker, MD, MS
Section Chief, Hematology/Oncology
VA-NYHHS, Manhattan Campus
Assistant Clinical Professor of Medicine
New York University School of Medicine
Andy S.L. Tan, PhD, MPH, MBA, MBBS
Assistant Professor
Center for Community-Based Research, Dana-Farber Cancer Institute
Department of Social and Behavioral Sciences, Harvard T.H. Chan School of Public Health
450 Brookline Avenue, LW633, Boston, MA 02215
9. Explanation of Any Payment or Gift to Respondents
The Toluna panel is composed of physicians who have opted into the panel in order to receive compensation for participating in surveys. Panelists receive an incentive for signing up and completing their first survey. Physicians then receive an incentive for each additional survey they complete. On average, specialists and primary care physicians from the Toluna panel are paid $65 and $55, respectively, for completing a 20-minute survey. Incentive amounts are determined by respondent type and the time commitment involved. Respondents receive compensation primarily by check, with the option to select electronic payment instead.
The incentives proposed for this study ($50 for specialists and $40 for primary care physicians) are lower than average, reflecting the fact that physicians are more willing to participate in surveys from Government agencies than in surveys from commercial organizations. The proposed incentives are the only remuneration offered to participants for completing the survey; participants do not receive additional points or awards. If no incentive were offered, it is unlikely that a sufficient number of physicians would agree to participate in the study.
Incentives are intended to recognize the time burden placed on participants, encourage their cooperation, and convey appreciation for their contributions to the research. Numerous empirical studies have established that incentives can significantly increase participation rates (Refs. 17-18). Based on the research team’s extensive experience conducting online survey research of a similar nature with the identified populations, we have learned that incentives are necessary to sufficiently attract participants and ensure participants are incentivized to carefully answer the survey items.
In reviewing OMB’s guidance on the factors that may justify provision of incentives to research participants, we have determined that the following principles apply:
Improved coverage of specialized respondents.
Physicians are a difficult population to recruit to participate in research, and their response rates have been decreasing in recent years. OMB offers a justification that supports the use of honoraria in this case: “to improve coverage of specialized respondents, rare groups, or minority populations” (Ref. 19).
Physicians are specialized respondents and require unique incentives to ensure participation. Numerous studies document difficulties in recruiting physicians to participate in research, primarily because of the time burden involved (Refs. 17 and 20). Physicians' time is limited and, thus, quite valuable. Cash incentives, rather than nonmonetary gifts or lottery entries, can help improve response rates and survey completion rates (Refs. 21-24). A meta-analysis of methodologies for improving response rates in physician surveys examined 21 studies published between 1981 and 2006 that investigated the effect of monetary incentives; the authors found that the odds of responding to a survey with an incentive were 2.13 times greater than the odds of responding to a survey without an incentive (Ref. 17). Martins et al. (2012) conducted a review of published oncology-focused studies to investigate methods for improving response rates; their meta-analysis also showed that monetary incentives were effective at increasing response rates (Ref. 25).
Additionally, higher honoraria have proven to be more successful than lower honoraria. For the Cost Comparison Study pretest (OMB Package #0910-0791), we found that among PCPs and endocrinologists receiving higher incentives (around $45–$60), response rates were 4 to 11 percentage points higher than when lower incentives ($10 or $15) were used (Ref. 26). Because providing a market-rate incentive tends to increase response rates, it also improves data quality. Previous research suggests that providing incentives may help reduce sampling bias by increasing response rates among individuals who are typically less likely to participate in research (such as primary care physicians or physician specialists; e.g., Refs. 27-28) and by ensuring participation from a cross section of physicians, which improves data quality by improving validity and reliability.
An honorarium of up to $100 was previously approved under recent OMB packages.
An honorarium of $50 for specialists for an online survey of similar length was previously approved under the OMB package #201605-0925. Below are higher incentives that have also been approved for online surveys of similar length.
$100 for PCPs and specialists for a 20-minute survey web mixed mode (OMB package #0990-0415)
$75 for specialists and $55 for primary care providers (OMB package #0910-0730)
$45 for PCPs and $60 for specialists (OMB Package #0910-0791)
According to item 76 in the Memorandum for the President’s Management Council, past experience can be used to justify a higher honorarium: “Agencies may be able to justify the use of incentives by relating past survey experience, results from pretests or pilot tests, or findings from similar studies. This is especially true where there is evidence of attrition and/or poor prior response rates” (Ref. 19).
An incentive will improve data quality by improving validity and reliability.
OMB’s guidance states that a “justification for requesting use of an incentive is improvement in data quality. For example, agencies may be able to provide evidence that, because of an increase in response rates, an incentive will significantly improve validity and reliability to an extent beyond that possible through other means” (Ref. 19).
There are only a limited number of physicians (particularly oncologists) in the online panel. Therefore, it is critical to maximize the number who respond to ensure sufficient statistical power to detect meaningful differences across experimental conditions. An underpowered study increases the chance of a Type II error, that is, of failing to detect true differences produced by the manipulations (Ref. 29). An illustrative power calculation appears below.
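For illustration only, the following sketch shows the kind of calculation involved. The effect size, number of conditions, and power target used here are assumptions, not the study's actual planning values, which are documented in Part B.

    # Illustrative power calculation with assumed inputs; not the study's actual power analysis.
    from statsmodels.stats.power import FTestAnovaPower

    analysis = FTestAnovaPower()
    total_n = analysis.solve_power(effect_size=0.15,  # assumed Cohen's f (small-to-medium)
                                   k_groups=5,        # assumed number of conditions
                                   alpha=0.05,
                                   power=0.80)
    print(round(total_n), "total respondents needed under these assumptions")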
The honoraria are intended to recognize the time burden placed on participants, encourage their cooperation, and to convey appreciation for contributing to this important study. The use of modest incentives is expected to enhance survey response rates and reduce nonresponse bias. Numerous studies have shown that incentives can reduce nonresponse bias for key subgroups. Relevant to the proposed study, Juster and Suzman (1995) found that high incentives ($100 per individual) reduced nonresponse bias for people with high incomes (Ref. 30).
In terms of studies using online panels, use of monetary incentives is particularly important as the use of such incentives has been found to increase initial response rates, convert refusals, and reduce subsequent attrition (Ref. 31).
This incentive level is consistent with those used in online studies conducted by the contractor (RTI) with the panel vendor.
Agencies may justify the use of incentives by “relating past survey experience” (Ref. 19). The contractor (RTI) and its online panel vendor are experts in their field. In their experience recruiting physicians, an honorarium of $40-$50 is the minimum amount needed to ensure successful recruitment and achieve high data quality. In their experience, offering a lower honorarium could result in longer fielding times and project delays, less attentive respondents (and thus more item nonresponse), lower response rates, and increased panel attrition.
10. Assurance of Confidentiality Provided to Respondents
No personally identifiable information will be sent to the FDA. The independent contractor will maintain all information that can identify individual respondents in a form separate from the data provided to the FDA. The information will be kept in a secured fashion that will not permit unauthorized access. Confidentiality of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)), and by part 20 of the agency’s regulations (21 CFR part 20). These methods will all be approved by the FDA’s Institutional Review Board, the Research Involving Human Subjects Committee (RIHSC), prior to collecting any information.
All respondents will be assured that the information will be used only for research purposes and will be kept private to the extent allowable by law, as detailed in the survey consent form. The survey instructions will include information explaining this and respondents will be assured that their answers to screener and survey questions will not be shared with anyone outside the research team and that their names will not be reported with responses provided. Respondents will be told that the information obtained from all the surveys will be reported in aggregate in a summary document so that details of individual questionnaires cannot be linked to a specific respondent.
Additionally, the Internet panel includes a privacy policy that is easily accessible from any page on the site. A summary of the privacy policy will be included on all survey invites. The panel complies with established industry guidelines and states that members’ personally identifiable information will never be rented, sold, or revealed to third parties, except in cases where required by law. These standards and codes of conduct comply with those set forth by the American Marketing Association, the Council of American Survey Research Organizations, and others. Further, all Toluna employees and contractors are required to take yearly security awareness and ethics training based on these standards.
All electronic data will be maintained in accordance with the Department of Health and Human Services’ ADP Systems Security Policy, as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. Also, all data will be maintained in accordance with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).
11. Justification for Sensitive Questions
This data collection will not include sensitive questions. The complete list of questions is available in Appendix B.
12. Estimates of Annualized Burden Hours and Costs
12a. Annualized Hour Burden Estimate
For both the pretests and main study, the questionnaire is expected to last no more than 30 minutes. This will be a one-time (rather than annual) collection of information. FDA estimates the burden of this collection of information as follows:
Table 1.—Estimated Annual Reporting Burden1

Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours
Pretesting | | | | |
Number to complete the screener | 150 | 1 | 150 | 0.03 (2 minutes) | 5
Number of completes | 90 | 1 | 90 | 0.33 (20 minutes) | 30
Main Study | | | | |
Number to complete the screener | 3,525 | 1 | 3,525 | 0.03 (2 minutes) | 106
Number of completes | 2,115 | 1 | 2,115 | 0.33 (20 minutes) | 698
Total hours | | | | | 839

1 There are no capital costs or operating and maintenance costs associated with this collection of information.
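As an illustration of how the hour figures in Table 1 are derived (total hours per row = number of respondents × responses per respondent × average burden per response in hours, with each row rounded up to the next whole hour), the following sketch reproduces the figures above. The round-up convention is inferred from the table, not stated by FDA.

    # Illustrative arithmetic for Table 1; the round-up convention is an inference.
    import math

    rows = [
        ("Pretest screener", 150, 1, 0.03),      # 0.03 hour (2 minutes)
        ("Pretest completes", 90, 1, 0.33),      # 0.33 hour (20 minutes)
        ("Main study screener", 3525, 1, 0.03),
        ("Main study completes", 2115, 1, 0.33),
    ]
    total = 0
    for activity, respondents, responses_each, hours_per_response in rows:
        hours = math.ceil(respondents * responses_each * hours_per_response)
        total += hours
        print(f"{activity}: {hours} hours")
    print(f"Total: {total} hours")  # 5 + 30 + 106 + 698 = 839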
13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs
There are no capital, start-up, operating, or maintenance costs associated with this information collection.
14. Annualized Cost to the Federal Government
The total estimated cost to the Federal Government for the collection of data is $699,452 ($174,863 per year for four years). This includes the costs paid to the contractors to manipulate the stimuli, program the study, draw the sample, collect the data, and create and analyze a database of the results. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the resultant data, and to draft a report ($97,000; 8 hours per week for three years).
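For illustration, a quick arithmetic check of the annualization stated above:

    # Illustrative check of the figures stated above.
    annual_cost = 174_863
    years = 4
    print(annual_cost * years)  # 699452, the total estimated cost to the Federal Government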
15. Explanation for Program Changes or Adjustments
This is a new data collection.
16. Plans for Tabulation and Publication and Project Time Schedule
Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Part B for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
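For illustration only, the following is a sketch of the kind of factorial analysis of variance described above, using hypothetical variable names and a hypothetical data file; the actual hypotheses and analysis plan are described in Part B.

    # Hypothetical analysis sketch; variable and file names are placeholders.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("survey_responses.csv")  # assumed layout: one row per respondent
    model = ols("comprehension_score ~ C(disclosure_type) * C(specialty)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # main effects and their interaction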
Table 2.—Project Time Schedule

Task | Estimated Number of Weeks after OMB Approval
Pretest completed | 20 weeks
Main study data collected | 45 weeks
Final methods report completed | 55 weeks
Final results report completed | 75 weeks
Manuscript submitted for internal review | 80 weeks
Manuscript submitted for peer-review journal publication | 92 weeks
17. Reason(s) Display of OMB Expiration Date is Inappropriate
No exemption is requested.
18. Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions to the certification.
References
1. Johar, K., "An Insider's Perspective: Defense of the Pharmaceutical Industry's Marketing Practices," Albany Law Review, 76:299-334, 2012-2013.
2. Wick, C., M. Egger, S. Trelle, et al., "The Characteristics of Unsolicited Clinical Oncology Literature Provided by Pharmaceutical Industry," Annals of Oncology, 18(9):1580-1582, 2007.
3. Fisher, J.A., M.D. Cottingham, and C.A. Kalbaugh, "Peering Into the Pharmaceutical 'Pipeline': Investigational Drugs, Clinical Trials, and Industry Priorities," Social Science & Medicine, 131:322-330, 2015.
4. Centerwatch, "FDA Approved Drugs for Oncology," https://www.centerwatch.com/drug-information/fda-approved-drugs/therapeutic--area/12/oncology (accessed on June 21, 2018).
6. Beaver, J.A., L.J. Howie, L. Pelosof, T. Kim, J. Liu, K.B. Goldberg, et al., “A 25-Year Experience of US Food and Drug Administration Accelerated Approval of Malignant Hematology and Oncology Drugs and Biologics: A Review,” JAMA Oncology, 4:849-856, 2018.
7. Dodge, T. and A. Kaufman, "What Makes Consumers Think Dietary Supplements Are Safe and Effective? The Role of Disclaimers and FDA Approval," Health Psychology, 26:513-517, 2007.
8. Dodge, T., D. Litt, and A. Kaufman, "Influence of the Dietary Supplement Health and Education Act on Consumer Beliefs About the Safety and Effectiveness of Dietary Supplements," Journal of Health Communication, 16(3):230-244, 2011.
9. Mason, M.J., D.L. Scammon, and X. Fang, "The Impact of Warnings, Disclaimers, and Product Experience on Consumers' Perceptions of Dietary Supplements," The Journal of Consumer Affairs, 41(1):74-99, 2007.
10. France, K.R. and P.F. Bone, "Policy Makers' Paradigms and Evidence From Consumer Interpretations of Dietary Supplement Labels," The Journal of Consumer Affairs, 39(1):27-51, 2005.
11. Ghosh, A.K. and K. Ghosh, “Translating Evidence-Based Information into Effective Risk Communication: Current Challenges and Opportunities,” The Journal of Laboratory and Clinical Medicine, 145(4): 171–180, 2005.
12. Harewood, G.C. and L.M. Hendrick, “Prospective, Controlled Assessment of the Impact of Formal Evidence-Based Medicine Teaching Workshop on Ability to Appraise the Medical Literature,” Irish Journal of Medical Science, 179(1): 91–94, 2010.
13. Fritsche, L., T. Greenhalgh, Y. Falck-Ytter, H.H. Neumayer, and R. Kunz, “Do Short Courses in Evidence Based Medicine Improve Knowledge and Skills? Validation of Berlin Questionnaire and Before and After Study of Courses in Evidence Based Medicine,” British Medical Journal, 325(7376): 1338–1341, 2002.
14. Kahwati, L., D. Carmondy, N. Berkman, H.W. Sullivan, K.J. Aikin, and J. DeFrank, “Prescribers’ Knowledge and Skills for Interpreting Research Results: A Systematic Review,” Journal of Continuing Education in the Health Professions, 37(2):129-136, 2017.
15. Tversky, A. and D. Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science, 185(4157):1124-1131, 1974.
16. Gehlbach, H. and S. Barge, “Anchoring and Adjusting in Questionnaire Responses,” Basic and Applied Social Psychology, 34(5):417-433, 2012.
17. VanGeest, J., T. Johnson, and V. Welch, “Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review,” Evaluation and the Health Professions, 30, 303-321, 2007.
18. Shettle, C., & G. Mooney, “Monetary incentives in U.S. government surveys,” Journal of Official Statistics, 15, 231–250, 1999.
19. Office of Management and Budget. “Questions and Answers When Designing Surveys for Information Collections,” 2006. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/pmc_survey_guidance_2006.pdf. Accessed December 18, 2018.
20. Asch, S., S.E. Connor, E.G. Hamilton, & S.A. Fox, “Problems in Recruiting Community-based Physicians for Health Services Research,” Journal of General Internal Medicine, 15(8), 591-599, 2000.
21. Epley, N. and T. Gilovich, “The Anchoring-and-Adjustment Heuristic: Why the Adjustments are Insufficient,” Psychological Science, 17(4):311-318, 2006.
22. Höhne, J.K. and D. Krebs, “Scale Direction Effects in Agree/Disagree and Item-Specific Questions: A Comparison of Question Formats,” International Journal of Social Research Methodology, 21(1):91-103, 2017.
23. Saris, W.E., M. Revilla, J.A. Krosnick, and E.M. Shaeffer, “Comparing Questions with Agree/Disagree Response Options to Questions with Item-Specific Response Options” Survey Research Methods, 4:61–79, 2010.
24. Krosnick, J.A. and S. Presser, “Question and Questionnaire Design,” In: Handbook of Survey Research. (pp. 263‒314). Bingley, United Kingdom: Emerald Group Publishing Limited, 2010.
25. Martins, Y., R. Lederman, C. Lowenstein, et al., “Increasing Response Rates from Physicians in Oncology Research: A Structured Literature Review and Data From a Recent Physician Survey,” British Journal of Cancer, 106(6), 1021-6, 2012.
26. Aikin, K., K. Betts, V. Boudewyns, A. Stine, & B. Southwell, “Physician responsiveness to survey incentives and sponsorship in prescription drug advertising research,” Annals of Behavioral Medicine, 50(Suppl), s251, 2016.
27. Converse, J.M. and S. Presser, Survey Questions: Handcrafting the Standardized Questionnaire, (No. 63). Thousand Oaks, CA: SAGE Publications, 1986.
28. DeVellis, R.F. Scale Development: Theory and Applications, (Vol. 26). Thousand Oaks, CA: SAGE Publications, 2016.
29. Cohen, J., P. Cohen, S.G. West, and L.S. Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, 2003.
30. Juster, F. T. & R. Suzman, “An Overview of the Health and Retirement Study,” Journal of Human Resources, 30, S7-S56, 1995.
31. Singer, E. & R.A. Kulka, “Paying Respondents for Survey Participation,” In M. Ver Ploeg, R. A. Moffitt, & C. F. Citro (Eds.), Studies of Welfare Populations: Data Collection and Research Issues, Washington, D.C.: National Academy Press, 2002.