Supporting Statement Part A - Study of Disclosures to HealthCare Providers 2021

Study of Disclosures to Health Care Providers Regarding Data that Do Not Support Unapproved Use of an Approved Prescription Drug

OMB: 0910-0900

United States Food and Drug Administration

Study of Disclosures to Health Care Providers Regarding Data that Do Not Support Unapproved Use of an Approved Prescription Drug

OMB Control No. 0910-NEW

SUPPORTING STATEMENT

Part A. Justification

  1. Circumstances Making the Collection of Information Necessary

Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA regulated products in carrying out the provisions of the FD&C Act.


The Office of Prescription Drug Promotion’s (OPDP) mission is to protect the public health by helping to ensure that prescription drug promotion is truthful, balanced, and accurately communicated. OPDP’s research program provides scientific evidence to help ensure that our policies related to prescription drug promotion will have the greatest benefit to public health. Toward that end, we have consistently conducted research to evaluate the aspects of prescription drug promotion that are most central to our mission. Our research focuses in particular on three main topic areas: advertising features, including content and format; target populations; and research quality. Through the evaluation of advertising features we assess how elements such as graphics, format, and disease and product characteristics impact the communication and understanding of prescription drug risks and benefits; focusing on target populations allows us to evaluate how understanding of prescription drug risks and benefits may vary as a function of audience; and our focus on research quality aims at maximizing the quality of our research data through analytical methodology development and investigation of sampling and response issues. This study will inform the first two topic areas, advertising features and target populations.


Because we recognize that the strength of data and confidence in the robustness of findings are improved by utilizing the results of multiple converging studies, we continue to develop evidence to inform our thinking. We evaluate the results from our studies within the broader context of research and findings from other sources, and this larger body of knowledge collectively informs our policies as well as our research program. Our research is documented on our homepage, which can be found at: https://www.fda.gov/aboutfda/centersoffices/officeofmedicalproductsandtobacco/cder/ucm090276.htm. The website includes links to the latest Federal Register notices and peer-reviewed publications produced by our office, and maintains information on studies we have conducted, dating back to a survey on direct-to-consumer (DTC) advertisements conducted in 1999.


The Distributing Scientific and Medical Publications on Unapproved New Uses — Recommended Practices revised draft guidance (2014; Ref. 1) recommends that scientific and medical journal articles that discuss unapproved uses of approved drug products be disseminated with a representative publication that reaches contrary or different conclusions, when such information exists. Similarly, the Responding to Unsolicited Requests for Off-Label Information About Prescription Drugs and Medical Devices draft guidance (2011; Ref. 2) recommends that when conclusions of articles or texts that are disseminated in response to an unsolicited request have been specifically called into question by other articles or texts, a firm should disseminate representative publications that reach contrary or different conclusions regarding the use at issue.


Pharmaceutical firms sometimes choose to disseminate publications to healthcare providers (HCPs) that include data that appear to support an unapproved use of an approved product. At the same time, published data that are not supportive of that unapproved use may also exist. For example, unsupportive published information could describe an increased risk of negative outcomes (e.g., death, relapse) from the unapproved use of the approved product, suggesting that the unapproved use does not have a positive benefit-risk ratio. The purpose of this research is to examine physicians’ perceptions and behavioral intentions about an unapproved new use of an approved prescription drug when made aware of other data that are not supportive of the unapproved use. This research will also evaluate the effectiveness of various disclosure approaches for communicating the unsupportive information. We will use the results of this research to better understand: (1) physicians’ perceptions of an unapproved use of a prescription drug; (2) physicians’ perceptions about an unapproved use of an approved prescription drug when they are aware of the existence of unsupportive information about it; (3) physicians’ perceptions of disclosures referencing the existence of unsupportive information about that particular use; and (4) the utility and effectiveness of various approaches to communicating this information. In particular, we plan to examine how different approaches to the communication of unsupportive information affect physicians’ thoughts and attitudes about the unapproved use.
Five approaches will be examined: (1) provision of the unsupportive data in the form of a representative publication; (2) a disclosure that summarizes, rather than provides, the unsupportive data and includes a citation to the representative publication; (3) a disclosure that does not provide or summarize the unsupportive data but acknowledges that unsupportive data exist and includes a citation to the representative publication; (4) a general disclosure that does not provide or summarize the unsupportive data but acknowledges that unsupportive data may exist, without conceding that such data do exist; or (5) nothing, i.e., the absence of any presentation of unsupportive data or any disclosure about such data (control condition). We have four research questions:


RQ1: When considering a presentation of data about an unapproved use of an approved drug product, how does the existence of unsupportive data impact physician perceptions and intentions with regard to that unapproved use?

RQ2: How does the way in which the existence of unsupportive data is communicated, when the specific data are not presented, impact physicians’ perceptions and intentions with regard to an unapproved use of an approved drug product?

RQ3: How are physicians’ perceptions of and intentions toward an unapproved use of an approved drug product affected by the disclosure of specific unsupportive data versus disclosure statements about data that are not presented?

RQ4: Do other variables (e.g., demographics) have an impact on these effects?

These research questions will be examined in two medical conditions.


We plan to conduct one pretest with 180 voluntary adult participants and one main study with 1,600 voluntary adult participants. Participants in the main study will be 510 oncologists in the oncology medical condition and 1,090 primary care physicians (PCPs) in the insomnia medical condition. All participants will be physicians who engage in patient care at least 50% of the time and do not work for a pharmaceutical company, marketing firm, or the Department of Health and Human Services. The gender, race/ethnicity, and ages of the participating HCPs will be self-identified by participants. We will aim to include a mix of demographic segments to ensure a diversity of viewpoints and backgrounds. Power analyses were conducted to ensure adequate sample sizes to detect small to medium effects.
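The power analyses themselves are not reproduced in this document. As a rough, hypothetical sketch of the kind of calculation involved, the snippet below checks the power of a single pairwise comparison between two equal-sized, randomly assigned conditions using a two-proportion normal approximation; the outcome rates (50% vs. 62%) and the per-condition size (218, i.e., the insomnia arm's 1,090 participants divided evenly across five conditions) are illustrative assumptions, not figures from the study.

```python
from statistics import NormalDist

def power_two_proportions(p1: float, p2: float, n_per_group: int,
                          alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample test of proportions
    (normal approximation) for one pairwise comparison between two
    equal-sized, randomly assigned conditions."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value (1.96 at alpha = .05)
    # Standard error of the difference in proportions with equal group sizes
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group) ** 0.5
    return nd.cdf(abs(p2 - p1) / se - z_crit)

# Hypothetical numbers: 218 physicians per condition (1,090 / 5 conditions),
# comparing outcome rates of 50% vs. 62% between two conditions.
print(round(power_two_proportions(0.50, 0.62, 218), 2))  # ≈ 0.72
```

A larger per-condition sample (as in the insomnia arm relative to the oncology arm) yields greater power to detect the same difference, which is why the two medical conditions are sized differently.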


The studies will be conducted online. The pretest and main studies will have the same design and will follow the same procedure. The base stimulus in both the pretest and main studies will consist of a sample publication supporting an unapproved use of an approved drug product. Within each medical condition, participants will be randomly assigned to one of five test conditions (as set forth in Figure 1). Following exposure to the stimuli, they will be asked to complete a questionnaire that assesses comprehension, perceptions, prescribing intentions, and demographics. In the pretest, participants will also answer questions about the study design and questionnaire.


Figure 1: Study Design

Within each medical condition (Medical Condition 1: cancer; Medical Condition 2: insomnia), participants will be randomly assigned to one of five test conditions:

  1. Accompanied by representative publication with unsupportive data
  2. Accompanied by disclosure with a summary of the unsupportive data, including a citation for that data
  3. Accompanied by disclosure that unsupportive data exist, including a citation for that data but without a summary of the unsupportive data
  4. Accompanied by general disclosure that unsupportive data may exist, with no citation
  5. No disclosure or material about unsupportive data (control)

  2. Purpose and Use of the Information Collection

We will use the results of this research to better understand: (1) HCPs’ perceptions of an unapproved use of a prescription drug; (2) HCPs’ perceptions about an unapproved use of an approved prescription drug when they are aware of the existence of unsupportive information about it; (3) HCPs’ perceptions of disclosures referencing the existence of unsupportive information about that particular use; and (4) the utility and effectiveness of various approaches to communicating this information. The findings of this study will also help inform FDA’s understanding about when disclosures about unsupportive data might be useful and what types of information should be included.

  3. Use of Improved Information Technology and Burden Reduction

Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants in the pretests and main studies will self-administer the survey via the Internet, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping surveys to less than 20 minutes.

  4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information. The available literature yields little information on this topic.

  5. Impact on Small Businesses or Other Small Entities

There will be no impact on small businesses or other small entities. The collection of information involves individuals, not small businesses.

  6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.

  7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.

  8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In the Federal Register of July 6, 2020 (85 FR 40300), FDA published a 60-day notice requesting public comment on the proposed collection of information. FDA received two submissions that were PRA-related. Within these submissions FDA received multiple comments that the Agency has addressed below. For brevity, some public comments are paraphrased and therefore may not include the exact language used by the commenter. We assure commenters that the entirety of their comments was considered even if not fully captured by our paraphrasing in this document. The following acronyms are used here: HCP = healthcare provider; FDA and “The Agency” = Food and Drug Administration; OPDP = FDA’s Office of Prescription Drug Promotion.

(Comment 1) One comment asserted that FDA has not made the stimuli available for public comment and requested FDA publish a new 60-day notice after these comments have been addressed to give the public another opportunity to review and comment.


(Response 1) We have provided the purpose of the study, the design, and the population of interest, and have provided the questionnaire to individuals upon request. These materials have proven sufficient for public comment and for academic experts to peer review the study successfully. Our full stimuli are under development during the PRA process. We do not make draft stimuli public during this time because of concerns that doing so may contaminate our participant pool and compromise the research.


(Comment 2) One comment suggested that due to the task of reading the “scientific publication” stimuli and length of the questionnaire, FDA’s estimation of the time it will take to complete the study is too low, and thus the burden of the information collection is inaccurate.


(Response 2) The scientific “publications” in this study are each formatted as a one-page brief report. The text is presented in two columns and has the following headings: Introduction, Methods, Results, Discussion and Limitations. The survey contains primarily closed-ended questions with Likert-scales, and there are five open-ended questions. The expected time for the study is based on our prior experience conducting studies using similar protocols. We will also test the time during the pretest to ensure we stay within 20 minutes. If we determine the average time for completing the survey is greater than 20 minutes, we will revise the survey prior to fielding the main study.


(Comment 3) One comment asserted this proposed study overlaps with other OPDP research currently in progress and referenced several studies.


(Response 3) OPDP may conduct concurrent or overlapping studies on similar topics. While the studies referenced by the comment contribute to the evidence base for prescription drug promotion, prior studies had a different focus than the current study. Prior disclosure studies examined the effectiveness of disclosures in increasing understanding of efficacy claims (Disclosures in Professional and Consumer Prescription Drug Promotion) and the role of disclosures in mitigating potentially misleading presentations of preliminary or descriptive data about oncology drugs (Disclosures of Descriptive Presentations in Professional Oncology Prescription Drug Promotion). The third study mentioned by the comment (Physician Interpretation of Information About Prescription Drugs in Scientific Publications vs. Promotional Pieces) investigates how physician perception of professional prescription drug communications is influenced by variations in information context, methodologic rigor of the clinical study, and time pressure.


The current study uses an experimental design to compare various disclosure approaches for communicating unsupportive information about an unapproved new use. The findings of this study will help inform FDA’s understanding about when disclosures about unsupportive data might be useful and what types of information should be included.


(Comment 4) One comment expressed concern that the way in which the proposed research is described in the notice suggests that pharmaceutical firms disseminate supportive data but do not adequately disclose unsupportive data, and that this “implied bias” may taint the collection and interpretation of the data.


(Response 4) The sentences referred to in this comment appear in the Federal Register notices for the study to provide background and do not suggest that any firms are not following the recommendations in the two guidance documents referenced in that same background section. Rather, the background outlines the current FDA recommendations around disclosure of unsupportive data with these types of communications and the intent of the study to evaluate alternative approaches to the disclosure of unsupportive data. These background statements are not part of the materials that will be provided to study participants. Rather, study instructions tell participants only that they will be reviewing informational material about a prescription drug. No instructional materials provided to participants mention a pharmaceutical manufacturer. Therefore, we do not believe the collection and interpretation of study findings will be tainted or biased.


(Comment 5) One comment suggested deleting or amending all questions about HCPs’ prescribing decisions (Questions 4, 5, 10, 11, 14-23) because these decisions are likely to be influenced by many factors and are outside of FDA’s jurisdiction. This comment also asserted Question 10 is biased and worded to suggest that pharmaceutical firms disseminate supportive data but do not adequately disclose unsupportive data and suggests deleting or amending the question.


(Response 5) As explained earlier, the Public Health Service Act authorizes FDA to conduct research relating to health information, and the FD&C Act authorizes FDA to conduct research relating to drugs and other FDA regulated products in carrying out the provisions of the FD&C Act. The purpose of the current experimental study is to examine physicians’ perceptions and behavioral intentions about an unapproved new use of an approved prescription drug when made aware of other data that are not supportive of the unapproved use, and to evaluate the effectiveness of various disclosure approaches for communicating the unsupportive information. The study is within FDA’s authority, and it will help to inform OPDP’s work to help ensure that prescription drug information is truthful, balanced, and accurately communicated, so that HCPs and consumers can make informed decisions.


Questions 4 and 5 were intended to assess the impact of various disclosure manipulations on hypothetical prescribing decisions. Measuring behavioral intention is a common method of assessing knowledge and attitudes. There is substantial theoretical and empirical support for our approach and strong behavioral intention has been shown to be predictive across a wide range of behaviors, including prescribing (Refs. 4-6). Based on the results of cognitive interviews, we have revised the measurement of behavioral intention to the following: “If you were considering prescribing [DRUG] to a patient with [DISEASE], how important would the information in the [DISPLAY FILL] be in your decision-making?”


Questions 14-23 provide important information to address the research questions for this study, including sources of information for studies that do not support an off-label use as well as what aspects of the study would be most important to prescribers.


Questions 10 and 11 are intended to evaluate whether there is enough information for the participants to make a prescribing decision based on the information in the brief study report and disclosure condition, not to assess the adequacy of pharmaceutical firms’ disclosure of unsupportive data generally. Pharmaceutical firms are not referenced in any study materials, and these questions do not imply anything about their dissemination activities.


(Comment 6) One comment recommended that the stimuli used to represent publications that reach contrary or different conclusions regarding the unapproved use be held to the same standards as the publication about the unapproved use. The comment suggested that this should include being considered scientifically sound by experts with the scientific training and expertise to evaluate the safety or effectiveness of the drug or device.


(Response 6) Both the supportive and unsupportive data provided to study participants either in the form of publications or summary information were reviewed by FDA experts with the requisite scientific training and experience to ensure they are appropriate, realistic, and of similar quality.



(Comment 7) One comment recommended that the disclosure summary include specific information about the study design (i.e., study population and control group, key clinical endpoints (patient outcomes)), statistical significance (i.e., 95% confidence interval (CI), hazard ratio (HR), and p value), and other key data needed to determine the benefit-risk ratio, and that it include the product manufacturer and study sponsor.


(Response 7) The proposed experimental study design includes five conditions to examine disclosure approaches for communicating unsupportive information. One of the five conditions provides study details as recommended by the comment. The other conditions have varying levels of detail about the unsupportive information about the unapproved new use of the prescription drug. There is also a control condition. We have purposely omitted the product manufacturer and study sponsor, as we know from other research this may unduly influence physicians’ beliefs about the quality of the study (Ref. 3).


(Comment 8) One comment suggested the disclosure correlate with the unapproved use described in the brief study report.


(Response 8) We agree with this point. The disclosure and unsupportive data provided to participants are relevant to the unapproved use information participants initially review.


(Comment 9) One comment suggested including hyperlinks to a citation for the data and including a representative publication with unsupportive data. This comment also suggested keeping track of how many study participants utilize the hyperlink.


(Response 9) We developed the stimuli for this study using information from multiple scientific publications. Thus, the content does not represent one particular study, and we are unable to provide hyperlinks. The revised design suggested in the comment may be a good suggestion for future research.


Several comments suggested changes to the proposed questionnaire.


(Comment 10) One comment suggested the instructions and lack of a “don’t know” response option may lead to forced guessing, which may undermine the utility of the study.


(Response 10) We have deleted Question 10 and revised Question 11 to read, “What additional information, if any, did you need in order to consider prescribing [DRUG] for [DISEASE]?”, and deleted the instructions to “give us your best guess on answers you do not know.”


(Comment 11) One comment recommended FDA focus on HCPs’ understanding of the data rather than asking about HCPs’ preference for receiving information (Q19 and Q20).


(Response 11) In response to the comment, we have removed Questions 19 and 20 from the survey. Question 3 (now Q4) assesses physician understanding of the disclosure.


(Comment 12) One comment suggested deleting or revising Questions 6 and 9 because outside influences could skew the results.


(Response 12) We are examining the impact of the various levels of information disclosure on participants’ ratings of how informative they find the information and how likely they would be to search for additional information about the drug. Participants will be randomly assigned to a condition, and any individual differences or potential biases should be spread across experimental conditions. Thus, if we find differences between and among conditions, we can be reasonably certain that the study manipulations caused the differences. In consideration of this comment and feedback from peer reviewers, we have revised Question 6 (now Q7) to read, “If you were considering prescribing [DRUG] for [DISEASE], how useful would the information [DISPLAY FILL] be?”

(Comment 13) One comment suggested deleting or revising Question 8 because it is unclear what it means for information to be “credible” in this context, and assessing credibility is very subjective.


(Response 13) To clarify, this question reads, “How credible is the information presented [DISPLAY FILL]?” where [DISPLAY FILL] in Condition 1 is “on page 2,” in Conditions 2, 3, and 4 is the text of the disclosure condition to which they have been assigned, and in Condition 5 is “the material.” Thus, the information on which participants are being asked to give their opinion is specified. This question has been used in other studies without difficulty. Cognitive testing did not identify any difficulty with respondents’ understanding of “credible” in this context.


(Comment 14) One comment suggested amending questions that are worded “contradict or do not support” because physicians may view a lack of support (inconclusive findings) as different from contradictory findings.


(Response 14) We did not intend for “do not support” to mean that the findings are inconclusive, although we acknowledge that it could be interpreted in such a way. Our intention was to refer to any findings that do not support the off-label use, such as findings that the drug is not effective for the off-label use or had increased risks. We explored potential confusion by asking separate questions on the concepts of “contradict” and “inconclusive” in cognitive testing. Cognitive testing suggested that respondents generally considered “findings that contradict” and “findings that have inconclusive support” to be very similar concepts. While respondents agreed that the two were technically distinct, they tended to assess the two similarly in this context. To gather additional empirical data, we will retain these as separate items in the pretest.






(Comment 15) One comment suggested many of the questions use unbalanced answer scales and recommended the answer scales be balanced. For example, it may be hard for participants to distinguish between “A little” and “Somewhat” or “Very” and “Extremely.” Relatedly, the positive and negative options are not necessarily opposites (e.g., “Agree” or “Disagree”) or parallel in intensity (e.g., “Strongly Agree” or “Strongly Disagree”).


(Response 15) We are not using a bipolar scale measuring opposites. Bipolar scales are typically used when there are two opposing possibilities (e.g., “Strongly Agree” or “Strongly Disagree”). We chose a unipolar scale (e.g., “not at all important” to “extremely important”) because the questions are asking about the relative presence or absence of a quality. In the case of usefulness, for instance, it makes more sense for the scale to begin with the absence of usefulness (“not at all useful”) than the opposite of usefulness (“extremely useless”). By beginning with “Not at all,” the order of the scale balances out the unidimensional nature of the question (Ref. 7). In fact, a key advantage of a unipolar scale is that it does not depend upon defining opposites. The scale labels (i.e., “Not at all,” “A little,” “Somewhat,” “Very,” and “Extremely”) have been tested in multiple studies, and evidence shows that participants are able to distinguish between the response options (see, for example, Ref. 8).


(Comment 16) One comment expressed a lack of clarity on how Question 3 could yield interpretable responses and recommended replacing this open-ended question with closed-ended questions.


(Response 16) Open-ended items are often used when the intention is to understand respondents’ comprehension (Ref. 9). By asking respondents to rephrase the disclosure in their own words (as if explaining to a colleague), we can assess whether respondents understand the disclosure language as intended (Ref. 10). The responses to open-ended items are qualitative data and will be analyzed to assess what respondents feel to be key information (information included in their summary), what they feel is extraneous information (information not included in their summary), and any information that is confusing or unclear (information summarized incorrectly in the summary).


(Comment 17) One comment suggested adding the following questions to the questionnaire:


  1. How often do you research and study off-label uses of approved drugs in a given week? With possible answer choices being “never, rarely, occasionally, frequently”

  2. How often are drug products used off-label in your practice? With possible answer choices being “never, rarely, occasionally, frequently”

  3. Would you prescribe this drug for (unapproved use of an approved drug product)? With possible answer choices being: “yes, no, need more information.”




(Response 17) For the first suggested question, we currently assess frequency of prescribing a drug off label (Q14) and the sources used to learn about off-label uses (old Q15 and old Q16, now Q17, Q18, and Q19). We think this combination of questions adequately covers the concept of how often participants prescribe and look for information about off-label uses. Regarding the response choices, the timeframe of a week is very narrow and would be difficult to answer for those who prescribe off label infrequently (e.g., a few times a year). In response to this comment and an external peer review comment, we have revised the response options for Q14 to be more specific (once a week or more often, several times each month, several times each year, less than once a year, have never prescribed a drug for an off-label use).


For the second suggestion, we agree that the frequency of prescribing within the practice would be useful to capture and have added a question to measure this. No difficulties were identified with this question during cognitive testing.


For the third suggestion, we agree that this would be a useful measure. In response to this comment and peer review, we have revised the questionnaire to ask about prescribing likelihood for the specific off-label use.


External Reviewers


In addition to public comment, OPDP sent materials to, and received comments from, two external peer reviewers in 2020. These individuals are:


1. Aaron Kesselheim, M.D., Professor of Medicine, Harvard Medical School. Faculty Member, Division of Pharmacoepidemiology and Pharmacoeconomics, Brigham and Women's Hospital.


2. Sojung (Claire) Kim, Ph.D., Assistant Professor, CHARM Lab Director, PRSSA Faculty Advisor, Department of Communication, George Mason University. 


  9. Explanation of Any Payment or Gift to Respondents


For the pretest and main study, PCPs will be provided an honorarium of $46 and oncologists will be provided an honorarium of $62. These incentives are lower than average for this population. The incentive is an effective method of drawing attention to the study and gaining cooperation for completing it. It is not intended as payment for participants’ time but rather as a means of increasing response rates.

Following OMB’s “Guidance on Agency Survey and Statistical Information Collections,” we offer the following justification for our use of these incentives.

Data quality: Because providing a market-rate incentive should increase response rates, it should also significantly improve validity and reliability to an extent beyond that possible through other means. Historically, physicians are one of the most difficult populations to survey, partly because of the demands on their professional time. Consequently, incentives assume an even greater importance with this group. Several studies have discussed the challenges of conducting research with HCPs and have concluded that offering substantial incentives is necessary to attain high response rates (see Refs. 11-13).


Recruiting physicians to participate in research has been shown to be difficult for reasons related primarily to the time burden (Ref. 14). Physicians’ time is limited and, thus, quite valuable. Cash incentives, rather than nonmonetary gifts or lottery entries, can help improve response rates and survey completion rates (Refs. 15-18). A meta-analysis on methodologies for improving response rates in physician surveys examined 21 studies published between 1981 and 2006 that investigated the effect of monetary incentives on response rates in surveys of physicians. The authors found that the odds of responding to a survey with an incentive were 2.13 times greater than the odds of responding to a survey without incentives (Ref. 19). Martins and colleagues conducted a review of published oncology-focused studies to investigate methods for improving response rates; their meta-analysis also showed that monetary incentives were effective at increasing response rates (Ref. 20). Previous research also suggests that providing incentives may help reduce sampling bias by increasing response rates among individuals who are typically less likely to participate in research (such as PCPs or physician specialists; e.g., Refs. 21-22) and by ensuring participation from a cross section of physicians, which will improve data quality by improving validity and reliability.

Past experience: The Internet panel vendor for this study specializes in conducting research with physicians and has conducted hundreds of surveys during the past year. The vendor offers incentives to its panel members for completing surveys, with the amount of incentive determined by the length of the survey and physician specialty. Their experience indicates that the requested amount is reasonable for a 20-minute survey with physicians.



Reduced survey costs: Recruiting with market-rate incentives is cost-effective. Without them, lower participation rates would lengthen participant recruitment, slowing data collection, increasing its cost, and likely delaying the project timeline.

10. Assurance of Confidentiality Provided to Respondents

No personally identifiable information will be sent to FDA. Data from completed surveys will be compiled into an SPSS or Excel dataset by the panel vendor (SERMO) and sent to the contractor (Westat) for analysis, with no personally identifiable information (PII) included. All information that can identify individual respondents will be maintained by the panel vendor on a password-protected secure server, and only authorized individuals employed by the vendor will have access. Confidentiality of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) (5 U.S.C. 552(a) and (b)) and by part 20 of the agency’s regulations (21 CFR part 20). These methods will all be reviewed by FDA’s Institutional Review Board prior to collecting any information.


All participants will be assured that the information they provide will be used only for research purposes and that their responses will be kept secure to the extent allowable by law. In addition, the informed consent (Appendix A) explains to participants that their information will be kept confidential, that their answers to screener and survey questions will not be shared with anyone outside the research team, and that their names will not be linked to their responses. Participants will be assured that the information obtained from the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.


The Internet panel includes a privacy policy that is easily accessible from any page on the site. A link to the privacy policy will be included in all survey invitations. The panel complies with established industry guidelines and states that members’ personally identifiable information will never be rented, sold, or revealed to third parties except where required by law. These standards and codes of conduct comply with those set forth by the American Marketing Association, the Council of American Survey Research Organizations, and others. All SERMO employees and contractors are required to take yearly security awareness and ethics training based on these standards.

All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products). Upon final delivery of the data files to Westat and completion of the project, SERMO will, upon request, destroy all study records, including data files.


11. Justification for Sensitive Questions


This data collection will not include sensitive questions. The complete list of questions is available in Appendix B.


12. Estimates of Annualized Burden Hours and Costs


12a. Annualized Hour Burden Estimate

FDA estimates the burden of this collection of information as follows:

Table 1. Estimated Annual Reporting Burden¹

Activity | No. of respondents | No. of responses per respondent | Total annual responses | Average burden per response | Total hours
Pretest screener | 290 | 1 | 290 | 0.08 (5 min) | 23
Pretest completes | 180 | 1 | 180 | 0.33 (20 min) | 59
Main study screener | 2,526 | 1 | 2,526 | 0.08 (5 min) | 202
Main study completes, Medical Condition 1 (cancer) | 510 | 1 | 510 | 0.33 (20 min) | 168
Main study completes, Medical Condition 2 (insomnia) | 1,090 | 1 | 1,090 | 0.33 (20 min) | 360
Total | 1,600 | | | | 812

¹There are no capital costs or operating and maintenance costs associated with this collection of information.


These estimates are based on FDA’s and the contractor’s experience with previous consumer studies.
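For reference, the burden totals in Table 1 follow from simple arithmetic: respondents × per-response burden, rounded to whole hours. A minimal sketch of that calculation, using the rounded per-response burden figures from the table (0.08 h ≈ 5 min; 0.33 h ≈ 20 min); the activity names and counts are copied from Table 1 above:

```python
# Recompute the Table 1 burden totals (one response per respondent).
activities = [
    ("Pretest screener", 290, 0.08),
    ("Pretest completes", 180, 0.33),
    ("Main study screener", 2526, 0.08),
    ("Main study completes, Medical Condition 1 (cancer)", 510, 0.33),
    ("Main study completes, Medical Condition 2 (insomnia)", 1090, 0.33),
]

total_hours = 0
for activity, respondents, burden_per_response in activities:
    hours = round(respondents * burden_per_response)  # whole-hour rounding, as in Table 1
    total_hours += hours
    print(f"{activity}: {hours} hours")

print(f"Total: {total_hours} hours")  # matches the 812-hour total in Table 1
```

Note that the table’s two-decimal burden figures, not the exact minute fractions, reproduce the published row totals (e.g., 290 × 0.08 = 23.2 → 23 hours).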

13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs


There are no capital, start-up, operating or maintenance costs associated with this information collection.

14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the collection of data is $629,748 ($209,916 per year for 3 years). This includes the costs paid to the contractors to program the study, draw the sample, collect the data, and create a database of the results. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the data, and to draft a report ($45,000 over 3 years).


15. Explanation for Program Changes or Adjustments


This is a new data collection.

16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Section B for detailed information on the design, research questions, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
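To illustrate one of the techniques named above, the sketch below computes a one-way ANOVA by hand on invented scores for three hypothetical study conditions. The condition names and data are purely illustrative assumptions; the actual design, conditions, and measures are described in Section B.

```python
def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)                      # number of conditions
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of condition means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own condition mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented comprehension scores for three hypothetical conditions
control = [4, 5, 6, 5, 4]
disclosure_a = [6, 7, 6, 8, 7]
disclosure_b = [5, 6, 7, 6, 6]

f_stat = one_way_anova([control, disclosure_a, disclosure_b])
print(f"F(2, 12) = {f_stat:.2f}")
```

In practice an analysis like this would be run with standard statistical software rather than hand-rolled code; the sketch only makes the computation behind "analysis of variance" concrete.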


Table 2. Project Time Schedule

Task | Estimated Number of Weeks after OMB Approval
Pretest data collected | 13 weeks
Main study data collected | 42 weeks
Final results report completed | 54 weeks
Manuscript submitted for internal review | 71 weeks
Manuscript submitted for peer-review journal review | 91 weeks

17. Reason(s) Display of OMB Expiration Date is Inappropriate

No exemption is requested.

18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.



References

1. Distributing Scientific and Medical Publications on Unapproved New Uses — Recommended Practices - Revised Draft Guidance (2014). https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM387652.pdf


2. Responding to Unsolicited Requests for Off-Label Information About Prescription Drugs and Medical Devices Draft Guidance (2011). https://www.fda.gov/media/82660/download


3. Kesselheim, A. S., Robertson, C. T., Myers, J. A., Rose, S. L., Gillet, V., Ross, K. M., ... & Avorn, J. (2012). A randomized study of how physicians interpret research funding disclosures. New England Journal of Medicine, 367(12), 1119-1127.


4. Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds.), Action control: From cognition to behavior (pp. 11-39). Berlin, Heidelberg, New York: Springer-Verlag.


5. Murshid, M. A. & Mohaidin, Z. (2017). Models and theories of prescribing decisions: A review and suggested a new model. Pharmacy Practice, 15(2), 990.


6. Sable, M. R., Schwartz, L. R., Kelly, P. J., Lisbon, E., and Hall, M. A. (2006). Using the theory of reasoned action to explain physician intention to prescribe emergency contraception. Perspectives on Sexual and Reproductive Health, 38(1), 20-27. https://www.guttmacher.org/journals/psrh/2006/using-theory-reasoned-action-explain-physician-intention-prescribe


7. Schaeffer, N. C., & Presser, S. (2003). The science of asking questions. Annual Review of Sociology, 29.


8. Fox, J., Earp M., Kaplan, R. (2020, June). Item Scale Performance. 75th Annual American Association for Public Opinion Research Virtual Conference.


9. Ozuru, Y., Briner, S., Kurby, C. A., & McNamara, D. S. (2013). Comparing comprehension measured by multiple-choice and open-ended questions. Canadian Journal of Experimental Psychology, 67(3), 215-227.


10. Tanur, J. M. (Ed.). (1992). Questions about questions: Inquiries into the cognitive bases of surveys. Russell Sage Foundation.


11. Keating, N.L., Zaslavsky, A.M., Goldstein, J., West, D.W., and Ayanian, J.Z. (2008). Randomized trial of $20 versus $50 incentives to increase physician survey response rates. Medical Care, 46(8), 878-881.


12. Dykema, J., Stevenson, J., Day, B., Sellers, S.L., and Bonham, V.L. (2011). Effects of incentives and prenotification on response rates and costs in a national web survey of physicians. Evaluation & the Health Professions, 34(4), 434-447.


13. Ziegenfuss, J.Y., Burmeister, K., James, K.M., Haas, L., Tilburt, J.C., and Beebe, T.J. (2012). Getting physicians to open the survey: Little evidence that an envelope teaser increases response rates. BMC Medical Research Methodology, 12(41).


14. Asch, S., Connor, S.E., Hamilton, E.G., and Fox, S.A. (2000). Problems in recruiting community-based physicians for health services research. Journal of General Internal Medicine, 15(8), 591-599.


15. Epley, N. and Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17(4), 311-318.


16. Höhne, J.K. and Krebs, D. (2017). Scale direction effects in agree/disagree and item-specific questions: A comparison of question formats. International Journal of Social Research Methodology, 21(1), 91-103.


17. Krosnick, J.A. and Presser, S. (2010). Question and questionnaire design. In Handbook of Survey Research (pp. 263-314). Emerald Group Publishing Limited.


18. Saris, W.E., Revilla, M., Krosnick, J.A., and Shaeffer, E.M. (2010). Comparing questions with agree/disagree response options to questions with item-specific response options. Survey Research Methods, 4, 61-79.


19. VanGeest, J., Johnson, T., and Welch, V. (2007). Methodologies for improving response rates in surveys of physicians: A systematic review. Evaluation and the Health Professions, 30, 303-321.


20. Martins, Y., Lederman, R., Lowenstein, C., et al. (2012). Increasing response rates from physicians in oncology research: A structured literature review and data from a recent physician survey. British Journal of Cancer, 106(6), 1021-1026.


21. Converse, J.M. and Presser, S. (1986). Survey Questions: Handcrafting the Standardized Questionnaire (No. 63). SAGE Publications.


22. DeVellis, R.F. (2016). Scale Development: Theory and Applications (Vol. 26). SAGE Publications.




1When final, this guidance will represent the FDA’s current thinking on this topic.

2Changed from diabetes to insomnia based on cognitive testing.


