
Comparative Price Information in Direct-to-Consumer and Professional Prescription Drug Advertisements


0910-NEW

SUPPORTING STATEMENT


Terms of Clearance: None.


A. Justification

  1. Circumstances Making the Collection of Information Necessary

Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.


By their very nature, medical and health decisions are comparative (e.g., treat versus not treat). For consumers, these decisions may include choosing among prescription drug products, over-the-counter products, and herbal supplements, as well as between one prescription brand and another. Similarly, advertising is often comparative. In prescription drug advertising, sponsors are permitted to include truthful, non-misleading information about the price of their products in promotion. This may extend to price comparison information, wherein sponsors may include information about the price of a competing product in order to make advantageous claims. Currently, when price comparisons are made, the ad should also include contextual information explaining that the two drugs may not be comparable in terms of efficacy and safety and that the acquisition costs presented do not necessarily reflect the actual prices paid by consumers, pharmacies, or third-party payers. Even when this additional information about efficacy or safety is included, there is concern that it is not sufficient to correct the impression that the products are interchangeable and that price is the main factor to consider. The Office of Prescription Drug Promotion (OPDP) plans to investigate, through empirical research, the impact of price comparison information and additional contextual information on prescription drug product perceptions. This will be investigated in direct-to-consumer (DTC) and healthcare professional-directed advertising for prescription drugs.


  2. Purpose and Use of the Information Collection

The purpose of this study is to examine the impact of price comparison information and additional contextual information on prescription drug product perceptions in DTC and healthcare professional-directed advertising for prescription drugs. The long-term objective is to improve the communication of accurate and non-misleading information in DTC and professionally directed ads. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the risks and benefits of prescription drugs to consumers and healthcare professionals as clearly and usefully as possible.

  3. Use of Improved Information Technology and Burden Reduction

Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants will self-administer the Internet survey via a computer, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping surveys to less than 30 minutes in both the pretests and main study.

  4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information by locating relevant articles through keyword searches of four different databases, including PubMed and PsycINFO. We also identified relevant articles from the reference lists of articles found through keyword searches. We did not find duplicative experimental work on the impact of including cost comparison information and additional contextual information about product safety and efficacy in DTC prescription drug ads.

  5. Impact on Small Businesses or Other Small Entities

No small businesses will be involved in this data collection.

  6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.

  7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.

  8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In the Federal Register of May 7, 2014 (79 FR 26255), FDA published a 60-day notice requesting public comment on the proposed collection of information. Two submissions were received: one from Ms. Lenisse Lippert of Quality Matrix Solutions and one from the biopharmaceutical company AbbVie, which contained multiple comments. We summarize and respond to these comments below.


Comment from Lenisse Lippert, Quality Matrix Solutions. “I would like to participate in the industry feedback on a proposed study to better understand direct-to-consumer advertisements that compare drug pricing, and how that information affects a consumer’s perception of a drug’s overall safety and efficacy versus the comparator product.”


Response: We thank Ms. Lippert for her comment.


Comment 1 from AbbVie: To prevent fatigue, online market research surveys do not generally exceed 20 minutes. Given that FDA is trying to make the most of its survey opportunity by asking many questions, it would be wise to place the meatier pricing-related questions earlier in the survey, when respondents are still engaged.


Response: We take survey length very seriously. We are sensitive to issues regarding respondent fatigue and its impact on completion rates and thus have placed the items most likely to be influenced by respondent fatigue (open-ended questions) at the beginning of the survey. We have employed similar online surveys in several previous studies and have obtained high completion rates, typically 90% or higher. For example, in a recent study (Experimental Study: Examination of Corrective Direct-to-Consumer Television Advertising; OMB control number 0910-0737), we had a pool of 1,071 eligible respondents, and only 14 of those respondents failed to complete the survey. We anticipate that the completion rate for this study will be similar.


Comment 2 from AbbVie: In both surveys, respondents are asked many questions about product X that appear positively stated. Therefore, there is a risk of bias in asking the critical pricing and language questions after the respondent has already been exposed to many product X questions and supposed attributes. To avoid bias, the most critical questions should appear as early in the surveys as possible.


Response: Of greatest interest to FDA is whether the presence or absence of price comparison information and contextual information influences outcomes such as perceptions of comparative safety and efficacy, perceptions of the comparator product, and intentions to seek more information about the advertised product. Placing pricing-related questions near the beginning of the survey would likely bias participants to think about pricing information more than they would under natural conditions, which may influence their responses to the abovementioned critical dependent variables. Although the current question ordering may bias responses to pricing-related questions, we believe this outcome is less consequential than the reverse, as suggested in this comment. Consequently, we intend to retain the current order of questions in the survey.


Comment 3 from AbbVie: It is unclear if the drug examples (X and Y) are real world medicines that could be taken by the patient respondents. If so, do respondents need to be aware of each product? If they need not be aware, you will need to balance the samples for any differences between cells. In addition, the cells will also need to be balanced for current drug usage to prevent additional bias.


Response: We have constructed a fictional product for use in this study to control for effects that might result as a consequence of having taken the product in the past. The comparator is a real product. We will measure participants’ experience with medication for this condition, prior exposure to advertising for the comparator, and prior experience taking the comparator. Responses to these questions can be used as covariates in analysis.
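
To illustrate how such covariate adjustment might look in practice, the following is a minimal sketch, not the study's analysis plan; the variable names (perceived_efficacy, condition, prior_comparator_use, prior_ad_exposure) and file name are hypothetical placeholders.

```python
# Hypothetical sketch of covariate adjustment; not the study's actual code.
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent: an outcome measure, the randomly assigned ad
# condition, and the prior-experience measures described above.
df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Regression of the outcome on experimental condition, adjusting for the
# prior-experience covariates.
model = smf.ols(
    "perceived_efficacy ~ C(condition) + prior_comparator_use + prior_ad_exposure",
    data=df,
).fit()
print(model.summary())
```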


Comment 4 from AbbVie: The questions on the physician survey should use higher-level language than those for the general population. We note that the questions in the patient questionnaire seem to vary in the reading level required to comprehend them. We recommend that FDA review the questions for consistency so as not to introduce a reading bias.


Response: We appreciate this comment. We have conducted cognitive interviews (OMB control number 0910-0695) to refine and improve the survey questions. We will also be conducting two rounds of pretesting, which will provide an additional opportunity to identify and remove questions that do not function as intended, further refining the questionnaire prior to the main study. These activities include consideration of language level and whether it is appropriate for the participants being surveyed.


Comment 5 from AbbVie: We recommend this ad explicitly present contextual information that the two drugs may not be comparable in terms of efficacy and safety (i.e., the products are not interchangeable) notwithstanding price comparisons. This would permit FDA to assess whether it has provided enough contextual information so that the audience understands that the products are not interchangeable. Consequently, there would be a response choice in the questionnaire that allows a respondent to acknowledge the products are not interchangeable. AbbVie suggests that an option be added that reads, “The brochure left the impression that Drug X’s efficacy (and safety) should not be compared to Drug Y’s; the products are not interchangeable.”


Response: The context language is based on feedback from the cognitive interviews. We appreciate the comment and have added a question to assess participants’ attitudes about the context with regard to interchangeability of the products being compared.


Comment 6 from AbbVie: It is not clear what type of cost information is being presented in these ads. We suggest that the advertisement should make clear what costs are being presented, for what doses, and over what time frames so that readers are comparing ‘apples to apples’ when viewing the ads. If study budget allows, it would be ideal to test a variety of cost information.


Response: The price comparison is for the same indication on a yearly basis. We agree that it would be informative to expand the study to test a variety of cost information but do not have the resources to do so.


External Reviewers


In addition to public comment, OPDP solicited peer-review comments on potential measures and study methodology from a panel of experts. These individuals are:


  1. A. Mark Fendrick, MD, Professor of Internal Medicine, University of Michigan.

  2. Dominick Frosch, PhD, Adjunct Associate Professor of Medicine, UCLA.

  3. Jeffrey T. Kullgren, MD, Research Scientist, VA Center for Clinical Management Research, VA Ann Arbor Healthcare System and Assistant Professor of Internal Medicine, University of Michigan Medical School.

  4. Joe K. Gerald, MD, PhD, Public Health Policy and Management, Mel and Enid Zuckerman College of Public Health, University of Arizona.

  9. Explanation of Any Payment or Gift to Respondents

The e-Rewards Consumer and Healthcare panels use different incentive models, tailored and appropriate for the respective audiences. e-Rewards Consumer Panel participants are enrolled in a points program that is analogous to a ‘frequent flyer’ card: respondents are credited with bonus points in proportion to their regular participation in surveys. The incentive options allow panelists to redeem points for a large range of gift cards, points programs, and partner products or services. Traditionally, panelists earn bonus points for surveys that are longer or require special tasks. The use of these virtual incentives helps maintain a high degree of panel loyalty, increase response rates, and prevent attrition from the panel. When a panelist’s point balance is equivalent to $10, the panelist may elect to redeem the points for vouchers from a variety of national retailers. Consumers who complete the 30-minute survey will receive an estimated $7.50 in e-Rewards currency.


Physicians recruited through the Healthcare Panel are paid a cash incentive (a check is mailed to the respondent). As with the consumer panel, Research Now uses an incentive scale that is based on set time increments and the panelist profile. Endocrinologists who complete the 30-minute survey will receive an honorarium of $45, and primary care physicians (PCPs) who complete the 30-minute survey will receive an honorarium of $35. Research Now’s research has demonstrated that cash incentives, rather than virtual incentives, are most attractive to “time-poor/money-rich” physicians and help to improve survey completion rates.

  10. Assurance of Confidentiality Provided to Respondents

All participants will be provided with an assurance of privacy to the extent allowable by law. See Appendix A for the consent form.


No personally identifiable information will be sent to FDA. All information that can identify individual participants will be maintained by the independent contractor in a form that is separate from the data provided to FDA. For all data, alphanumeric codes will be used instead of names as identifiers. These identification codes (rather than names) will be used on any documents or files that contain study data or participant responses.


The information will be kept in a secure fashion that will not permit unauthorized access. Throughout the project, any hard-copy files will be stored in a locked file cabinet in the Project Manager’s office, and electronic files will be stored on the contractor’s password-protected server, which allows only project team members access to the files. The privacy of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)) and by part 20 of the agency’s regulations (21 CFR part 20). These methods have been approved by FDA’s Institutional Review Board (the Research Involving Human Subjects Committee, RIHSC) and are currently under review by RTI’s Institutional Review Board; we will wait for that approval prior to collecting any information.


All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).

  11. Justification for Sensitive Questions

This data collection will not include sensitive questions. The complete list of questions is available in Appendix B.

  12. Estimates of Annualized Burden Hours and Costs

For both the pretests and main study, the questionnaire is expected to last no more than 30 minutes. This will be a one-time (rather than annual) collection of information. FDA estimates the burden of this collection of information as follows:


Table 1. Estimated Annual Reporting Burden¹

Activity | Number of Respondents | Number of Responses per Respondent | Total Annual Respondents | Average Burden per Response | Total Hours
Sample outgo (pretests and main survey) | 41,110 | -- | -- | -- | --
Screener completes | 7,400 | 1 | 7,400 | 0.03 (2 minutes) | 222
Eligible | 4,933 | -- | -- | -- | --
Completes, Pretests Phase 1 | 400 | 1 | 400 | 0.5 (30 minutes) | 200
Completes, Pretest Phase 2 | 1,000 | 1 | 1,000 | 0.5 (30 minutes) | 500
Completes, Main Study | 2,940 | 1 | 2,940 | 0.5 (30 minutes) | 1,470
Total | -- | -- | -- | -- | 2,392

¹ There are no capital costs or operating and maintenance costs associated with this collection of information.



These estimates are based on FDA’s and the contractor’s experience with previous consumer studies.
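
As a minimal arithmetic check of Table 1 (a sketch only, using the rounded hour values shown in the table), the Total Hours column can be reproduced as follows:

```python
# Recompute Table 1 totals: hours = respondents x responses per respondent
# x average burden per response (in hours), using the table's rounded values.
rows = [
    ("Screener completes",          7400, 1, 0.03),  # 2 minutes, rounded to 0.03 hours
    ("Completes, Pretests Phase 1",  400, 1, 0.5),   # 30 minutes
    ("Completes, Pretest Phase 2",  1000, 1, 0.5),
    ("Completes, Main Study",       2940, 1, 0.5),
]
total = 0
for activity, n, per_respondent, burden_hours in rows:
    hours = n * per_respondent * burden_hours
    total += hours
    print(f"{activity}: {hours:,.0f} hours")
print(f"Total: {total:,.0f} hours")  # 2,392 hours, matching Table 1
```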


  13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs

There are no capital, start-up, operating or maintenance costs associated with this information collection.

  14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the collection of data is $876,381 ($292,127 per year for three years). This includes the costs paid to the contractors to create the stimuli, program the study, draw the sample, collect the data, and create and analyze a database of the results. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the resultant data, and to draft a manuscript ($85,800; 10 hours per week for three years).

  15. Explanation for Program Changes or Adjustments

This is a new data collection.

  16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Part B of the Supporting Statement for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
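
As an illustration of the kind of analysis described above, the following is a minimal sketch of a between-subjects analysis of variance; it is not the study's actual analysis code, and the factor and outcome names (price_comparison, context_info, perceived_interchangeability) and file name are hypothetical placeholders (see Part B for the actual design and hypotheses).

```python
# Hypothetical sketch of a two-factor between-subjects ANOVA; not the
# study's actual analysis code.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("main_study_data.csv")  # hypothetical file name

# Crossed factors: presence/absence of price comparison information and
# presence/absence of additional contextual information.
fit = smf.ols(
    "perceived_interchangeability ~ C(price_comparison) * C(context_info)",
    data=df,
).fit()
print(anova_lm(fit, typ=2))
```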



Table 2. Project Time Schedule

Task | Estimated Number of Weeks after OMB Approval
Pretest data collected | 6 weeks
Pretest data completed | 14 weeks
Main study data collected | 26 weeks
Final methods report completed | 38 weeks
Final results report completed | 48 weeks
Manuscript submitted for internal review | 56 weeks
Manuscript submitted for peer-review journal publication | 64 weeks


  17. Reason(s) Display of OMB Expiration Date is Inappropriate

No exemption is requested.

  18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.


