Risk and Benefit Perception Scale Development
0910-NEW
SUPPORTING STATEMENT
A. Justification
1. Circumstances Making the Collection of Information Necessary
Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.
FDA requires that prescription drug advertisements be balanced in their presentation of risk and benefit information. Patients receive information about drugs not only from their doctors and pharmacies, through patient labeling and FDA-mandated Medication Guides, but also online, on social networks, and via direct-to-consumer (DTC) television and print advertising. Moreover, research suggests that consumers struggle with the concepts of risk and efficacy1 and often overestimate drug efficacy.2 As a result, it is important for FDA to understand and accurately measure how consumers make sense of this information and how it affects decisions related to prescription drugs.
FDA’s Office of Prescription Drug Promotion (OPDP) has an active research program that investigates how direct-to-consumer advertising influences consumer knowledge, perceptions, and behavior. As OPDP’s research program has matured, the way in which we measure risk and benefit perception has evolved over time. This has resulted in perception measures that, while internally valid, tend to vary by study. Consequently, FDA needs a pool of reliable and valid measurement items for assessing consumers’ drug risk and benefit perceptions—as well as other elements of prescription drug decision making—consistently across studies. The purpose of this project is to create that measurement pool, thus increasing the rigor and efficiency of FDA’s research.
2. Purpose and Use of the Information Collection
The purpose of this project is to develop and validate risk and benefit perception scales and to explore various methods for measuring perceptions, attitudes, and intentions that can be used in OPDP research moving forward. The long-term objective is to improve the validity and reliability of risk and benefit perception measures to help ensure effective communication of product information in DTC ads. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; it is therefore important to communicate the risks and benefits of prescription drugs to consumers as clearly and usefully as possible.
3. Use of Improved Information Technology and Burden Reduction
Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants will self-administer the Internet survey via a computer, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping surveys to less than 30 minutes in both the pretests and main study.
4. Efforts to Identify Duplication and Use of Similar Information
We conducted a literature search to identify existing measures for use in this research. We did not find any scales that were already specifically validated and reliable for use in measuring perceptions based on prescription drug ads. However, the results of the literature review identified a number of useful scales, subscales, and individual items that were included in the initial pool of measures. Measures used in previous FDA studies were also included in that pool.
5. Impact on Small Businesses or Other Small Entities
No small businesses will be involved in this data collection.
6. Consequences of Collecting the Information Less Frequently
The proposed data collection is one-time only. There are no plans for successive data collections.
7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for this collection of information.
8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
In accordance with 5 CFR 1320.8(d), FDA published a 60-day notice for public comment in the FEDERAL REGISTER of April 21, 2014 (79 FR 22143). One comment was received, from Eli Lilly, Inc. We respond to the points raised in Lilly’s comment below.
Comment: “Lilly seeks further clarity to better understand how FDA intends to apply the risk and benefit measurement items being developed through this study. FDA suggests in the Federal Register notice that the measurement items would be only used to enhance future FDA research initiatives; however, the precise nature and purpose of such planned research is unclear. Lilly suggests that any intended use of the measurement items to evaluate the effectiveness of drug advertising disseminated by industry would be inappropriate and beyond the jurisdiction and authorities granted to FDA.”
Response: Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act. We believe that these statutes provide broad authority for FDA to conduct research related to prescription drug promotion as described in the information collection request. As explained in the information collection request, the nature and purpose of this research is “to understand and accurately measure how consumers are making sense of this information and how it impacts decisions related to prescription drugs.” We believe that this research is crucial in ensuring that consumers receive prescription drug information that is truthful and non-misleading, and that prescription drugs are not misbranded. FDA expects that any other purpose of this research will become clear only upon its completion, and FDA intends to make the research results and the final scale publicly available.
Comment: “Although FDA intends to narrow the pool of survey questions during the pretesting stage of the research, we have concerns that the current questionnaire is extremely cumbersome and would likely exceed 20 minutes to complete. Further, based on the currently designed instrument, it is questionable whether in fact FDA would have success in respondents’ fully completing the survey.”
Response: Since the submission of the 60-day notice, the cognitive interviews have been completed (OMB control number 0910-0695). We did not reduce the number of items as much as expected based on those interviews. We are therefore recommending a questionnaire of up to 30 minutes in length, and burden estimates have been calculated accordingly. Even so, no respondent will ever answer the full list of questions provided in the 60-day notice; rather, that full list is the pool of items from which each questionnaire will be drawn. We will test subsets of these candidate items using a form A/form B approach so that no respondent answers more than a 30-minute survey (see the illustrative sketch following this response). In addition, some items may be tested in only one pretest or in only one wave of a survey. No respondent will ever see all of these questions.
We take survey length very seriously. We will conduct two rounds of pretesting to refine the questionnaire and reduce the number of items, resulting in questionnaires of 30 minutes or less for the pretests and the main study.
We are sensitive to issues regarding respondent fatigue and its impact on completion rates. We have employed similar online surveys in several previous studies and obtained high completion rates, typically 90% or higher. For example, in a recent study (Experimental Study: Examination of Corrective Direct-to-Consumer Television Advertising; OMB control number 0910-0737), we had a pool of 1,071 eligible respondents, and only 14 of those respondents failed to complete the survey (a completion rate of approximately 98.7%). We anticipate that the completion rate for this study will be similar.
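The following is a minimal sketch, for illustration only, of the form A/form B approach described above; it is not part of the approved instrument, and the item labels, per-item timing, and 30-minute cap used here are illustrative assumptions.

```python
import random

# Hypothetical pool of candidate perception items (labels are placeholders).
item_pool = [f"Q{i}" for i in range(1, 83)]

# Illustrative assumption: roughly 20 seconds per item, so a 30-minute
# survey can hold at most 90 items.
SECONDS_PER_ITEM = 20
MAX_ITEMS = (30 * 60) // SECONDS_PER_ITEM

# Split the pool into two non-overlapping subsets (form A and form B),
# each small enough to fit within the 30-minute cap.
random.seed(2015)
shuffled = random.sample(item_pool, k=len(item_pool))
half = len(shuffled) // 2
form_a, form_b = shuffled[:half], shuffled[half:]
assert len(form_a) <= MAX_ITEMS and len(form_b) <= MAX_ITEMS

def assign_form() -> list:
    """Randomly assign an incoming respondent to form A or form B."""
    return form_a if random.random() < 0.5 else form_b

# Example: the first five items shown to one randomly assigned respondent.
print(assign_form()[:5])
```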
Comment: “In general, specific questions proposed in the draft questionnaire may be unanswerable by the respondent if not addressed specifically in the test stimulus. For example, Q23 “How long will Drug X/Drug Y’s negative side effects last once they begin?” If the duration of a drug’s side effects is not communicated in the stimulus, data captured would be purely speculative on the part of the consumer, especially without inclusion of a “don’t know or no opinion” option for the respondent.”
Response: Respondents will be exposed to information about the drug’s indication and side effects in the ad and will then be asked to provide their perceptions of the drug’s effectiveness and risk profile. The questions are not intended to measure factual knowledge about the fictitious drug. By definition, a perception is a subjective assessment and thus does not need to be tied directly to a verbatim statement in the advertisement. Whether participants form perceptions about other attributes of the drug, such as how long side effects last, is an empirical question and part of the purpose of this study. Refinements to the questions, such as adding a “don’t know” option, will be further addressed through pretesting.
Comment: “In addition to the redundant and overlapping questions, several proposed questions appear to be unanswerable. The drafted questionnaire creates a high burden in complexity and time for the consumer and may cause significant respondent fatigue that could result in unreliable or incomplete data collection. Given these significant design issues related to the draft study questionnaire, Lilly suggests that FDA provide further details on how the questions in the draft questionnaire will be narrowed from the pretest stage to the iterative stage of the research and further evaluate the burden and likelihood to complete for the iterative testing stage.”
Response: The pool of questions will be narrowed and refined through two methods. The first, which has already been completed, involved cognitive testing of draft measures (for a full discussion of the cognitive interviews, see Section B.4). The goal of the cognitive interviews was to refine and narrow the measurement pool that will subsequently be pretested and then tested in an experimental study. The second method will involve iterative testing and analysis of draft measures to establish scale reliability and internal validity using survey methods (for a full discussion of the pretesting and experimental study, see Section B.2).
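As an illustration of the scale reliability analysis involved in the second method, the following is a minimal sketch of computing Cronbach’s alpha for a set of candidate items; the item names and response data are hypothetical, and the code is a generic implementation of the standard formula rather than FDA’s analysis plan.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 200 participants to four benefit-perception items (1-5 scale).
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
responses = pd.DataFrame({
    f"benefit_item_{i}": np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=200)), 1, 5)
    for i in range(1, 5)
})

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```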
Comment: “Additionally, it is not clear why some batteries of questions, such as those questions under the validity testing section (Q63-Q77) are included. These questions do not seem aligned with the research objective.”
Response: These items are included for the purpose of testing the convergent validity of the other items in our item pool (measures of risk and benefit perceptions). The items in Q63-Q77 come from the previously validated Beliefs about Medicines Questionnaire (BMQ) (Horne, Weinman, & Hankins, 1999).3 For example, if the benefit perception items perform as intended, they should be highly correlated with positive beliefs about medicines as measured by the BMQ scale.
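By way of illustration, the following minimal sketch shows the kind of convergent validity check described above: correlating a candidate benefit-perception score with a BMQ-based score. The variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 300

# Hypothetical scale scores: each respondent's mean across benefit-perception
# items and mean across BMQ positive-beliefs items (both on a 1-5 scale).
benefit_perception = rng.normal(loc=3.5, scale=0.6, size=n)
bmq_positive_beliefs = 0.6 * benefit_perception + rng.normal(loc=1.4, scale=0.5, size=n)

scores = pd.DataFrame({
    "benefit_perception": benefit_perception,
    "bmq_positive_beliefs": bmq_positive_beliefs,
})

# Convergent validity: the two scores should be strongly and positively correlated.
r = scores["benefit_perception"].corr(scores["bmq_positive_beliefs"])
print(f"Pearson r = {r:.2f}")
```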
Comment: “Finally, questions 78-82 seem better placed in a battery of questions for the screening or consumer selection phase.”
Response: We believe that the constructs captured by questions 78-82 may moderate the relationship between ad content and respondents’ risk and benefit perceptions. We include them on the survey to keep the screener as short as possible, which reduces the burden on individuals who ultimately do not qualify for the study. They will not be used for screening as we do not plan to include or exclude any individuals based on their responses to these questions.
Comment: “Lilly suggests that the survey design be improved to better align with the research objectives, to avoid bias and to mitigate extreme respondent fatigue. Lilly recommends that FDA modify the data collection instrument to address the points noted above and seek additional public comment on the revised design.”
Response: Given our responses and points of clarification above, we believe that the current design is rigorous and meets FDA’s research objectives. The design allows us to test and validate measurement items for consumers’ risk and benefit perceptions. By randomizing respondents to the various ads with different benefit and risk information, we have controlled for underlying differences in respondent demographics and thereby reduced the potential for selection bias (Kunz, Vist, & Oxman, 2008)4 and enhanced study validity. As described above, we have also designed the study to minimize respondent fatigue by testing only the most promising candidate items and by ensuring a survey length of no more than 30 minutes.
External Reviewers
In addition to public comment, OPDP solicited peer-review comments on potential measures and study methodology from a panel of experts. These individuals are:
Brian Zikmund-Fisher, University of Michigan, [email protected];
Bob DeVellis, University of North Carolina – Chapel Hill, [email protected];
Geoff Norman, McMaster University, [email protected];
Vincent-Wayne Mitchell, City University London, [email protected];
Richard Netemeyer, University of Virginia, [email protected]
9. Explanation of Any Payment or Gift to Respondents
GfK typically provides two different types of respondent incentives: (1) periodic incentives based on the number of surveys completed and (2) survey-specific incentives for surveys that are particularly long or burdensome.
Periodic incentives are used to maintain a high degree of panel loyalty and to prevent attrition from the panel. For households without existing Internet access, GfK provides computer hardware and Internet service as an incentive. For households with Internet service, GfK enrolls panelists in a points program that is analogous to a “frequent flyer” card, and panelists are credited with points based on the number of surveys completed. Panelists receive cash-equivalent checks for these points every four to six months, typically amounting to $2 to $6 per month.
Survey-specific incentives are provided when the survey length exceeds 20 minutes or when an unusual request is made of the respondent, such as providing a specimen, viewing a specific television program, or completing a daily diary. Survey-specific incentives are used to reduce non-response bias.
Because this study’s survey length is likely to exceed 20 minutes (and be closer to 30 minutes), we will provide a survey-specific incentive. The incentive for this study will be 10,000 points (the equivalent of $10). Note: this survey-specific incentive is based on GfK policy rather than on federal guidelines for incentives.
10. Assurance of Confidentiality Provided to Respondents
All participants will be provided with an assurance of privacy to the extent allowable by law. See Appendix A for the consent form.
No personally identifiable information will be sent to FDA. All information that can identify individual participants will be maintained by the independent contractor in a form that is separate from the data provided to FDA. For all data, alphanumeric codes will be used instead of names as identifiers. These identification codes (rather than names) will be used on any documents or files that contain study data or participant responses.
The information will be kept in a secured fashion that does not permit unauthorized access. Throughout the project, any hard-copy files will be stored in a locked file cabinet in the Project Manager’s office, and electronic files will be stored on the contractor’s password-protected server, which allows only project team members access to the files. The privacy of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)) and by part 20 of the agency’s regulations (21 CFR part 20). These methods have been approved by FDA’s Institutional Review Board (the Research Involving Human Subjects Committee, RIHSC) and are currently under review by RTI’s Institutional Review Board; we will not collect any information until that approval is received.
All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, Chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).
11. Justification for Sensitive Questions
This data collection will not include sensitive questions. The complete list of questions is available in Appendix B.
12. Estimates of Annualized Burden Hours and Costs
12a. Annualized Hour Burden Estimate
For both the pretests and main study, the questionnaire is expected to last no more than 30 minutes. This will be a one-time (rather than annual) collection of information. FDA estimates the burden of this collection of information as follows:
Table 1.--Estimated Annual Reporting Burden

| Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response1 | Total Hours |
| Pretest screener | 2,000 | 1 | 2,000 | 0.03 (2 minutes) | 60 |
| Main study screener | 20,000 | 1 | 20,000 | 0.03 (2 minutes) | 600 |
| Pretest | 1,100 | 1 | 1,100 | 0.5 (30 minutes) | 550 |
| Main study | 10,200 | 1 | 10,200 | 0.5 (30 minutes) | 5,100 |
| Total | 33,300 | 1 | 33,300 | -- | 6,310 |
1 With online surveys, several participants may be completing the survey at the time that the total target sample is reached. Those participants are allowed to complete the survey, which can result in the number of completes going slightly over the target number. Thus, if our target is 1,000, we have rounded up by an additional 100 to allow for some overage.
These estimates are based on FDA’s and the contractor’s experience with previous consumer studies.
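As a cross-check of the arithmetic behind Table 1, the brief sketch below recomputes the total annual responses and total hours from the per-row figures; the numbers are taken directly from the table.

```python
# Rows from Table 1: (activity, respondents, responses per respondent, hours per response)
rows = [
    ("Pretest screener",     2_000, 1, 0.03),
    ("Main study screener", 20_000, 1, 0.03),
    ("Pretest",              1_100, 1, 0.50),
    ("Main study",          10_200, 1, 0.50),
]

total_responses = sum(n * per for _, n, per, _ in rows)
total_hours = sum(n * per * hours for _, n, per, hours in rows)

print(total_responses)  # 33,300 total annual responses
print(total_hours)      # 60 + 600 + 550 + 5,100 = 6,310 total hours
```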
13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs
There are no capital, start-up, operating or maintenance costs associated with this information collection.
14. Annualized Cost to the Federal Government
The total estimated cost to the Federal Government for the collection of data is $1,669,260 ($556,420 per year for three years). This includes the costs paid to the contractors to create the stimuli, program the study, draw the sample, collect the data, and create and analyze a database of the results. The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the resultant data, and to draft a report ($85,800; 10 hours per week for three years).
15. Explanation for Program Changes or Adjustments
This is a new data collection.
16. Plans for Tabulation and Publication and Project Time Schedule
Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See section B for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
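As a minimal, hypothetical sketch of the conventional techniques mentioned above (not FDA’s actual analysis code), the example below runs descriptive statistics and a one-way analysis of variance on simulated risk-perception scores across two assumed ad conditions, using the statsmodels library.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_per_condition = 150
conditions = ["low_risk_info", "high_risk_info"]  # hypothetical ad conditions

# Simulated risk-perception scale scores by randomized ad condition.
df = pd.DataFrame({
    "ad_condition": np.repeat(conditions, n_per_condition),
    "risk_perception": np.concatenate([
        rng.normal(loc=3.0, scale=0.8, size=n_per_condition),
        rng.normal(loc=3.6, scale=0.8, size=n_per_condition),
    ]),
})

# Descriptive statistics by condition.
print(df.groupby("ad_condition")["risk_perception"].describe())

# Regression with condition as a categorical predictor, then a Type II ANOVA table.
model = smf.ols("risk_perception ~ C(ad_condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```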
Table 2. – Project Time Schedule

| Task | Estimated Number of Weeks after OMB Approval |
| Pretest data collected | 6 weeks |
| Pretest data completed | 14 weeks |
| Main study data collected | 26 weeks |
| Final methods report completed | 38 weeks |
| Final results report completed | 48 weeks |
| Manuscript submitted for internal review | 56 weeks |
| Manuscript submitted for peer-review journal publication | 64 weeks |
17. Reason(s) Display of OMB Expiration Date is Inappropriate
No exemption is requested.
18. Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions to the certification.
1 Lipkus, I. M. (2007). Numeric, verbal, and visual formats of conveying health risks: suggested best practices and future recommendations. Medical Decision Making, 27(5), 696-713.
2 Aikin, K. J., Swasy, J. L., & Braman, A. C. (2004). Patient and physician attitudes and behaviors associated with DTC promotion of prescription drugs: Summary of FDA survey research results. Food and Drug Administration, Center for Drug Evaluation and Research.
3 Horne, R., Weinman, J., & Hankins, M. (1999). The Beliefs about Medicines Questionnaire: The development and evaluation of a new method for assessing the cognitive representation of medication. Psychology and Health, 14, 1-24.
4 Kunz, R., Vist, G. E., & Oxman, A. D. (2008). Randomization to protect against selection bias in healthcare trials. The Cochrane Library, Issue 2.