Eye Tracking Study of Direct-to-Consumer Prescription Drug Advertisement Viewing
0910-NEW
SUPPORTING STATEMENT
Terms of Clearance:
None.
A. Justification
Circumstances Making the Collection of Information Necessary
Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.
Current regulations require that a product’s major risks be included in at least the audio of direct-to-consumer (DTC) prescription drug television ads; this disclosure of major risks is sometimes referred to as the major statement. FDA has proposed including such risk information in superimposed text as well as in the audio (75 FR 15376, “Direct-to-Consumer Prescription Drug Advertisements; Presentation of the Major Statement in Television and Radio Advertisements in a Clear, Conspicuous, and Neutral Manner”). In addition, Title IX of the Food and Drug Administration Amendments Act (FDAAA, Public Law 110-85) required a study to determine whether the statement “You are encouraged to report negative side effects of prescription drugs to the FDA. Visit www.fda.gov/medwatch, or call 1-800-FDA-1088” (the MedWatch statement) is appropriate for inclusion in DTC television ads. These communications have been tested separately by FDA. The first study found that participants were better able to recall the drug risks when the risks were presented in superimposed text as well as in audio (OMB Control Number 0910-0634; “Experimental Evaluation of the Impact of Distraction”). The second study found that the inclusion of the MedWatch statement does not interfere with participants’ understanding of the risk information (OMB Control Number 0910-0652; “Experimental Study: Toll-Free Number for Consumer Reporting of Drug Product Side Effects in Direct-to-Consumer Television Advertisements for Prescription Drugs”). However, these two new communications have not been examined together.
In addition, questions continue to arise about the use of potentially distracting images and sounds during the major statement of risks in DTC television ads. The first study referenced above found no differences among ads that differed in the affective tone of static (non-moving) visuals presented during the major statement of risks. Previous research has shown that factors such as multiple scene changes (e.g., Hoyer, Srivastava, & Jacoby, 1984) and music (e.g., Cassidy & MacDonald, 2007) in advertising can be distracting. However, the effects of such distraction during the major statement of risks on consumers’ perceptions and risk recall have not been tested in the presence of risk-reinforcing superimposed text.
Purpose and Use of the Information Collection
This project is designed to use eye tracking technology to determine (1) how superimposed risk information and the MedWatch statement are perceived in DTC ads and (2) the effect of distraction. Eye tracking technology is an effective method for determining the extent to which consumers attend to risk information presented in DTC television ads. This technology allows researchers to unobtrusively detect and measure where a participant looks while viewing a television ad and for how long; the pattern of eye movements may indicate attention to and processing of information in the ad.
We plan to collect descriptive eye tracking data on participants’ attention to (1) the superimposed text during the major statement of risk information and (2) the MedWatch statement. Further, we plan to examine experimentally the effect of distraction. We hypothesize that distracting audio and visuals during the major statement will decrease risk recall, risk perceptions, and attention to superimposed text risk information.
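To illustrate the kind of measure an eye tracker yields, the sketch below (in Python) computes total dwell time within a rectangular area of interest (AOI), such as the region containing the superimposed risk text. This is a minimal illustration only; the gaze-sample format, coordinates, and function name are hypothetical and do not represent the study’s actual software.

    # Minimal sketch: total dwell time within a rectangular area of
    # interest (AOI), e.g., the superimposed risk text. Hypothetical
    # gaze-sample format: (timestamp_ms, x, y), one sample per frame.

    def dwell_time_ms(samples, aoi, sample_interval_ms=16.7):
        """Sum the time the gaze falls inside a rectangular AOI.
        aoi is (left, top, right, bottom) in screen pixels;
        16.7 ms per sample assumes a 60 Hz eye tracker."""
        left, top, right, bottom = aoi
        inside = sum(1 for _, x, y in samples
                     if left <= x <= right and top <= y <= bottom)
        return inside * sample_interval_ms

    # Example: an AOI along the bottom of a 1,920 x 1,080 display.
    samples = [(0, 960, 1000), (17, 955, 1010), (33, 400, 300)]
    text_aoi = (200, 950, 1720, 1060)
    print(dwell_time_ms(samples, text_aoi))  # 33.4 ms (two samples inside)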
The purpose of this project is to gather data for FDA to address issues surrounding the presentation of risk information in DTC television ads. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the risks and benefits of prescription drugs to consumers as clearly and usefully as possible.
Use of Improved Information Technology and Burden Reduction
Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants will self-administer the Internet survey via a computer, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by collecting data from each participant only once and by keeping surveys to less than 30 minutes in the main study and 60 minutes in the pilot study.
Efforts to Identify Duplication and Use of Similar Information
We conducted a literature search to identify duplication and use of similar information. As noted above, there is some research on distraction in advertising, and FDA has conducted research on the MedWatch statement and superimposed risk text separately. However, to our knowledge there is no research on how these elements may combine in a DTC television ad to affect consumer understanding.
Impact on Small Businesses or Other Small Entities
No small businesses will be involved in this data collection.
Consequences of Collecting the Information Less Frequently
The proposed data collection is one-time only. There are no plans for successive data collections.
Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for this collection of information.
Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
In accordance with 5 CFR 1320.8(d), FDA published a 60-day notice for public comment in the FEDERAL REGISTER of November 29, 2013 (78 FR 71621). Two comments were received.
Comment 1. The first suggestion in this comment was to avoid biasing participants by ensuring that at the beginning of the study participants are not aware that (a) the study is being conducted by or for FDA and (b) the advertisements are the subject of interest in the study. We are aware of these issues and have designed the wording of the study materials accordingly.
The second suggestion was to increase the minimum display resolution from 1,280 x 1,024 to 1,920 x 1,080 pixels and the minimum computer monitor size from 20 inches to 24 inches. We agree that a larger screen is preferable and have changed the minimum specifications as follows: a display resolution of 1,920 x 1,080 pixels on a monitor of at least 23 inches measured diagonally.
The third suggestion was to exclude from the study individuals who wear progressive or other multifocal lenses and individuals with any form of strabismus or nystagmus. We will exclude individuals who wear bifocals or hard contact lenses while watching television; our response to Comment 2 below explains why these individuals need to be excluded. We do not believe we need to exclude participants who wear progressive or other multifocal lenses to collect usable data with the eye trackers in this study. Because we will use binocular tracking (tracking both eyes), we do not need to exclude individuals with strabismus or nystagmus; if we encounter either condition in one eye, we will track the other eye. In addition, we cannot test for or diagnose these conditions, and individuals may not know they have them, which would make excluding these individuals difficult.
Comment 2. The first request in this comment was to specify the study timeline, comment on whether the results of this study will be incorporated into the draft guidance, “Presenting Risk Information in Prescription Drug and Medical Device Promotion,” and state whether the draft guidance will be reissued for public comment. Regarding the timeline, data collection cannot begin until OMB approval is received; we estimate that data collection will be completed within a year after OMB approval. If the results of the study suggest that changes are needed to the draft guidance, we will consider that at the time, and the draft guidance will be reissued for public comment if changes are necessary as a result of the study.
The second request in this comment was to explain why individuals who wear bifocals or hard contact lenses would be excluded and to consider including such individuals in the study to avoid biasing the sample. First, only individuals who must wear bifocals or hard contact lenses to watch television will be excluded from the study; individuals who can wear regular glasses or soft contact lenses during the study may participate. There are two reasons to exclude participants who can only watch television with bifocals or hard contact lenses. The first is that bifocal glasses have ‘lines’ on them that affect the data recorded by the eye tracker’s camera: to record properly, the eye tracker must make accurate estimates of the pupil, and the ‘lines’ on the glasses distort these estimates. A similar problem exists with hard contact lenses, which are smaller than soft lenses and project sharp lines around their circumference. The second reason to exclude individuals who wear bifocals to watch television is that many people who wear bifocals move their heads up and down to obtain their best view of a particular target. This head bobbing also impairs eye tracking because the cameras must constantly adjust to head movement. If we do not screen for these conditions and several individuals cannot be tracked well, we will have to discard their data, which will affect both the study design (which assumes equal sample sizes across conditions) and the power of our statistical tests. In an effort to measure any sampling bias, we will move this question to the end of the pilot study screener so we can compare the demographic information of those who are excluded with those who are not.
The final suggestion in this comment was to create a more “real-world” environment in the study by using a 30-minute video clip instead of the proposed 2- to 5-minute clip. We understand the concern, but there are tradeoffs inherent in any study design. Although a 30-minute video clip may be a stronger proxy for “typical” TV viewing, it would require more resources and impose a greater burden on participants. We have taken steps to increase the ecological validity of the experiment. First, we have created ads that are highly realistic. Second, we will use a real TV show clip that is close to 5 minutes long, the length of a typical news story segment. Third, we will include two additional “real” advertisements rather than showing only the experimental ad.
External Reviewers
In addition to seeking public comment, OPDP sent materials to three individuals for external peer review in 2014 and received comments from each. These individuals are:
1. Mariea Hoy, PhD, Professor, The University of Tennessee, Knoxville, [email protected]
2. F.G.M. (Rik) Pieters, Professor, Tilburg School of Economics and Management, [email protected]
3. Thales Teixeira, PhD, Assistant Professor, Harvard Business School, [email protected]
Explanation of Any Payment or Gift to Respondents
We plan to recruit using market-rate incentives, which currently are $75 for 30 minutes of testing and $100 for 60 minutes of testing. Floaters, who must remain available for a large part of the day (up to four hours) to replace any no-shows, would receive $125. Following OMB’s “Guidance on Agency Survey and Statistical Information Collections,” we offer the following justification for our use of these incentives.
Data quality: Because providing a market-rate incentive should increase response rates, it should also improve the validity and reliability of the data beyond what could be achieved through other means. Previous research suggests that incentives may help reduce sampling bias by increasing participation rates among individuals who are typically less likely to take part in research (such as those with lower education; e.g., Guyll, Spoth, & Redmond, 2003). Furthermore, there is some evidence that incentives can reduce nonresponse bias in some situations by bringing in a more representative set of respondents (Castiglioni & Pforr, 2007; Singer, 2002; Singer, 2006). This may be particularly effective in reducing nonresponse bias due to topic saliency (Groves et al., 2006).
Burden on respondent: Eye tracking studies impose some burden on participants because they must report to a specific location at a specific scheduled time, where trained staff and the appropriate equipment are available. Participants must arrange for child care if they have children, and they must arrange for transportation and/or parking. Providing a market-rate incentive should help offset this burden.
Past experience: The contractor who will recruit participants and conduct the eye tracking study has, over several years, tested hundreds of participants in dozens of studies at various incentive levels. That experience indicates that the requested amounts are reasonable for recruiting a demographically broad sample and result in very few no-shows. Prior research corroborates this experience, finding that monetary incentives at similar market rates increased participation rates (Guyll et al., 2003).
Reduced survey costs: Recruiting with market-rate incentives is cost effective. Using market rates of $75 and $100, our participation rate is typically between 90% and 100%, meaning that there are very few no-shows. This incentive amount is cost effective because the cost of a no-show is roughly 10 times the cost of paying the incentive: when participants fail to show up in large numbers, the cost of the study increases significantly, as the study must still pay for multiple researchers, expensive equipment, and the rental of a facility that sits idle. Further, lower participation rates would likely extend the project timeline because participant recruitment, and therefore data collection, would take longer.
Assurance of Confidentiality Provided to Respondents
All participants will be provided with an assurance of privacy to the extent allowable by law. See Appendix A for the consent form.
No personally identifiable information will be sent to FDA. All information that can identify individual participants will be maintained by the independent contractor in a form that is separate from the data provided to FDA. For all data (interviews, videos, eye tracking data, and questionnaire data), alphanumeric codes will be used instead of names as identifiers. These identification codes (rather than names) will be used on any documents or files that contain study data or participant responses.
The information will be kept in a secured fashion that will not permit unauthorized access. Throughout the project, any hard-copy files will be stored in a locked file cabinet in the Project Manager’s office, and electronic files will be stored on the contractor’s password-protected server, which allows only project team members access to the files. The privacy of the information submitted is protected from disclosure under the Freedom of Information Act (5 U.S.C. 552(a) and (b)) and by part 20 of the agency’s regulations (21 CFR part 20). These methods will be approved by FDA’s Institutional Review Board (Research Involving Human Subjects Committee, RIHSC) and RTI’s Institutional Review Board before any information is collected.
All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with FDA’s Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).
Justification for Sensitive Questions
This data collection will not include sensitive questions. The complete list of questions is available in Appendix B.
Estimates of Annualized Burden Hours and Costs
12a. Annualized Hour Burden Estimate
FDA estimates the burden of this collection of information as follows:
Table 1.--Estimated Annual Reporting Burden

Eye Tracking Study of DTC Prescription Drug Advertisement Viewing | Number of respondents | Number of responses per respondent | Total annual responses | Average burden per response | Total hours
Pilot Study Screener | 200 | 1 | 200 | 0.03 (2 minutes) | 6
Main Study Screener | 2,000 | 1 | 2,000 | 0.03 (2 minutes) | 60
Pilot Study | 30 | 1 | 30 | 1 (60 minutes) | 30
Main Study | 300 | 1 | 300 | 0.50 (30 minutes) | 150
Total | | | | | 246
These estimates are based on FDA’s and the contractor’s experience with previous consumer studies.
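As a check on the table’s arithmetic, the following minimal sketch (in Python, purely illustrative) reproduces each row’s total hours and the 246-hour sum:

    # Reproduces the Table 1 burden arithmetic:
    # total hours = respondents x responses per respondent x hours per response
    rows = {
        "Pilot Study Screener": (200, 1, 0.03),   # 0.03 hours ~ 2 minutes
        "Main Study Screener": (2000, 1, 0.03),   # 0.03 hours ~ 2 minutes
        "Pilot Study": (30, 1, 1.00),             # 60 minutes
        "Main Study": (300, 1, 0.50),             # 30 minutes
    }
    for name, (n, r, hours) in rows.items():
        print(name, n * r * hours)                # 6, 60, 30, 150
    print(sum(n * r * h for n, r, h in rows.values()))  # 246.0 total hours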
Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs
There are no capital, start-up, operating or maintenance costs associated with this information collection.
Annualized Cost to the Federal Government
The total estimated cost to the Federal Government for the collection of data is $356,582 ($178,291 per year for two years). This includes the costs paid to the contractor to program the study, draw the sample, collect the data, and create and analyze a database of the results ($299,382). The contract was awarded as a result of competition; specific cost information other than the award amount is proprietary to the contractor and is not public information. The total also includes FDA staff time to design and manage the study, analyze the resultant data, and draft a report ($57,200; 10 hours per week for 2 years).
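The cost components sum as described; the brief sketch below verifies the arithmetic (the roughly $55 hourly staff rate is an inference from the stated figures, not given in the document):

    # Cost components described above (dollars).
    contractor_cost = 299_382        # programming, sampling, collection, analysis
    fda_staff_cost = 57_200          # FDA staff time over the project
    fda_staff_hours = 10 * 52 * 2    # 10 hours/week for 2 years = 1,040 hours
    # Implied staff rate (an inference, not stated in the document):
    print(fda_staff_cost / fda_staff_hours)   # 55.0 dollars/hour
    total = contractor_cost + fda_staff_cost
    print(total, total / 2)                   # 356582 total; 178291.0 per year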
Explanation for Program Changes or Adjustments
This is a new data collection.
Plans for Tabulation and Publication and Project Time Schedule
Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See section B below for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
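As one illustration of the planned analytic approach (not the study’s actual analysis code), a one-way analysis of variance comparing risk recall across distraction conditions could look like the following sketch; the variable names and data are hypothetical placeholders, not study results:

    # Hypothetical one-way ANOVA comparing risk-recall scores across
    # distraction conditions (data are illustrative, not study results).
    from scipy import stats

    recall_low_distraction = [8, 7, 9, 6, 8]
    recall_moderate_distraction = [6, 7, 5, 6, 7]
    recall_high_distraction = [4, 5, 3, 5, 4]

    f_stat, p_value = stats.f_oneway(
        recall_low_distraction,
        recall_moderate_distraction,
        recall_high_distraction,
    )
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")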
Table 2. – Project Time Schedule

Task | Estimated Number of Weeks after OMB Approval
Pretest data collected | 6 weeks
Summary of pretest data completed | 14 weeks
Main study data collected | 26 weeks
Final methods report completed | 38 weeks
Final results report completed | 48 weeks
Manuscript submitted for internal review | 56 weeks
Manuscript submitted for peer-review journal publication | 64 weeks
Reason(s) Display of OMB Expiration Date is Inappropriate
No exemption is requested.
Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions to the certification.
References
Cassidy, G., & MacDonald, R. A. R. (2007). The effect of background music and background noise on the task performance of introverts and extraverts. Psychology of Music, 35, 517-537.
Castiglioni, L., & Pforr, K. (2007). The effect of incentives in reducing non-response bias in a multi-actor survey. Presented at the 2nd annual European Survey Research Association Conference, Prague, Czech Republic, June 2007.
Groves, R., Couper, M., Presser, S., Singer, E., Tourangeau, R., Acosta, G., & Nelson, L. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, 70(5), 720-736.
Guyll, M., Spoth, R., & Redmond, C. (2003). The effects of incentives and research requirements on participation rates for a community-based preventive intervention research study. Journal of Primary Prevention, 24(1), 25-41.
Singer, E. (2002). The use of incentives to reduce nonresponse in household surveys. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 163-178). Retrieved from http://www.isr.umich.edu/src/smp/Electronic
Singer, E. (2006). Nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 637-645.