
FOOD AND DRUG ADMINISTRATION

Consumer and Healthcare Professional Identification of and Responses to Deceptive Prescription Drug Promotion

0910-NEW

SUPPORTING STATEMENT

Terms of Clearance – None.

  A. Justification

  1. Circumstances Making the Collection of Information Necessary

Regulatory Background. Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act. Under the FD&C Act and implementing regulations, promotional labeling and advertising about prescription drugs are generally required to be truthful, non-misleading, and to reveal facts material to the presentations made about the product being promoted (see FD&C Act sections 201(n), 502(a), and 502(n) (21 U.S.C. 321(n) and 352(a) and (n)); see also 21 CFR 202.1).


Rationale. Prescription drug promotion sometimes includes false or misleading (collectively, deceptive1) claims, images, or other presentations; for instance, representations that a drug is more effective or less risky than is demonstrated by evidence. A number of empirical studies have examined the occurrence and influence of deceptive promotion, both in regard to prescription drugs2 and other products.3 No research to our knowledge, however, has investigated the ability of consumers and healthcare professionals (HCPs) to independently identify and discount deceptive prescription drug promotion.


The ability of consumers and HCPs to identify deceptive prescription drug promotion has important public health implications. If unable to identify deceptive promotion, consumers may ask their HCPs to prescribe specific drugs that they would not otherwise request. Likewise, HCPs who are unable to identify deceptive promotion may prescribe specific drugs that they would not otherwise prescribe. On the other hand, if consumers and HCPs are able to identify deceptive promotion, they may appropriately discount or disregard such information in their medication decisions, and perhaps even report deceptive promotion to appropriate government regulators who can take corrective action.


Reports of deceptive promotion are useful to FDA because they allow investigators to focus their efforts in an era where the amount of promotion far exceeds the resources available to review everything. The FDA Bad Ad program, for example, encourages HCPs to report deceptive prescription drug promotion,4 a goal which requires that HCPs successfully identify such promotion when it appears in the course of their duties. Likewise, similar programs could be implemented for consumers to report deceptive prescription drug promotion to FDA.


The mission of the Office of Prescription Drug Promotion (OPDP) within the FDA is to protect the public health by helping to ensure that prescription drug promotion is truthful, balanced, and accurately communicated, and to guard against deceptive promotion through comprehensive surveillance, enforcement, and educational programs. As part of this mission, it is critical that OPDP adequately understand the capacity of consumers and HCPs to detect false and misleading claims as well as these populations’ processing of such claims. This understanding will help OPDP to identify best practices for addressing false and misleading claims in prescription drug promotion. The research described here will provide evidence to inform consideration of the approaches best suited to fulfill OPDP’s mission to protect the public from deceptive promotion.


  2. Purpose and Use of the Information Collection

This project will examine the ability of consumers and HCPs to identify deceptive prescription drug promotion, and the influence of such promotion on their attitudes and intentions toward the promoted drug. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the risks and benefits of prescription drugs to consumers in a way that is clear, useful, and non-misleading. The results from this research will be used by FDA to inform its understanding of direct-to-consumer (DTC) promotion, inform regulatory policy, and may also help to identify areas for further research.



  3. Use of Improved Information Technology and Burden Reduction

Automated information technology will be used in the collection of information for this study. The contracted research firm will collect data through Internet administration. Participants will self-administer the survey instrument via a computer, which will record responses and provide appropriate probes when needed. FDA estimates that 100% of the respondents will use electronic means to fulfill the agency’s request. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each respondent, and by keeping surveys to 30 minutes or less.



  4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information. We conducted a systematic review of the scientific literature by locating relevant articles through keyword searches using popular databases such as PubMed and PsycInfo. We also identified relevant articles from the reference list of articles found through keyword searches. We did not find duplicative experimental work on the present topic.


  5. Impact on Small Businesses or Other Small Entities

No small businesses will be involved in this data collection.

  6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.



  7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.

  8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency


In the Federal Register of January 4, 2017 (82 FR 855), FDA published a 60-day notice requesting public comment on the proposed collection of information. Comments received along with our responses to the comments are provided below. Comments that are not PRA-relevant or do not relate to the proposed study are not included. For brevity, some public comments are paraphrased and therefore may not reflect the exact language used by the commenter. We assure commenters that the entirety of their comments was considered even if not fully captured by our paraphrasing. Question numbering here (e.g., Q30) reflects numbering from the original draft questionnaire, shared by request at the time of the 60-day notice. The following acronyms are used here: FRN = Federal Register Notice; DTC = direct-to-consumer; HCP = healthcare professional; FDA and “The Agency” = Food and Drug Administration; OPDP = FDA’s Office of Prescription Drug Promotion.



Comment 1, regulations.gov tracking number 1k1-8ubr-t0de (verbatim with header and footer language removed):


We are supportive of the study, but have the following recommendations.

We propose that additional study arms be included that explore various scenarios/websites which test both the number of deceptive claims in conjunction with the degree of deception. Currently, the study is structured to measure the impact of the number of deceptions in a promotional website (Study 1) separately from the degree of the deception (explicit vs implicit, in Study 2). However, it would also be beneficial to measure other combinations to see which factor or combination of factors had the greatest impact on HCPs and Consumers’ overall perception of the website. For example, a single explicit lie may be more impactful than 15 implied deceptions. The current study will not be able to draw any conclusions regarding that scenario. Testing additional combinations of the number of deceptions in a website along with deceptive claims of varying severity would enable a better comparison and understanding of what ultimately drives HCPs and Consumers’ perception of deceptive prescription promotion.


Response to Comment 1: We thank the commenter for their support and for this suggestion. While certainly a viable research idea, cost implications of creating and testing additional stimuli for this purpose bar us from pursuing it. We encourage researchers to pursue this idea in future research.


Comment 2, regulations.gov tracking number 1k1-8v15-11b6 (some comments summarized for brevity; others provided verbatim):


a. Given the stated purpose of the pretests, sample size can be substantially reduced, and revised to a qualitative approach.


Response to Comment 2a: In addition to the quantitative pretest, we have already conducted a qualitative test of stimuli and questionnaire materials via cognitive interviews. Changes based on cognitive interviews are reflected in our updated survey materials. In regard to sample size, the number of pretest participants per experimental condition (n = 50) was chosen based on a power analysis, and is considered to be the minimum effective size to allow for assessment of the quantitative outcomes specified in the 60-day FRN. Examples of quantitative outcomes include assessment of response rates and timing of the survey.
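
The pretest cell size can be illustrated with a conventional two-group power calculation. The sketch below is offered only as an illustration: the two-sample t-test framing, the assumed effect size (Cohen's d = 0.57), the two-sided alpha of 0.05, and the statsmodels implementation are assumptions made here, not a description of the contractor's actual power analysis.

```python
# Illustrative power check for n = 50 participants per experimental condition.
# Effect size, alpha, and the two-group t-test framing are assumptions for this sketch.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect a medium-to-large difference (Cohen's d = 0.57) between two
# conditions of 50 participants each, with two-sided alpha = 0.05.
power = analysis.solve_power(effect_size=0.57, nobs1=50, alpha=0.05, ratio=1.0)
print(round(power, 2))  # approximately 0.80 under these assumptions
```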


b. To reduce bias, add a screening question to exclude respondents who are opposed to taking prescription medicines.


Response to Comment 2b: The survey length does not allow for a full exploration of attitudes toward prescription drug use. However, to assess opposition to prescription drug use more generally, we added one item to the survey that has been used successfully in previous FDA surveys. This item will be used in the pretest survey as a potential covariate and may or may not be retained in the main study survey depending on its performance.


The item reads: “In what situations would you consider taking prescription drugs?”

      • I would never take them.

      • I would take them only for serious health conditions.

      • I would take them for moderate and serious health conditions.

      • I would take them for most health conditions, including minor problems.


c. Consider revising item scales to include a mid-point to allow respondents to express neutral views (unless objective is to force a selection).


Response to Comment 2c: Given the focus of the questions, we believe that offering a neutral response option is not necessary to measure opinions and attitudes accurately. Consequently, our objective is to force a selection and have participants make at least a weak commitment in either a positive or negative direction. Of concern is that offering a neutral midpoint could potentially encourage “satisficing”--cuing participants to choose a neutral response because it is offered.5 Additionally, providing a midpoint leads to the loss of information regarding the direction in which people lean.6 Research has found that neither format (either with or without a neutral point) is necessarily better or produces more valid or reliable results.7 Instead, it should be left to the researcher to determine the goals of the study. During cognitive testing, a majority of participants were satisfied with the response options and all participants felt comfortable choosing a response in the absence of a midpoint. Use of a midpoint is an issue we have examined in previous studies and we determined that we achieve valid and reliable responses without a midpoint. To increase consistency with measures used in previous studies, and in support of the arguments presented above, we are opting to exclude a midpoint. Finally, if a participant does not feel that they can choose a response because of a lack of a neutral option, they will be able to skip the question.


d. In Study 1, remove Q21 and Q30 due to potentially leading nature of items.

Response to Comment 2d: To avoid redundancy, we dropped Q21. In Q30, we ask participants to click on anything they think is misleading, and we note that if they do not think anything is misleading, they can click “none.” Consequently, we are not strongly presupposing there are misleading claims. To address some of the wording concerns for this item, we changed the question to ask about inaccurate information instead of misleading information and we moved the “None” response to be more prominent above the image.


Comment 3, regulations.gov tracking number 1k1-8v3z-nzst (summarized for brevity):

The commenter expresses concern about the practical utility of the research, for reasons covered by comments 3a through 3e. In the event that FDA continues with the research, the commenter makes several recommendations, covered by comments 3f through 3cc. Comments 3f through 3i concern the study stimuli, comment 3j pertains to subject recruitment, and comments 3k through 3cc concern the study questionnaires.


a. The identification of deceptive promotion is FDA’s assigned responsibility, not the duty of HCPs and consumers.


Response to Comment 3a: As discussed above, the mission of OPDP within the FDA is to protect the public health by helping to ensure that prescription drug promotion is truthful, balanced, and accurately communicated, and to guard against false and misleading promotion through comprehensive surveillance, enforcement, and educational programs. As part of this mission, it is critical that OPDP adequately understand the capacity of consumers and HCPs to detect false and misleading claims as well as these populations’ processing of such claims. This understanding will help FDA/OPDP to identify best practices for addressing deceptive claims in prescription drug promotion. Moreover, we note that sponsors are not generally required to submit promotional pieces to FDA prior to dissemination, and limited resources prevent OPDP from reviewing all promotional materials in the marketplace. Voluntary HCP and consumer reporting of false and misleading promotional pieces contribute to the accomplishment of FDA/OPDP’s mission.


b. Deceptive drug promotion is not a prevalent issue that requires further studying.

Response to Comment 3b: Numerous studies have examined the prevalence of false or misleading claims and presentations in DTC advertising, and the FDA frequently issues compliance letters addressing false and misleading claims and presentations.8 Consequently, FDA disagrees with this assertion.


c. FDA’s proposed studies fail to acknowledge the role of the HCP as the “learned intermediary.”


Response to Comment 3c: The present research takes into consideration both consumer and HCP responses to false or misleading promotion. Consumers often wish to participate in shared decision-making with HCPs when selecting prescription drugs and may request specific prescription drugs from their HCPs based on promotions they have seen in the marketplace. Because information consumers receive through DTC prescription drug promotion can impact these requests, it is important to investigate consumers’ ability to assess prescription drug product efficacy and risks as conveyed in promotional pieces. And although HCPs have medical training and clinical expertise, we are not aware of research that investigates whether such training and expertise translates into an ability to detect false or misleading promotion in the marketplace. Consequently, the present research investigates both consumer and HCP ability to identify and discount deceptive prescription drug promotion.


d. The proposed studies are duplicative of recent FDA research concerning HCP willingness to report deceptive promotion.


The commenter suggests that if FDA wishes to investigate consumer reporting, the Agency should create two separate studies. The first should gauge consumer aptitude in identifying false or misleading prescription drug promotion. Depending on the results of the first study, the Agency could potentially undertake a second study, surveying subject willingness to report false or misleading drug promotion. This approach would avoid potential error associated with influence of earlier questions regarding deception on later questions regarding reporting.


Response to Comment 3d: FDA conducted a survey of HCPs in 2013 in which respondents were asked about their familiarity with the Bad Ad program and willingness to report misleading advertising.9 The current study is quite different in scope from the previous research: it uses an experimental design that will enable us to determine whether HCPs can detect misleading advertising, not just whether they are willing to report it. We do include questions at the end of the survey that are similar to those in the 2013 survey, but the purpose here is to connect those responses to HCP ability to detect misleading advertising. Moreover, our use of similar questions reflects a well-established technique in scientific research, used to determine whether previous findings can be replicated.


In response to the second comment recommending division of this project into two separate studies, we believe that proposal to be an inefficient use of resources. Regarding concerns about the order of questions affecting subsequent responses, we chose to distribute deception-related items throughout the survey, rather than ask all deception items first and then other outcome measures second. Also, we include “masking” items on the same screen as deception-related items to mask the intent of the questions. The results from cognitive interviews confirm that this approach was successful. Consequently, we have no evidence to suggest that earlier questions related to deception will influence subsequent questions related to reporting.


e. FDA already has created and implemented consumer programs to report deceptive promotion.


Response to Comment 3e: The proposed research can inform program needs at present, whether such needs involve reevaluation of past programs such as EthicAd, or extensions of existing programs such as the Bad Ad program or other actions.


f. Validating Stimuli. It is not clear how the Agency will determine that a study stimulus is deceptive. FDA notes in the PRA Notice that the “term deceptive is not meant to imply equivalence (or lack thereof) with use of the same term by the U.S. Federal Trade Commission.” It seems unrealistic for FDA to conduct research with primary care physicians (PCPs) and consumers who do not understand the Agency’s standards or have access to the training and resources of an FDA reviewer.


Further, except for literal falsity, whether a particular communication is false or misleading must be based on empirical evidence. Promotional pieces do not exist in a vacuum. These communications interact with the overall health information ecosystem, including the Internet. FDA needs to first validate that the study stimuli are indeed deceptive before including the stimuli in either proposed study with the presumption that they are deceptive.


Response to Comment 3f: Our reference to the Federal Trade Commission’s (FTC) definition of the term “deceptive” was offered as a point of clarification for our use of the same term as shorthand within the FRN for the longer phrase “false or misleading.” In other words, by using “deceptive” as a term of art in this narrow context, we are not evoking the specific meaning and interpretation of the same term used by the FTC.


We disagree with the suggestion that participants need access to the training and resources of an FDA reviewer before FDA can evaluate their ability to identify deceptive promotion. As further explained below, FDA is not asking participants to determine whether nuanced text meets the regulatory standards for deceptive promotion; instead, we are presenting material that both meets the regulatory standard for deceptive promotion and could be identified as such by consumers or healthcare providers with no prior experience with the regulations.


We agree with the second point about the need to validate that the study stimuli are deceptive, and we are doing this in several ways for this study. For example, some of the specific claims used in our experimental manipulations are established as being factually incorrect because the promoted drug is a member of a class of drugs for which the claim could not be true (e.g., describing a serotonin-norepinephrine reuptake inhibitor (SNRI), which is required to have a black box safety warning for suicide risk, as lacking in significant safety concerns). Other claims or presentations in the stimuli are based on similar claims cited as violative in past warning letters or that unambiguously fail to follow the law (e.g., minimizing presentation of important safety information, such as a black box warning, by setting it in small, low contrast type). For one manipulated claim, we provided participants with access to the background information needed to identify the presentation as deceptive in the form of a footnote. In the case of Study 2, where a crucial aspect of the experimental design is to test an implicitly misleading claim in relation to an explicitly false claim and against a nonviolative control, we tested candidate claims in cognitive interviews to verify that the audience tended to interpret the implicit claims as intended.


Further, it is important to note that we included a control condition in both studies, which will enable us to compare responses to a website that has no violations. The control conditions serve as a baseline for perceived deception, which will also allow us to examine how consumers and providers perceive websites with no violations.


g. Media. The Agency proposes using Web sites as the only stimuli. FDA should consider testing additional non-electronic media, including DTC and HCP print promotional materials. The Agency should also base the promotional stimuli on realistic “mock” package insert (PI) documents. The commenter requests that FDA make these materials available for public comment.


Response to Comment 3g: Previous research on DTC and HCP-directed prescription drug promotional materials has, to varying extents, included all available media formats, and assessment of outcomes using these formats has proven useful. We agree that investigating recognition of misleading prescription drug information in multiple formats--including print, television, web, and other modes--would be valuable. However, no single study can effectively examine all promotional formats or presentations, and we chose to focus on branded drug websites for several reasons. First, websites, while not necessarily more or less useful than any other format, are prevalent and important in an age when a large segment of the consumer population is connected to the Internet and known to seek prescription drug information online. For example, online promotion is the fastest growing category of DTC drug marketing, and branded websites account for the largest share of this category.10 Second, almost all print and television ads for prescription drugs encourage viewers to visit branded websites for more information, making these sites an important extension of promotion in other formats.11 Third, FDA has issued multiple warning and notice of violation letters for branded drug websites that incorrectly communicate information to visitors, suggesting that a proportion of such sites may present misleading information. Fourth, websites are a relatively newer promotional format than television and print, and consequently warrant further study. There has been significantly less research on consumer and provider interpretation of branded drug websites than on other promotional formats,12 and the extant research suggests that some websites still do not present a fair balance of risk and benefit information.13


Based on these considerations, we believe that focusing this study on branded drug websites will be the most effective use of the FDA’s limited resources. The fictitious websites included in this study were modeled on real products (including the package insert) to ensure realism and relevance.


In response to the request to share stimuli, we generally do not share stimuli before the study has been conducted to avoid possible inadvertent publication and therefore contamination of the subject pool, which would compromise the research.


h. Disease States. The Agency’s two studies propose testing stimuli concerning chronic pain or obesity. The commenter suggests that FDA instead consider testing stimuli featuring a fictitious product for a disease state which involves more complex safety information. Such stimuli would be more reflective of the current healthcare environment, where product labeling is increasingly complex.


Response to Comment 3h: The fictitious websites used in this research do include complex safety information, reflecting the risks of real chronic pain and obesity products in the marketplace. For example, one of the fictitious products includes a black box warning, and the other includes severe and complex safety information, such as potential drug interactions and contraindications.


i. Study 1 Stimuli. In Study 1, the “degree of deception will be manipulated over three levels by altering the number of deceptive claims (none, fewer, more).” FDA states that the deceptive claims will include “various types of violations.” Under the potential design, the most egregious deceptive claim(s) might only be contained in the “more” level. This could potentially skew study results, as subjects would be more likely to identify such egregious claims. FDA should develop a scale that is used to determine the egregiousness of the deception. The scale should include specific examples of egregiousness by category.


Response to Comment 3i: Although some claims do not overlap between the “fewer violations” and “more violations” conditions, we strategically manipulated the stimuli so that one of the more “egregiously” deceptive claims (which appears in a callout bubble) is present in both conditions. There is also overlap in those two conditions for another manipulated element, where we minimized the prominence of the Important Safety Information. Additionally, we included an item (Q30) that would provide participants the opportunity to click on anything they think may be inaccurate. Using this question, we would expect that the more egregious claims will be chosen more often. In this way, this item would serve as a proxy measure of egregiousness. Further, our various questions that ask about perceived deceptiveness of the Web sites will provide an initial assessment of the degree of deception--with higher scores representing greater perceived deception. Because of space constraints on the survey, we are unable to ask participants to rate the egregiousness of the violative claims. Although we appreciate the value that developing a scale to determine the egregiousness of each of the deceptive claims would add, adopting this suggestion in the present research would be outside of the scope of this study and would have an impact on overall cost considerations.


j. FDA proposes that the HCP samples for both studies will only include physician subjects. The commenter believes the samples should include other types of HCPs, including nurse practitioners, physician assistants, and pharmacists. As the Agency’s recent research showed, “Nurse practitioners and physician assistants tended to see the [Bad Ad] program as more useful than [PCPs] and specialists. They also reported a greater likelihood of reporting false or misleading advertising in the future.” Given these findings, it would be helpful also to investigate the ability of other HCPs independently to identify false or misleading promotion.


Additionally, during the recruiting process, FDA should ensure enrollment of a diversity of subjects across demographic categories. Previous research indicates that certain demographic groups respond to drug promotion in different manners. Uneven representation within certain categories could potentially skew study results.


Response to Comment 3j: FDA acknowledges and agrees with the assertion that including other types of HCPs in this research would provide value. Yet, sampling from these additional groups requires funding that may not be justified in this initial investigation of the topic area. Nonetheless, we do intend to strive for diversity in both our HCP and consumer samples. HCPs and consumers will vary in terms of age, race, and ethnicity, and consumers will additionally vary in terms of their education level.


k. Leading Questions. The overall format of the questionnaires is quite leading. As previously mentioned, questions asking whether sample advertisements are “deceptive,” “misleading,” “bad,” and “not believable” could easily pollute data from later questions inquiring whether a subject would potentially report such promotion to FDA. The Agency should state all questions in an objective manner.


Response to Comment 3k: Leading questions are those that “suggest a possible answer or make some responses seem more acceptable than others.”14 In keeping with standard practice for balancing the valence of attitudinal questions, we have included a mix of positive and negative statements in the questionnaire. In fact, there are presently more positively framed items than negatively framed items. Moreover, the slider questions referenced by the commenter are semantic differentials, which show both a negatively framed word and its positive counterpart on opposite ends of the response scale (e.g., “deceptive/truthful,” “misleading/accurate,” “not believable/believable”). We do not see how these items could be construed as leading because both the positive and negative frames are presented. Finally, as stated in our response to Comment 3d, we have evidence to suggest that we successfully masked the true focus of the questionnaire, so the deception-focused items should not bias subsequent responses.


l. Recall Questions. Certain questions (e.g., Q1-Q3 of Study 1, Q4 of Study 2) ask test subjects to recall specific risks and side effects of the featured drug products. Such questions are not valid instruments to assess whether a subject perceives a stimulus to be false or misleading. Recall is likely influenced by the presentation of the content (e.g., size, visual display), not by the content itself. This research, however, is not material to the stated purpose of the studies. The recall questions should be omitted from the questionnaires.


Response to Comment 3l: Q1-Q2 of Study 1 measure risk recall and risk recognition. These are important outcome measures for our study because we vary how the risks are presented in the different experimental conditions, minimizing them (in terms of size and format) in the violative conditions. Including these risk recall and recognition measures allows us to test whether minimizing the risks influences participants’ ability to remember them. Further, because minimization of risk is a misleading violation in its own right, reduced risk recall or recognition among participants in the violative conditions would provide relevant context for interpreting more direct measures of deception. Q4 of Study 2 will enable us to determine whether participants can recall seeing the disclosure statements in the Web sites. This is relevant to the question of whether participants identify false or misleading content because the disclosure statement provides information that would help participants assess the truth of the headline claim. None of these items are intended to be direct measures of whether the stimuli are misleading; instead, they are outcomes that may be affected by misleading content.


m. Repetitive Questions. The questionnaires are repetitive in nature. For example, in Q4-Q11 of Study 1, subjects are asked a series of eight questions to measure “Perceived Website Deception.” The questions are redundant (e.g., Believable/Not believable, Truthful/Deceptive, Factual/Distorted, Accurate/Misleading). This duplication may cause the subject to believe the promotional material is actually false or misleading.


Response to Comment 3m: The use of multiple items to tap into a single construct is considered a best practice in social science research, particularly when assessing complex psychological constructs like those in this survey. Our intent is to combine responses to these items into a single composite score. Our cognitive interviewing of these items suggests that they have slightly different meanings for many participants and thus are not viewed as completely redundant. Further, there is no evidence to suggest that the use of multiple items to assess this construct led participants to believe that the promotional material was actually false or misleading, or that this series of questions was designed to capture whether they thought the website was misleading. Moreover, we masked the true intent of these items by including other bipolar response options unrelated to misleadingness.


We dropped Q21 to reduce redundancy across items.
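
As noted in the response to Comment 3m, responses to the semantic-differential items will be combined into a single composite score. The minimal sketch below illustrates one way such a composite might be computed; the item names, the 1-6 coding, and the pole orientation are assumptions for illustration and may differ from the fielded survey.

```python
# Minimal illustration of composite scoring for the perceived-deception items.
# Item names, the 1-6 coding, and pole orientation are assumed for this sketch.
import pandas as pd

# One row per respondent; higher values = closer to the deceptive/misleading pole.
items = pd.DataFrame({
    "not_believable": [2, 5],
    "deceptive":      [1, 6],
    "distorted":      [2, 5],
    "misleading":     [1, 6],
})

# Composite "Perceived Website Deception" score: the mean of the items, so higher
# scores represent greater perceived deception.
items["perceived_deception"] = items.mean(axis=1)
print(items)
```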


n. Definitions and Terms. The questionnaires do not define certain key terms (e.g., effectiveness, risk, misleading). Subjects, especially consumers, may interpret these terms based on different standards. FDA might consider providing user-friendly definitions for the consumer subjects. The Agency should also utilize patient-friendly medical terms, rather than complex terminology (e.g., glaucoma, hepatic failure, SNRI).


Response to Comment 3n: Sophisticated medical terminology will only be used in the HCP survey. To use the example of “hepatic failure,” consumers will instead see “decreased liver function.” We have verified in cognitive interviews that preceded this study (and also in our previous scale development efforts) that the terminology used is generally well understood by our participant sample.


o. Sliding Scale Format. FDA should consider replacing the sliding scale format with a “Yes-No-I Don’t Know” scheme. The sliding-scale format is at times confusing in form and could potentially introduce error. Alternatively, the Agency should consider changing the sliding scale to an odd number system to permit a “neutral” response and/or use a variation of the Likert scale.


Response to Comment 3o: Use of a sliding scale allows for greater precision and variation in response, as opposed to a “Yes-No-Don’t Know” format. Research suggests that scales with five to seven points are more valid and reliable than those with only two to three categories.15 Additionally, we tested the sliding-scale format in previous cognitive interviews and found that it worked well; participants had little difficulty understanding this format. Further, as noted in the response to Comment 2c, we want to avoid leading participants to choose a “Don’t know” response; providing this option may cue participants to select this response and avoid deeper thinking on the topic. Regarding the use of an even numbered scale rather than odd numbered scale, please see our response to Comment 2c.


p. An “FDA employee” category should be added to Question S2 [Consumer] of Study 1. These individuals should also be terminated from the study.


Response to Comment 3p: Consistent with previous surveys, we added a category to exclude employees of the Department of Health and Human Services, which includes employees of FDA.


q. Question S3 [Consumer] of Study 1 should be rewritten as follows: “Have you ever been diagnosed with chronic or long-lasting pain (more than aches and pains that go away quickly or are minor)?” (emphasis added). This change aligns the question with the description of the study in the PRA Notice: “Study 1 will sample consumers with diagnosed chronic pain that has lasted at least 3 months.”


Response to Comment 3q: We did not restrict the sample to people who have been diagnosed with chronic pain because the prevalence of diagnosed chronic pain is too low, which would increase the costs of the study. Using our current screening questions, we achieve an 11 percent prevalence rate.16 The objective of our sampling plan is to target people who would be in the audience for the ads; being diagnosed is not a criterion.


r. Question S5 [Consumer] of Study 1 should be eliminated. Whether a subject still has chronic pain has no bearing on the study’s purpose. Also, consider eliminating Question Q12 of Study 1. This question would only apply to those consumers currently being treated for chronic pain, not those who previously had the condition.


Response to Comment 3r: Assessing whether participants currently experience chronic pain helps to ensure a motivated sample for which the fictitious medication would potentially be of interest. Originally, we included participants who reported suffering from chronic pain in the past, but we did not require that they currently suffer from chronic pain (although we had an item that asked, “Do you still have this chronic or long-lasting pain?”). After further consideration, we opted to revise the screener so that participants remain eligible if (a) they say “Yes,” they still have chronic pain, or (b) they say “No” (or do not answer) about still having chronic pain but are currently taking a prescription drug for chronic pain. This also makes the inclusion criteria for Study 1 consistent with the inclusion criteria for Study 2, which requires that a person currently suffers from the medical condition of interest. Consequently, Q12 of Study 1 will be relevant for all consumers completing the questionnaire.


s. Consider revising Question S5 [PCP] of Study 1 to inquire: 1) what percentage of the PCP’s patients has each condition, and 2) how long the PCP has treated patients with each condition. A PCP’s familiarity and experience with the treatment of the particular condition provides context and serves as a reference for detecting any potential deception in promotional materials.


Response to Comment 3s: We appreciate that these additional questions could provide valuable context. However, we have found in past work that HCPs often have difficulty recalling precise information about their practice, so our approach is to assess this information more generally. To provide some additional context, we added the following two items to the pretest survey:

  • Rate your current knowledge about prescription drugs for [weight loss/chronic pain] on a scale of 0 to 10, where 0 means knowing nothing and 10 means knowing everything you could possibly know about the topic.


  • [If “chronic pain”] Approximately what proportion of your current patients do you treat for chronic pain? (None or very few have chronic pain; a small proportion have chronic pain; about one-half have chronic pain; a large proportion have chronic pain; almost all have chronic pain).


t. Question Q2 of Study 1 should have a third answer choice: “Don’t remember.”

Response to Comment 3t: In cognitive interviews, very few people chose this response option. Moreover, in previous research, because so few people chose it, we often ended up collapsing this option with the response indicating that the referent was not mentioned in the website.


u. Questions Q5 and Q7 of Study 1 should be deleted. Whether a subject considers the website to be “Bad/Good” or “Boring/Interesting” has no relevance to FDA’s study goals.


Response to Comment 3u: These items help to mask the overall intent of the other items in this series (e.g., to assess whether the website is misleading). Also, they provide useful information about personal relevance and attitude toward the website, which we can use as potential covariates.


v. The commenter recommends revising Question Q17 of Study 1: “How likely are you to ask your doctor about [Drug]?”


Response to Comment 3v: The intent of this item is to assess information-seeking more broadly, which can include, but is not limited to, asking one’s doctor about a drug. While assessing how consumers access information from various sources (doctor, family members, etc.) is of interest, our survey does not have room to ask about each source individually. Given that there are multiple sources of information a consumer might consult for more information on a drug, we decided to address information-seeking more broadly with one question, rather than attempting to list all possible options.


w. Questions Q19 and Q21 of Study 1 should be removed. These questions require participants to guess whether the material would mislead people or “takes advantage of less experienced” consumers/providers. FDA should only ask participants about individual perception. Additionally, it is unclear what the Agency means by “takes advantage of less experienced” consumers/providers.


Response to Comment 3w: To avoid redundancy, we dropped Q21. We retained Q19 to ensure assessment of a critical construct. Because deception is a complicated construct to measure, we included a variety of items to capture its various dimensions. Based on a review of the literature, we use a variety of relatively sensitive measures of the ability to detect misleading advertisements to ensure we capture potentially meaningful variance. The inclusion of Q19 and Q21 was based on findings from the literature review, which included measures that tapped into third-person perception17--among the most widely replicated phenomena across media contexts,18 including DTC prescription drug advertising.19 By including an item that taps into third-person effects, we will be able to explore whether consumers are more likely to think that others will be misled, even if they do not think they themselves are susceptible to being misled by the website.


x. Question Q24 of Study 1 should be one of the first questions of the survey. A subject will likely answer this question most accurately immediately after reviewing the website and before answering other questions that could influence this answer.


Response to Comment 3x: To avoid bias, the most critical questions should appear as early as possible in the survey. Although the current question ordering may bias responses to the attention item, this outcome is less consequential, and we chose instead to prioritize the key dependent variables (placing measures that rely on memory at the start of the survey). Consequently, we intend to retain the current order of questions in the survey.


y. The box for Question Q30 of Study 1 prompts the subject to respond, even if the individual did not select anything in the website as false or misleading. FDA should consider using a tiered response:


Q30a: Did you notice anything on the website that is false or misleading?

1. Yes (go to question 30b).

2. No (go to question 31).

Q30b: What information was false or misleading? [open box comment]


Response to Comment 3y: A programming note was missing in the original survey draft. The current survey programming reflects the approach suggested by the commenter.


z. The commenter recommends revising Question Q32 of Study 1 to: “If there was a way to report misleading prescription drug websites or ads to the Food and Drug Administration (FDA) by sending an email or calling a toll-free phone number, how likely would you report misleading material?”


Response to Comment 3z: We have adopted this recommendation in the revised survey.

aa. As previously stated in footnote 21, Questions Q34, Q41, and Q42 of Study 1 should be deleted.


Footnote 21 reads: For example, FDA completed a HCP study incorporating information asked at Q34, Q41, and Q42 of Study 1. It is not clear why the Agency is undertaking another study focusing on such questions. These questions should be eliminated.


Response to Comment 3aa: Please see our response to Comment 3d.


bb. Question S1 of Study 2 should be rewritten as follows: “Have you ever been diagnosed with obesity, defined as body mass index greater than or equal to 30?” This change aligns the question with the description of the study in the PRA Notice: “Study 2 will sample consumers diagnosed with obesity….”


Response to Comment 3bb: For this study, our intent was to target people that would be in the audience for these ads, and being diagnosed is not a requirement for personal relevance. The target audience is consumers with a body mass index greater than or equal to 30.


cc. The “Debriefing” does not accurately portray the purpose of the studies. The purpose of the studies is not “to learn about how people feel about information provided in prescription drug websites aimed at consumers/providers and how people use this information to understand how well prescription drugs work.” The commenter recommends that the “Debriefing” read: “The purpose of this study is to investigate the ability of consumers/providers to identify false or misleading prescription drug promotion and how likely consumers/providers are to report false or misleading prescription drug promotion to regulatory authorities.”


Response to Comment 3cc: We have adopted this recommendation.

Comment 4, regulations.gov tracking number 1k1-8v3r-jacf (summarized for brevity):

a. The commenter expressed concern about the practical utility of the consumer-oriented arms of the research. Namely, if consumers are unfamiliar with the prescribing information for the product, it is unclear on what basis they can determine a claim to be deceptive.


Response to Comment 4a: Please see our response to Comment 3f, which addresses a similar theme and may provide useful context. The concern addressed by the commenter is framed as a limitation of the study and appears to question the relevance of examining consumers’ ability to detect deception in prescription drug promotion. We believe the opposite is correct: the merit of conducting the study is reinforced by the observation that it is unclear how consumers can determine a claim to be deceptive if they lack relevant background information or knowledge about an advertised drug. While prescription drug promotions are required to present truthful and non-misleading information, some prescription drug promotion nevertheless includes false or misleading claims, images, or presentations. DTC prescription drug promotion can help provide consumers with truthful information about drugs. When it does so, it can help consumers to make well-informed decisions when determining whether to explore treatment options and when making ultimate treatment choices, and it can provide useful and actionable information about a product’s efficacy and risks to consumers already on treatment, among other outcomes. Yet, because the information in prescription drug promotion is not always truthful, consumers must make judgments about whether it is true, misleading, or false. And the same background knowledge that a consumer might rely on to identify a claim as deceptive would also be used to decide that a claim is true. As the commenter points out, this background information may be incomplete or inadequate for the task, and yet some presume that consumers (and, for that matter, health care providers) are typically able to distinguish between true claims and those that are false or misleading. Concerns like the one voiced here and the empirical literature on the topic suggest there is reason to doubt this presumption, thus warranting the proposed study.


b. The commenter expressed concern that the varied causes of obesity will result in a heterogeneous population, which could potentially confound the results of the study.


Response to Comment 4b: We consider diversity within this illness population to be an asset. Also, random assignment will help to control extraneous influences because it will create groups that, on average, are probabilistically similar to each other. Because randomization eliminates most other sources of systematic variation, researchers can be reasonably confident that any effect that is found is the result of the intervention and not some preexisting differences between the groups.20 Consequently, the varied causes of obesity should not impact the results. The primary intention of the research is to empirically examine consumer and HCP ability to detect and report deceptive prescription drug promotion, but we have to choose stimuli (and by extension, an illness population) in order to empirically test our research questions. By choosing illness conditions with diverse patient populations, we can better grasp how consumers and HCPs from all walks of life react to deceptive prescription drug promotion. Also see response to comment 3j.


Comment 5, regulations.gov tracking number 1k1-8v3v-v60p (verbatim with header and footer language, introductory language, and supporting references removed):


We strongly support FDA’s proposed project as part of the agency’s broader research efforts to better understand the impact of prescription drug promotion and direct-to-consumer advertising (DTC). Research regarding deceptive advertising is becoming increasingly important as DTC continues to grow at unprecedented rates. One analysis estimated DTC spending in 2015 at $5.2 billion – a growth of over 60% in just four years. Five drugs – Humira, Lyrica, Eliquis, Cialis, and Xeljanz – accounted for one-quarter of this $5.2 billion. Importantly, these figures are an underestimate, as they do not account for spending on digital ads and social media.


The risks and benefits of DTC have been well noted and debated. DTC may promote patient dialogue with health care providers and remove the stigma associated with certain diseases. However, there are also significant concerns that DTC may be misleading, overemphasize a drug’s benefits as compared to risks, and lead to inappropriate prescribing and overutilization.


Again, we applaud the FDA’s efforts in this important area. The need to better understand the ability of consumers and health care professionals to detect and report misleading DTC is critical as the use of DTC continues to grow. Thank you for the opportunity to provide these comments.


Response to Comment 5: FDA appreciates this support.


External Reviewers

In addition to the comments above, the following experts reviewed the study design, methodology, and questionnaires:


  • Jisu Huh, Ph.D., Professor, University of Minnesota.

  • Guang-Xin Xie, Ph.D., Associate Professor, University of Massachusetts.

  • Jessie M. Quintero Johnson, Ph.D., Assistant Professor, University of Massachusetts.

  9. Explanation of Any Payment or Gift to Respondents

The e-Rewards Consumer and Healthcare panels use different incentive models that are tailored and appropriate for the respective audiences. e-Rewards Consumer Panel participants are enrolled in a points program analogous to a frequent flyer program: respondents are credited with bonus points in proportion to their regular participation in surveys. The incentive options allow panelists to redeem points for a large range of gift cards, points programs, and partner products or services. Panelists typically earn bonus points for surveys that are longer or require special tasks. The use of these virtual incentives helps maintain a high degree of panel loyalty, increase response rates, and prevent attrition from the panel. When a panelist’s point balance is equivalent to $10, the panelist may elect to redeem the points for vouchers to a variety of national retailers. Consumers who complete the 20-minute survey will receive an estimated $5.00 in e-Rewards currency.



Physicians recruited through the Healthcare Panel are paid a cash incentive (a check is mailed to the respondent). As with the consumer panel, Research Now uses an incentive scale based on set time increments and the panelist profile. PCPs who complete the 20-minute survey will receive an honorarium of $27. Research Now proprietary research has demonstrated that cash incentives, rather than virtual incentives, are most attractive to “time-poor/money-rich” physicians and help to improve survey completion rates.



  10. Assurance of Confidentiality Provided to Respondents

No personally identifiable information will be sent to FDA. The independent contractor will maintain all information that can identify individual respondents in a form separate from the data provided to FDA. The information will be kept in a secured fashion that will not permit unauthorized access. Confidentiality of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA), sections 552(a) and (b) (5 U.S.C. 552(a) and (b)), and under part 20 of the Agency’s regulations (21 CFR part 20). These methods will all be approved by FDA’s Institutional Review Board (Research Involving Human Subjects Committee, RIHSC) prior to collecting any information.



All participants will be assured that the information will be used only for research purposes and will be kept private to the extent allowable by law, as detailed in the survey consent form. The experimental instructions will include information explaining this to respondents. Participants will be assured that their answers to screener and survey questions will not be shared with anyone outside the research team and that their names will not be reported with responses provided. Participants will be told that the information obtained from all of the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.



The Internet panel includes a privacy policy that is easily accessible from any page on the site. A link to the privacy policy will be included on all survey invitations. The panel complies with established industry guidelines and states that members’ personally identifiable information will never be rented, sold, or revealed to third parties except in cases where required by law. These standards and codes of conduct comply with those set forth by the American Marketing Association, the Council of American Survey Research Organizations, and others. All Research Now employees and contractors are required to take yearly security awareness and ethics training based on these standards.



All electronic data will be maintained consistent with the Department of Health and Human Services’ ADP Systems Security Policy, as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).



Upon completion of the project and upon request, Research Now will destroy all study records, including data files.

  11. Justification for Sensitive Questions

This data collection will not include sensitive questions. The questionnaire is available upon request.











  12. Estimates of Annualized Burden Hours and Costs

12a. Annualized Hour Burden Estimate

FDA estimates the burden of this collection of information as follows:

Table 1.--Estimated Annual Reporting Burden1

Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours
Pilot study screener completes | 4,286 (chronic pain); 714 (obesity); 612 (HCP); 5,612 total | 1 | 5,612 | 0.0333 (2 minutes) | 187
Main study screener completes | 10,714 (chronic pain); 1,786 (obesity); 1,531 (HCP); 14,031 total | 1 | 14,031 | 0.0333 (2 minutes) | 468
Pilot study completes | 150 (chronic pain); 150 (obesity); 300 (HCP); 600 total | 1 | 600 | 0.333 (20 minutes) | 200
Main study completes | 375 (chronic pain); 375 (obesity); 750 (HCP); 1,500 total | 1 | 1,500 | 0.333 (20 minutes) | 500
Total | | | | | 1,355

1 There are no capital costs or operating and maintenance costs associated with this collection of information.
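
For reference, the total-hours column in Table 1 is the product of total annual responses and the average burden per response. The short sketch below, provided only as an illustration, reproduces that arithmetic.

```python
# Recompute Table 1 total hours: total annual responses x average burden per response (in hours).
rows = [
    ("Pilot study screener completes", 5612, 2 / 60),   # 2-minute screener
    ("Main study screener completes", 14031, 2 / 60),
    ("Pilot study completes", 600, 20 / 60),             # 20-minute survey
    ("Main study completes", 1500, 20 / 60),
]

total = 0
for activity, responses, burden_hours in rows:
    hours = round(responses * burden_hours)
    total += hours
    print(f"{activity}: {hours} hours")

print(f"Total: {total} hours")  # 187 + 468 + 200 + 500 = 1,355 hours
```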



  13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs


There are no capital, start-up, operating or maintenance costs associated with this information collection.


  14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the research is $874,207. This includes the costs paid to the contractor to perform a literature review, design two studies, create and test measures and experimental stimuli, recruit a consumer and HCP sample, collect and analyze data, write reports of work completed, and present findings. The task order was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the resultant data, and to draft a report.



  15. Explanation for Program Changes or Adjustments

This is a new data collection.

  16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See part B for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and posting on FDA’s website.
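
As one example of the conventional techniques referenced above, a single outcome could be compared across experimental conditions with a one-way analysis of variance (equivalently, a regression with condition indicators). The sketch below is illustrative only; the variable names, condition labels, and file name are assumptions, and the actual models are specified in Part B.

```python
# Illustrative one-way ANOVA of a composite outcome across randomly assigned conditions.
# Variable names, condition labels, and the input file are assumptions for this sketch.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed layout: one row per respondent, with the perceived-deception composite
# and the assigned condition (e.g., "control", "fewer violations", "more violations").
df = pd.read_csv("study1_consumer_data.csv")  # hypothetical file name

model = smf.ols("perceived_deception ~ C(condition)", data=df).fit()
print(anova_lm(model, typ=2))  # ANOVA table for the condition factor
print(model.summary())         # regression view of the same model
```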


Table 2: Estimated Project Timetable

Task | Estimated Completion Date
60-day FRN publication | January 2017
External peer review | February 2017
RIHSC review | July 2017
30-day FRN publication | December 2017
OMB Review of PRA package | December 2017
Pretesting | February 2018
Main Study Data Collection | October 2018
Data Analysis | February 2019


  17. Reason(s) Display of OMB Expiration Date is Inappropriate

No exemption is requested.

  18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.

1 Our use of the term deceptive is not meant to imply equivalence (or lack thereof) with use of the same term by the U.S. Federal Trade Commission. As used in this document, this term refers to presentations that are considered false or misleading within the context of prescription drug promotion.


2 Faerber, A. E. and D. H. Kreling. “Content Analysis of False and Misleading Claims in Television Advertising for Prescription and Nonprescription Drugs.” Journal of General Internal Medicine, 29(1): 110-118, 2014; Symonds, T., C. Hackford, and L. Abraham. “A Review of FDA Warning Letters and Notices of Violation Issued for Patient-Reported Outcomes Promotional Claims Between 2006 and 2012.” Value in Health, 17: 433-437, 2014.


3 Mitra, A., M. A. Raymond, and C. D. Hopkins. “Can Consumers Recognize Misleading Advertising Content in a Media Rich Online Environment?” Psychology & Marketing, 25(7): 655-674, 2008; Hastak, M. and M. B. Mazis. “Deception by Implication: A Typology of Truthful but Misleading Advertising and Labeling Claims.” Journal of Public Policy & Marketing, 30(2): 157-167, 2011.



4 O’Donoghue, A. C., V. Boudewyns, K. J. Aikin, E. Geisen, et al. “Awareness of the FDA’s Bad Ad Program and Education Regarding Pharmaceutical Advertising: A National Survey of Prescribers in Ambulatory Care Settings.” Journal of Health Communication, 20: 1330-1336, 2015.


5 Krosnick, J. A. and S. Presser. “Question and Questionnaire Design.” In: Handbook of Survey Research (pp. 263-314). Bingley, United Kingdom: Emerald Group Publishing Limited, 2010.


6 Converse, J. M. and S. Presser. Survey Questions: Handcrafting the Standardized Questionnaire (No. 63). Thousand Oaks, CA: SAGE Publications, 1986.


7 DeVellis, R. F. Scale Development: Theory and Applications (Vol. 26). Thousand Oaks, CA: SAGE Publications, 2016.


8 Faerber, A. E. and D. H. Kreling. “Content Analysis of False and Misleading Claims in Television Advertising for Prescription and Nonprescription Drugs.” Journal of General Internal Medicine, 29(1): 110-118, 2014; Symonds, T., C. Hackford, and L. Abraham. “A Review of FDA Warning Letters and Notices of Violation Issued for Patient-Reported Outcomes Promotional Claims Between 2006 and 2012.” Value in Health, 17: 433-437, 2014.

9 O’Donoghue, A. C., V. Boudewyns, K. J. Aikin, E. Geisen, et al. “Awareness of the FDA’s Bad Ad Program and Education Regarding Pharmaceutical Advertising: A National Survey of Prescribers in Ambulatory Care Settings.” Journal of Health Communication, 20: 1330-1336, 2015.


10 Sullivan, H. W., K. J. Aikin, E. Chung-Davies, and M. Wade. “Prescription Drug Promotion from 2001-2014: Data from the U.S. Food and Drug Administration.” PLoS ONE, http://dx.doi.org/10.1371/journal.pone.0155035, 2016.


11 Liang, B. A. and T. K. Mackey. “Prevalence and Global Health Implications of Social Media in Direct-to-Consumer Drug Advertising.” Journal of Medical Internet Research, 13(3), e64, 2011.


12 Southwell, B. G. and D. J. Rupert. “Future Challenges and Opportunities in Online Prescription Drug Promotion Research.” International Journal of Health Policy and Management, 5(3), 211-213, 2016.


13 Davis, J. J., E. Cross, and J. Crowley. “Pharmaceutical Web Sites and the Communication of Risk Information.” Journal of Health Communication, 12, 29-39, 2007.


14 Singleton, Jr., R. A., B. C. Straits, and M. M. Straits. Approaches to Social Research. Oxford, United Kingdom: Oxford University Press, 1993.


15 Aday, L. A. and L. J. Cornelius. Designing and Conducting Health Surveys: A Comprehensive Guide. Hoboken, New Jersey: John Wiley & Sons, 2006.


16 Nahin, R. L. “Estimates of Pain Prevalence and Severity in Adults: United States, 2012.” Journal of Pain, 16(8): 769-780, 2015.


17 Xie, G. X. “Deceptive Advertising and Third-Person Perception: The Interplay of Generalized and Specific Suspicion.” Journal of Marketing Communications, 22(5), 494-512. doi:10.1080/13527266.2014.918051, 2014.


18 Sun, Y., Z. Pan, and L. Shen. “Understanding the Third-Person Perception: Evidence from a Meta-Analysis.” Journal of Communication, 58(2), 280-300, 2008.


19 DeLorme, D. E., J. Huh, and L. N. Reid. “Perceived Effects of Direct-To-Consumer (DTC) Prescription Drug Advertising on Self and Others.” Journal of Advertising, 35(3), 47-65, 2006.


20 Fisher, R. A. The Design of Experiments. Edinburgh, United Kingdom: Oliver and Boyd, 1937.


