Supporting Statement Part B (10-13-2017)

Consumer and Healthcare Professional Identification of and Responses to Deceptive Prescription Drug Promotion

OMB Control No. 0910-0849


B. Statistical Methods

  1. Respondent Universe and Sampling Methods

The eligible study population is noninstitutionalized U.S. adults aged 18 and older. The selected sample will be drawn from Research Now’s opt-in online survey panels according to the specific research objectives for this project. Consumers will be recruited through Research Now’s e-Rewards Consumer Panel using a multimode approach (e.g., email, online marketing, invitations from more than 300 diverse online and offline affiliate partners, and targeted website advertising). This panel is demographically balanced, including racial and ethnic minorities, a wide range of age groups, and individuals with relatively less educational attainment, and it provides access to over 2 million adult panelists in the United States. Using Research Now’s database and a screening questionnaire approved by FDA, a total of 2,100 individuals will be recruited for the pretest and main study combined.



Physicians will be recruited through Research Now’s Healthcare Panel using a multimode approach that combines email, fax, and direct mail to invite physicians to participate in online surveys. This approach provides broad reach into the U.S. physician market; in total, Research Now has access to an estimated 235,288 primary care physician (PCP) panelists. In addition to using effective recruitment methods, Research Now also purchases key association and governmental databases that verify a physician’s practicing status. These verification resources include the Drug Enforcement Administration number (DEA#) and the American Medical Association (AMA) Medical Education Number (ME#).



Participants in the study will be volunteers and will not be randomly or systematically selected by Research Now. FDA does not intend to generate nationally or locally representative results or precise estimates of population parameters from this study. Therefore, the sample used is a convenience sample rather than a probability sample. Furthermore, the data will not be weighted because no legitimate weights can be constructed from nonprobability samples such as the one used here. Eligible participants for the consumer sample will be adults who speak English and self-identify as having chronic pain or a body mass index (BMI) that qualifies them as obese. At least 20% of the sample will consist of people who have completed a high school education or less. We will exclude individuals from the consumer sample who work in the healthcare, marketing, advertising, or pharmaceutical industries. Eligible participants for the physician sample will be adults who speak English and practice patient care at least half time in family practice, general practice, or internal medicine.
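
For illustration only, the Python sketch below captures the consumer eligibility logic described above; the field names, the BMI threshold for obesity (BMI ≥ 30), and the quota check are assumptions made for the sketch and are not the wording of the approved screening questionnaire.

```python
# Hypothetical sketch of the consumer screening logic described above.
# Field names and the BMI threshold for obesity (>= 30) are illustrative
# assumptions, not the approved screener wording.
EXCLUDED_INDUSTRIES = {"healthcare", "marketing", "advertising", "pharmaceutical"}


def is_eligible_consumer(age: int, speaks_english: bool, has_chronic_pain: bool,
                         bmi: float, industry: str) -> bool:
    """Return True if a panelist meets the consumer eligibility criteria."""
    if age < 18 or not speaks_english:
        return False
    if industry.lower() in EXCLUDED_INDUSTRIES:
        return False
    # Eligible if the panelist self-identifies as having chronic pain
    # or has a BMI in the obese range
    return has_chronic_pain or bmi >= 30.0


def meets_education_quota(education_levels: list) -> bool:
    """Check that at least 20% of completes report a high school education or less."""
    if not education_levels:
        return True
    share = sum(1 for e in education_levels if e == "high school or less") / len(education_levels)
    return share >= 0.20
```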



We will recruit pretest samples from the same consumer and physician populations as the main study samples. We will exclude pretest study participants from the main study.











  2. Procedures for Collection of Information

Consumer and physician panel members eligible to participate in the current survey will be contacted through an email or router[1] invitation from the panel managers, which will include a secure, non-identifiable link to the web-based survey. Each survey invitation informs panelists about the survey topic in a top-line, nonleading way before participation. Once an individual opts in to review this market research, a more in-depth description of the survey and the consent form will be presented, informing the panelist that the survey is confidential and voluntary. Panelists are rewarded for taking part in surveys through a structured incentive scheme that reflects the length of the survey and the nature of the sample. Recruitment will continue until the target sample size for completed surveys is reached. Participants can complete each survey only once.



Throughout the pretesting and main study, we will closely monitor recruitment and data collection to ensure that screening criteria are being met, participants from key demographic groups are adequately represented, and response rates are acceptable. We will also conduct random checks to ensure that participant responses are captured accurately.

The proposed design for the main studies, including sample sizes, is summarized in Exhibits 1 and 2 and described in the text that follows.



Exhibit 1. Study 1: Degree of Deception Based on the Number of Deceptive Claims

                            Experimental Condition
Population                  None (Control)  Fewer Violations  More Violations  Total
HCPs                             125              125              125          375
Consumers w/ chronic pain        125              125              125          375






Exhibit 2. Study 2: Type of Deception Based on Implicit and Explicit Claims

                            Experimental Condition
Population                  None (Control)  Implicit  Explicit  Total
HCPs                             125           125       125     375
Obese consumers                  125           125       125     375



The purpose of Study 1 is to assess consumer and HCP responses to promotional websites with varying levels of false or misleading presentations. In Study 1, degree of deception will be manipulated over three levels by altering the number of deceptive claims (none, fewer, more). It is possible that consumers and HCPs can identify ads as deceptive only when the ads include a greater number of violations, whereas ads with few violations may go undetected. The experimental stimuli will be in the form of a webpage for a fictitious drug targeted toward consumers who have chronic pain or toward HCPs. The deceptive websites will contain various types of violations. The website with fewer violations will contain a subset of the deceptive claims, imagery, or other presentations included in the website with more violations. For example, if the fewer-violations website includes two violations, then the more-violations website will include the same two violations plus two or three additional violations (in the form of claims and/or graphics).



Study 1 will help FDA address several key questions:

■ What proportion of consumers and HCPs correctly identify a promotional piece as deceptive? Does the ability to identify deceptive promotion vary depending on the number of deceptive claims in a promotional piece?

■ Does the degree of deception affect consumers’ and HCPs’ attitudes and behavioral intentions toward the promoted drug, including intended reporting to regulatory authorities?

■ Is the effect of deceptive promotional pieces mediated by a person’s ability to identify a promotional piece as deceptive (that is, do people who recognize a piece as deceptive discount the information in the piece, thereby adjusting their attitudes and intentions toward the product)?



Whereas Study 1 focuses on the level of deception (based solely on the number of false or misleading claims), Study 2 focuses on the type of deception (implicit versus explicit). Many deceptive promotional claims are implicit or misleading rather than explicitly false (Refs. 1 and 4). An implicit claim suggests or implies an unstated piece of information. An explicit claim fully and clearly expresses information and leaves nothing to be implied. Study 2 will compare the perceptions and beliefs that consumers and HCPs hold about a drug following exposure to one of three versions of a prescription drug website: (1) an explicitly false website, (2) a factually true but implicitly misleading website, or (3) a website with no deceptive claims (the control group).



As with Study 1, we envision a pair of single-factor (one-way) experiments, one conducted with a sample of consumers and the other with HCPs (see Exhibit 2). Similar to Study 1, Study 2 will investigate how misleading implicit claims and explicitly false claims in prescription drug promotional pieces influence a person’s ability to detect and respond appropriately to deception. The experimental stimuli will be in the form of a pharmaceutical website mock-up targeted toward the relevant experimental population: obese consumers or HCPs who treat obese patients. As with Study 1, the drug profile, including the indication, risks, and logo branding, will be fictitious. For the implicit misleading claim manipulations, we are interested in whether people infer false beliefs from the implicit communications.



Study 2 will help FDA address several key questions:

■ What proportion of consumers and HCPs correctly identify a promotional piece as deceptive? Does the ability to identify deceptive promotion vary depending on whether deceptive claims in a promotional piece are explicit versus implicit?

■ Does the type of deception affect consumers’ and HCPs’ attitudes and behavioral intentions toward the promoted drug, including intended reporting to regulatory authorities?

■ Is the effect of deceptive promotional pieces mediated by a person’s ability to identify a promotional piece as deceptive (that is, do people who recognize a piece as deceptive discount the information in the piece, thereby adjusting their attitudes and intentions toward the product)?

For both studies, determining how to measure consumers’ and HCPs’ ability to identify deceptive promotion, as well as their reactions to such promotion, is fundamental to achieving the research goals. A literature review revealed the importance of using a variety of measures to capture detection of deception. For direct measures, we will incorporate questions that ask participants to indicate whether there was any deception in the promotional piece and to rate the promotional piece in terms of how deceptive, credible, or trustworthy it was. Additionally, we will include claim-specific direct measures that allow participants to click on any part of the website that they deem deceptive. Using responses to this measure, we can assess whether participants think there is any deception in a promotional piece and, when they do, which aspects of the website contributed to that belief. We will also include indirect measures that identify whether participants believed the website expressed particular claims (e.g., claim recognition) as well as participants’ beliefs about the veracity of any deceptive claims (e.g., claim truth, agreement, or acceptance). Moreover, we will assess whether participants believe the messages merit reporting to regulatory authorities (that is, FDA).



Analysis Plan

For each study, we will conduct a series of one-way analyses of variance (ANOVAs) to examine the impact of level of deception (Study 1) or type of deception (Study 2). Before conducting the analyses, we will assess whether the inclusion of covariates is justified; if so, we will conduct the analyses both with and without covariates (e.g., sex, age, race/ethnicity, education) included in the model. If the one-way ANOVA is significant, we will implement a series of planned contrasts to test for significant differences among the three experimental arms. Within each study, the analyses will be conducted separately for consumers and HCPs.
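
For illustration, the sketch below shows one way analyses of this kind could be implemented in Python with statsmodels; the simulated data, the outcome name, and the specific planned contrast are assumptions made for the sketch and do not represent the final analysis code.

```python
# Minimal sketch of the planned analyses (one-way ANOVA plus a planned
# contrast), using statsmodels. The outcome measure, factor coding, and
# covariate below are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_group = 125  # main-study cell size

# Simulated placeholder data: one row per participant
df = pd.DataFrame({
    "condition": np.repeat(["control", "fewer", "more"], n_per_group),
    "age": rng.integers(18, 80, size=3 * n_per_group),
    "perceived_deception": rng.normal(4.0, 1.5, size=3 * n_per_group),
})

# Omnibus one-way ANOVA across the three experimental arms
model = smf.ols("perceived_deception ~ C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# The model can be refit with covariates added to the formula (e.g., sex,
# age, race/ethnicity, education) if their inclusion is justified,
# per the analysis plan; only age is simulated here.
adjusted = smf.ols("perceived_deception ~ C(condition) + age", data=df).fit()
print(sm.stats.anova_lm(adjusted, typ=2))

# Example planned contrast: control arm vs. the average of the two
# deceptive arms. Parameter order for the unadjusted model is
# [Intercept, fewer vs. control, more vs. control].
contrast = np.array([[0.0, 0.5, 0.5]])
print(model.t_test(contrast))
```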

Power

Pretests. Our proposed sample size for each pretest is 150, with a target of 50 participants per experimental group in a set of 1 × 3 designs. We use Cohen’s (1988) conventional thresholds for interpreting the magnitude of effect sizes, where f values of 0.40 and higher are large, values from 0.25 up to (but not including) 0.40 are medium, and values from 0.10 up to (but not including) 0.25 are small. Assuming power of 0.80 and an alpha of 0.05, our omnibus tests will be able to detect medium-sized effects (f = 0.26). With the same parameters, we will also be able to conduct planned contrasts testing various combinations of group means with enough sensitivity to detect moderately small effects (f = 0.23) for orthogonal contrasts, where no family-wise error correction is required because each contrast is statistically independent. The design is also sensitive enough to detect medium-sized effects (f = 0.27) for up to three non-orthogonal contrasts, assuming a Bonferroni-adjusted alpha of 0.0167. An example set of three non-orthogonal contrasts would be those required to test directional differences between each pair of group means (e.g., Group 1 < Group 2; Group 2 < Group 3; Group 1 < Group 3), although a more efficient test of these comparisons can be achieved with two orthogonal contrasts.

Main Studies. We have powered the main studies to detect moderately small effects. Our sensitivity analysis suggests that with 125 participants in each experimental group (N = 375), omnibus F tests will be able to detect moderately small differences among groups (f = 0.18) with power of 0.90 and an alpha of 0.05. Using the same power and alpha levels, we will be able to conduct orthogonal planned contrasts testing various combinations of group means with enough sensitivity to detect moderately small differences as low as f = 0.17. The main study design is also sensitive enough to detect moderately small differences (f = 0.20) for up to six non-orthogonal planned contrasts, assuming a Bonferroni-adjusted alpha of 0.0083.
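
As a rough cross-check of these sensitivity figures, a calculation of this kind can be sketched with statsmodels’ power module; the snippet below is illustrative only and assumes the design parameters stated above (three groups, the stated total sample sizes, and the stated alpha and power levels).

```python
# Sketch of the sensitivity analysis described above, using statsmodels'
# F-test power calculator for one-way ANOVA. Outputs are approximate and
# for illustration; they should be close to the figures reported in the text.
from statsmodels.stats.power import FTestAnovaPower

power_calc = FTestAnovaPower()

# Pretest: 3 groups, total N = 150, alpha = 0.05, power = 0.80
pretest_f = power_calc.solve_power(nobs=150, alpha=0.05, power=0.80, k_groups=3)

# Main study: 3 groups, total N = 375, alpha = 0.05, power = 0.90
main_f = power_calc.solve_power(nobs=375, alpha=0.05, power=0.90, k_groups=3)

# Bonferroni-adjusted alphas for the non-orthogonal planned contrasts
alpha_three_contrasts = 0.05 / 3  # approximately 0.0167 (pretest)
alpha_six_contrasts = 0.05 / 6    # approximately 0.0083 (main study)

print(f"Smallest detectable omnibus effect, pretest:    f = {pretest_f:.2f}")
print(f"Smallest detectable omnibus effect, main study: f = {main_f:.2f}")
print(f"Bonferroni-adjusted alphas: {alpha_three_contrasts:.4f}, {alpha_six_contrasts:.4f}")
```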

  3. Methods to Maximize Response Rates and Deal with Non-response

Response rates can vary greatly depending on many factors, including sample composition, panel type, invitation content, time of day, and incentive offering. In addition, outside factors, including email filters, recipient ISP downtime, and general conditions on the Internet, can affect response rates. To help ensure that the participation rate for the internet panel is as high as possible, FDA and the contractor will:


  • Design an experimental protocol that minimizes burden (short in length, clearly written, and with appealing graphics);

  • Administer the experiment over the Internet, allowing respondents to answer questions at a time and location of their choosing;

  • Send out at least one email reminder after the initial invitation; 

  • Use both email and router invitations, which have been shown to be an effective strategy to increase response rates[2];

  • Provide respondents with a helpdesk link that they can access at any time for assistance.


Additionally, the panel leverages social media concepts and has developed ‘panel communities’ to maximize member engagement and overcome the challenges of declining survey response rates and multipanel membership. We will also conduct a demographic comparison of responders and nonresponders and incorporate any findings into our discussion of results.










  4. Test of Procedures or Methods to be Undertaken

Two types of pretesting (qualitative and quantitative) are employed to test procedures and methods.[3] The first type of pretesting, already conducted, is qualitative: cognitive testing with nine individuals was used to refine the study stimuli and questions. Additionally, as described in this package, one round of quantitative pretesting for each study will be employed. Pretests will be used to evaluate the procedures and measures used in the main studies. Each of the two pretests will have the same design as its respective main study (pretest 1 for Study 1 and pretest 2 for Study 2). The purpose of both pretests will be to (1) ensure that the mock websites are understandable, viewable, and delivering the intended messages; (2) identify and eliminate any challenges to embedding the mock websites within the online survey; (3) ensure that the survey questions are appropriate and meet the analytical goals of the research; and (4) pilot test the methods, including examining response rates and the timing of the survey. The two pretests will be conducted simultaneously. Based on pretest findings, we will refine the mock websites, survey questions, and data collection process, as necessary, to optimize the full-scale study conditions.

  5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


The contractor, RTI International, will collect and analyze data on behalf of FDA as a task order under Contract HHSF223201510002B. Vanessa Boudewyns, Ph.D., is the Project Director, 202-728-2092 (x22092). Review of contractor deliverables and supplemental analyses will be provided by the Research Team, Office of Prescription Drug Promotion (OPDP), Office of Medical Policy, CDER, FDA, and coordinated by Kevin R. Betts, Ph.D., (240) 402-5090, and Amie C. O’Donoghue, Ph.D., (301) 796-1200.





[1] The router is an automated tool that uses real-time traffic to contact persons already engaged with the vendor’s website/platform.

[2] Boudewyns, V., Tan, S., Betts, K. R., Aikin, K. J., & Squire, C. (2017, May). Contrasting the Effect of Router- versus Email-Based Recruitment on Invitation Response to Online Surveys. Paper presented at AAPOR, New Orleans, LA.


[3] Pretesting is suggested by OMB as a method to test procedures. See Office of Management and Budget Standards and Guidelines for Statistical Surveys (September 2006). Available at http://www.whitehouse.gov/sites/default/files/omb/assets/omb/inforeg/statpolicy/standards_stat_surveys.pdf. Last accessed January 12, 2012.


