
Examination of Online Direct-to-Consumer Prescription Drug Promotion

OMB Control No. 0910-0714


SUPPORTING STATEMENT B







Submitted by


Office of Prescription Drug Promotion

Center for Drug Evaluation and Research


Food and Drug Administration









December 2011

Revised: June 2012











B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods

The 10,200 participants for this study will be drawn from the pool of 48,000 internet panel members. All panel members complete prescreening questionnaires on a variety of topics, and we will recruit participants who indicated that they have been medically diagnosed with one of the following conditions: high cholesterol, high blood pressure, clinical depression, acid reflux, or seasonal allergies (see Appendix C for the screener and Appendix E for the recruitment and reminder emails).  The following sections provide details of the sampling design, target population, and weighting procedures. 

Sampling Design

Knowledge Networks’ (KN) KnowledgePanel is a probability-based online consumer panel designed to be representative of the U.S. adult population. KnowledgePanel consists of approximately 50,000 adults who were randomly sampled and invited to participate as panelists. Panelists typically serve for three years and complete two to three surveys per month during their tenure. To ensure full representation of U.S. adults, KN also recruits cell-phone-only households and equips panelists from non-Internet households with computer hardware and dial-up Internet access.

KN has recruited KnowledgePanel members through two random sampling strategies: random digit dialing (RDD) and address-based sampling (ABS). This is distinct from other online panels that use opt-in recruitment methods and allow individuals to volunteer as panelists. KN does not permit opt-in recruitment, and only individuals randomly selected through RDD and ABS are eligible to join. While RDD was originally KnowledgePanel’s primary sampling strategy, KN began supplementing RDD with ABS in 2008 and eliminated RDD sampling altogether in late 2009. Almost all of this study’s participants will have been recruited by ABS; however, because panelists serve for multiple years, this study may include a small number of participants who were recruited via RDD.

Random Digit Dialing (RDD) Methodology

KN’s first sampling strategy was to conduct list-assisted RDD sampling based on the U.S. residential landline telephone universe. KN randomly selected households from the landline universe and attempted to match each phone number to a valid postal address. After matching the telephone number and mailing address (67%–70% of cases), KN mailed the household an advance letter informing them that they had been selected to participate in KnowledgePanel. Unmatched numbers were not sent an advance letter.

Following the mailing, KN-trained interviewers contacted sampled households by telephone, invited them to join KnowledgePanel, and (if desired) completed an enrollment form. Interviewers dialed telephone numbers for up to 90 days, with at least 14 dial attempts for cases in which no one answered the telephone and for numbers known to be residential. Extensive refusal conversion was performed.

KN intentionally oversampled households from telephone exchange groups with high concentrations of African American and Hispanic households (based on Census data). The oversampling helped ensure sufficient minority representation on KnowledgePanel, and KN adjusts for the oversampling in the panel’s weighting procedures. RDD sampling is done without replacement, meaning that phone numbers that have already been sampled are not sampled again.

Address-Based Sampling Method

KN’s current sampling strategy is to randomly select households based on mailing address. Specifically, KN randomly samples mailing addresses from the U.S. Postal Service’s Delivery Sequence File (which covers 97% of U.S. homes) and mails an invitation informing the household that it has been selected to participate in KnowledgePanel. After receiving the mailing, households accept the invitation and join KnowledgePanel through one of three methods:

(1) Completing a paper enrollment form and returning it in a postage-paid envelope

(2) Calling KN’s toll-free hotline and completing an enrollment interview

(3) Visiting a secure KN Web site and completing an online enrollment form

KN conducts extensive refusal conversion with nonresponsive households by sending a series of follow-up mailings and, if an address and telephone number can be matched, by placing telephone recruitment calls.

As with RDD recruitment, KN intentionally oversamples African American and Hispanic households to help ensure sufficient minority representation on KnowledgePanel. KN also oversamples young adult households (aged 18 to 24) to help ensure sufficient representation. The panel’s weighting procedures adjust for the oversampling.

Target Populations

For this study, participants on KnowledgePanel will be randomly selected if they meet three eligibility criteria: (1) adult aged 18 or older, (2) diagnosed with one of the study’s five illness conditions (seasonal allergies, high cholesterol, acid reflux, high blood pressure, or depression), and (3) capable of viewing and listening to DTC Web sites and embedded videos on a computer or mobile device. To eliminate cross-contamination of experimental arms, only one person per household will be sampled.

Potential participants will be identified by examining prescreening data and eliminating panelists who have not been diagnosed with one of the target illnesses. A random sample of panelists with the target illnesses will be selected, and we will send them an e-mail invitation (via KN) to participate in the study. The e-mail invitation will contain a unique hyperlink, specific to that panelist, to the study survey. Reminder invitations and telephone reminder calls will be used to try to convert nonresponders. If sampled panelists remain unresponsive, additional panelists will be randomly sampled to replace them.
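As a simple illustration of this screening and sampling step, the sketch below filters a hypothetical panel file to eligible panelists, keeps one panelist per household, and draws a random sample of invitees; the column names and data are invented for the example and are not the actual panel file.

    import pandas as pd

    # Illustrative sketch of eligibility screening and sampling; data and columns are hypothetical.
    panel = pd.DataFrame({
        "panelist_id": [1, 2, 3, 4, 5, 6],
        "household_id": [10, 10, 11, 12, 13, 13],
        "diagnosis": ["high cholesterol", "none", "seasonal allergies",
                      "depression", "none", "acid reflux"],
        "age": [52, 47, 34, 61, 29, 45],
    })

    target_conditions = {"high cholesterol", "high blood pressure", "depression",
                         "acid reflux", "seasonal allergies"}

    # Keep adults diagnosed with a target illness, then one eligible panelist per household.
    eligible = panel[(panel["age"] >= 18) & panel["diagnosis"].isin(target_conditions)]
    one_per_household = eligible.sample(frac=1, random_state=7).drop_duplicates("household_id")

    # Draw a simple random sample of invitees; nonresponders would later be replaced
    # by sampling additional panelists from the remaining eligible pool.
    invitees = one_per_household.sample(n=min(3, len(one_per_household)), random_state=7)
    print(invitees[["panelist_id", "household_id", "diagnosis"]])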

Weights

With the KnowledgePanel, sample weights can be applied so that the study data more accurately represent the target population of U.S. adults. Weighting is a common procedure in survey studies and helps to account for nonresponse, noncoverage, underrepresentation of minority groups, and other types of sampling and survey error. (For example, if the study sample has fewer African Americans than the target population, weights would be applied to correct this discrepancy. African Americans’ responses would be given more importance relative to other participants’ responses and would have a relatively larger influence on statistical output, such as means and percentages.)
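To make the idea concrete, here is a minimal sketch of a single-margin post-stratification weight in Python; the population and sample shares are hypothetical values for illustration, not actual study or Census figures.

    # Minimal illustration of a single-margin post-stratification weight.
    # The shares below are hypothetical, not actual study or Census figures.
    population_share = {"African American": 0.13, "All other": 0.87}  # assumed benchmark
    sample_share = {"African American": 0.09, "All other": 0.91}      # assumed sample composition

    # Each group's weight is its population share divided by its sample share,
    # so responses from underrepresented groups count more in weighted estimates.
    weights = {group: population_share[group] / sample_share[group] for group in population_share}
    print(weights)  # African American respondents receive a weight greater than 1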

For this study, three types of weights will be constructed and applied to the dataset:

  • Base Weight—Accounts for deviations from pure probability sampling in KnowledgePanel’s sampling procedures. This weight is constructed prior to study sampling.

  • Panel Post-Stratification Weight—Accounts for nonresponse and noncoverage effects in the entire KnowledgePanel. This weight is constructed prior to study sampling.

  • Study Post-Stratification Weight—Accounts for nonresponse and undersampling or oversampling in the study sample. This weight will be constructed at the conclusion of data collection.

First, although KN randomly samples KnowledgePanel members through RDD and ABS methods, it has incorporated several strategies to improve efficiency and boost representation of younger adults and racial/ethnic minorities. These strategies, while valuable, are a deviation from equal probability sampling. Consequently, KN constructs statistical weighting adjustments to offset these sampling deviations. These adjustments constitute the sample’s base weight.

Second, as with any survey, KnowledgePanel is subject to several types of survey error. These errors occur because not all households can be reached (noncoverage), not all households accept the panel invitation (nonresponse), and not all panelists choose to continue participating after enrollment (attrition). KN addresses these sources of error by creating a panel post-stratification weight.

Finally, this study will likely sample and invite panelists who do not respond (nonresponse) or decline to participate (nonconsent). Consequently, KN will apply a study-specific post-stratification weight to account for discrepancies between participants and the entire KnowledgePanel.

Base Weight

When creating KnowledgePanel, KN used eight different strategies that deviated from pure probability sampling, and KN creates a base weight to account for these deviations. The sampling strategies, which are corrected by the base weight, include the following.

RDD Deviations

  • Undersampling of telephone numbers unmatched to a valid mailing address. KN attempted to match all RDD-generated residential telephone numbers to a mailing address. The remaining unmatched numbers are undersampled as a recruitment efficiency strategy. This undersampling was suspended between July 2005 and April 2007, and RDD recruitment ceased in July 2009.

  • RDD selection proportional to household’s telephone landlines. As part of sampling, KN documents the number of separate telephone landlines in each household. The probability of selecting a multiple-line household was down-weighted by the inverse of the number of landlines. RDD recruitment ceased in July 2009.

  • RDD oversampling of African American and Hispanic telephone exchanges. KN intentionally oversampled telephone exchanges with a higher density of minority households (African American and Hispanic) to increase panel membership for those groups. KN oversampled these telephone exchanges at approximately twice the rate of other exchanges. RDD recruitment ceased in July 2009.

ABS Deviations

  • Address-based sample telephone matching. As part of ABS recruitment, KN matched sampled mailing addresses with landline telephone numbers and placed recruitment calls to unresponsive households. These calls resulted in a slightly disproportionate number of landline households being enrolled in KnowledgePanel.

  • ABS oversample stratification. In late 2009, KN implemented geographic stratification into ABS sampling. Census blocks with high-density minority communities were oversampled (Stratum 1), and the balance of the census blocks (Stratum 2) were undersampled.

Other Deviations

  • Minor oversampling of Chicago and Los Angeles in early pilot surveys. KN conducted two pilot surveys in Chicago and Los Angeles during the creation of KnowledgePanel, which increased the representation from these two cities. With natural attrition and panel growth, the effects of this deviation have mostly disappeared.

  • Oversampling of the four largest states and central region states. When KN created KnowledgePanel, survey demand in the four largest states (California, New York, Florida, and Texas) necessitated initial oversampling. Likewise, the central region states were oversampled for a brief period. With natural attrition and panel growth, the effects of this deviation have mostly disappeared.

  • Undersampling of households without MSN TV service access. Until January 2010, KN provided non-Internet households with MSN Web TV units as a way to participate in KnowledgePanel. Certain small areas of the United States were not serviced by the MSN network; consequently, non-Internet households from these areas could not be recruited. In January 2010, KN began providing laptop computers with dial-up access to non-Internet households, thus eliminating this undercoverage.

Panel Post-Stratification Weight

KN creates a panel post-stratification weight to reduce the effects of any nonresponse and noncoverage bias in overall panel membership. The post-stratification adjustment will be based on demographic distributions from the U.S. Census Bureau’s most recent Current Population Survey (CPS). The one exception: The benchmark for Internet access among the U.S. population will be obtained from the most recent CPS supplemental survey measuring Internet access.

The panel post-stratification weight will be applied prior to this study’s sampling design, and it is based on the following variables:

  • Gender (Male/Female)

  • Age (18–29, 30–44, 45–59, and 60+)

  • Race/Hispanic Ethnicity (White/Non-Hispanic, Black/Non-Hispanic, Other/Non-Hispanic, 2+ Races/Non-Hispanic, Hispanic)

  • Education (Less than High School, High School, Some College, Bachelor and beyond)

  • Census Region (Northeast, Midwest, South, West)

  • Metropolitan Area (Yes/No)

  • Internet Access (Yes/No)
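The exact algorithm used to calibrate weights to all of these margins simultaneously is not specified above; a common approach is iterative proportional fitting (raking). The sketch below is an illustration of raking on just two of the margins (gender and age), with hypothetical benchmark proportions standing in for the CPS figures; it should not be read as KN’s actual procedure.

    import numpy as np

    # Illustrative raking (iterative proportional fitting) sketch; not KN's actual procedure.
    # Hypothetical respondent data: gender (0 = Male, 1 = Female) and age group (0-3).
    rng = np.random.default_rng(0)
    gender = rng.integers(0, 2, size=1000)
    age = rng.integers(0, 4, size=1000)

    # Assumed population benchmarks (stand-ins for CPS distributions).
    targets = {
        "gender": (gender, np.array([0.48, 0.52])),            # Male, Female
        "age": (age, np.array([0.22, 0.25, 0.26, 0.27])),      # 18-29, 30-44, 45-59, 60+
    }

    weights = np.ones(len(gender))  # in practice the base weight would be the starting point

    for _ in range(50):  # iterate until the weighted margins settle near the benchmarks
        for values, target in targets.values():
            current = np.array([weights[values == k].sum() for k in range(len(target))])
            adjustment = target / (current / weights.sum())    # target share / current share
            weights *= adjustment[values]

    # Weighted margins now approximate the assumed benchmarks.
    print(np.round([weights[gender == g].sum() / weights.sum() for g in (0, 1)], 3))
    print(np.round([weights[age == a].sum() / weights.sum() for a in range(4)], 3))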

Study Post-Stratification Weight

At the conclusion of data collection for this study, KN will create a study post-stratification weight to adjust for any nonresponse. Demographic and geographic distributions of KnowledgePanel members who are diagnosed with the appropriate illnesses will be used as benchmarks in the adjustment, and KN will apply the weight to eligible, completed participants within each experimental arm.

The study post-stratification weight is based on the following variables, regardless of whether these variables were significantly different between participants and nonrespondents:

  • Gender (Male/Female)

  • Age (18–54, 55–64, and 65+)

  • Race/Hispanic ethnicity (White/Non-Hispanic, Black/Hispanic/Other)

  • Education (Less than High School, High School, Some College, Bachelors and higher)

  • Census Region (Northeast, Midwest, South, West)

  • Metropolitan Area (Yes/No)

  • Internet Access (Yes/No)

2. Procedures for the Collection of Information

Design Overview

This research will be conducted in three concurrent studies. The design and hypotheses for each study are outlined below. We will use ANOVAs, planned comparisons, and regressions to test hypotheses.

The purpose of Study 1 is to investigate whether the presentation of risk information on branded drug websites influences consumers’ perceptions and understanding of the risks and benefits of the product. For generalizability of results, we will conduct Study 1 once with participants diagnosed with high cholesterol and again with participants diagnosed with seasonal allergies. In Study 1, we will examine the format (e.g., whether the risk information is presented in a paragraph or as a bulleted list) and visibility of risk information on a prescription drug website. Risk visibility will be manipulated by having the risk information on the homepage; having the risk information on the homepage with a signal to scroll; or having a hyperlink, with a signal to click on the link, on the homepage that leads to a secondary page with the risk information. The signal will direct participants to the important safety information. Participants will be randomly assigned to experimental conditions in a factorial design as follows:

Table 5. -- Study 1 Proposed Design (3 x 5)

                                                 Format
Risk Visibility                  Paragraph   Bullet List   Checklist   Highlighted Box   Animated Spokesperson
On Homepage
On Homepage with Signal
On Secondary Page with Signal

The purpose of Study 2 is to investigate how special visual features on branded drug websites influence perceptions and understanding of the risks and benefits of the product. For generalizability of results, we will conduct Study 2 once with participants diagnosed with high blood pressure and again with participants diagnosed with acid reflux. The special features we will examine are a personal testimonial video and an animated mechanism of action visual. Benefit information will be presented in a personal testimonial video, in an animated mechanism of action visual, or in text (the control). We will cross these special features with the prominence of the risk information presentation, at two levels: more prominent and less prominent. For example, a more prominent display of risk information might include the risks as part of the spoken testimonial, whereas a less prominent display might scroll the risks as text after the animated video. We will include a control condition in which participants view a webpage with no special features. Participants will be randomly assigned to experimental conditions in a factorial design as follows:

Table 6. -- Study 2 Proposed Design (2 x 2 + 1)

                          Special Features
Risk Presentation         Personal Testimonial   Animated Visual   Control Group
Prominent
Less Prominent

The Study 3 design tests whether participants are misled by a link from a branded prescription drug website to a disease awareness website with off-label information, and whether the presence of context attenuates this potential effect. We will conduct Study 3 with participants diagnosed with depression. Participants will be randomly assigned to experimental conditions in a factorial design as follows:

Table 7. -- Study 3 Revised Design (4 x 1)

Context     No Link (Control)   None   External Only   External & Not Sponsored

The three context conditions will include a link, for example, “For more information about Disease X, please visit [link].” In the “none” context condition, clicking the link leads to an interim page that says “Loading.” In the “external only” context condition, clicking the link leads to an interim page that says “You are leaving the Drug X website and entering an external website.” In the “external and not sponsored” context condition, clicking the link leads to an interim page that says “You are leaving the Drug X website and entering an external website not controlled or endorsed by Pharmaceutical Company Y.”


Procedure

All parts of this study will be administered over the internet. A total of 10,200 interviews will be completed. Participants will be randomly assigned to view one version of a DTC prescription drug branded website. Following their perusal of this website, they will answer questions about their recall and understanding of the benefit and risk information, their perceptions of the benefits and risks of the drug, and their intent to perform various behaviors such as asking a doctor about the medication.
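For illustration, the sketch below shows one way balanced random assignment to experimental cells could be carried out; the factor levels mirror Study 1’s 3 x 5 design, but the panelist IDs and the round-robin balancing scheme are assumptions rather than the contractor’s actual implementation.

    import itertools
    import random

    # Illustrative sketch of balanced random assignment to factorial cells (assumed approach).
    visibility = ["homepage", "homepage_with_signal", "secondary_page_with_signal"]
    risk_format = ["paragraph", "bullet_list", "checklist", "highlighted_box", "animated_spokesperson"]
    cells = list(itertools.product(visibility, risk_format))  # 15 experimental cells

    participants = [f"panelist_{i:05d}" for i in range(3000)]  # hypothetical IDs (200 per cell)
    random.seed(42)
    random.shuffle(participants)

    # Deal the shuffled participants across cells in round-robin order so cell sizes stay equal.
    assignment = {pid: cells[i % len(cells)] for i, pid in enumerate(participants)}
    print(assignment[participants[0]])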

Demographic information will be collected. In addition, participants will answer questions about their health literacy, knowledge of their medical condition, time since diagnosis, severity of their medical condition, current prescription drug use, health information seeking, and internet use. The entire procedure is expected to last approximately 25 minutes. This will be a one-time (rather than annual) information collection.

Participants

Data will be collected using an Internet protocol. Approximately 10,200 consumers who have seasonal allergies, acid reflux, high cholesterol, high blood pressure, or depression will be recruited for the study. Participants must be 18 years or older.

Hypotheses

Study 1 Hypotheses

1. Locating risk information on the homepage (with or without a signal) will lead consumers to have greater perceived risk and greater risk comprehension than locating this information on a secondary page with a hyperlink. Locating risk information on the homepage with a signal will lead consumers to have greater perceived risk and greater risk comprehension than locating this information on the homepage without a signal.

2. Presenting risk information in a bulleted list or checklist format will lead consumers to have greater perceived risk and greater risk comprehension than presenting this information in paragraph format.

3. Presenting risk information in a highlighted box format will lead consumers to have greater perceived risk and greater risk comprehension than presenting this information in bulleted list, checklist, or paragraph format.

4. We have competing hypotheses for the animated spokesperson. If the use of audio increases attention to the animated spokesperson, then presenting risk information via an animated spokesperson will lead consumers to have greater perceived risk and greater risk comprehension than presenting this information in any other format. If the animated spokesperson distracts consumers and/or the preset pace of the audio presentation is difficult for consumers to follow, then presenting risk information via an animated spokesperson will lead consumers to have lower perceived risk and lower risk comprehension than presenting this information in any other format.

Study 2 Hypotheses

1. The presence of any special feature will lead consumers to have lower perceived risk, greater perceived efficacy, greater benefit comprehension, and greater intentions to ask their doctor about the drug than the absence of these features.

2. More prominently displayed risk information will lead consumers to have greater perceived risk and greater risk comprehension than less prominently displayed risk information.

Study 3 Hypotheses

1. Participants who view the link to external information, compared to those who do not, will have greater perceived efficacy and lower correct benefit comprehension.

2. This effect may be attenuated by context, such that participants who view the link without context, compared to those who view the link with either type of context, will have greater perceived efficacy and lower correct benefit comprehension. We will explore whether the type of context (external only vs. external and not sponsored) affects perceived efficacy and benefit comprehension.

For all studies, the following variables will be tested as potential moderators of the predicted effects: demographics, health literacy, knowledge of their medical condition, time since diagnosis, severity of their medical condition, current prescription drug use, health information seeking, and internet use.

All other comparisons are exploratory.

Analysis Plan

For all three studies, we will conduct ANOVAs with planned comparisons to test the hypotheses outlined above. We will conduct the ANOVAs both without covariates and with covariates to determine whether effects are moderated by demographics, health literacy, knowledge of their medical condition, time since diagnosis, severity of their medical condition, current prescription drug use, and internet use. If a main effect is significant, we will conduct pairwise comparisons with Bonferroni-adjusted p-values to determine which conditions are significantly different from one another.
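As an illustration of this analysis plan, the sketch below fits an ANOVA with and without a covariate and runs Bonferroni-adjusted pairwise comparisons on simulated data; the variable names, the simulated outcome, and the specific library calls are illustrative assumptions, not the study’s actual analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm
    from statsmodels.stats.multitest import multipletests
    from scipy import stats

    # Simulated data standing in for one experimental factor and one covariate.
    rng = np.random.default_rng(1)
    conditions = ["homepage", "homepage_with_signal", "secondary_page"]
    df = pd.DataFrame({
        "condition": np.repeat(conditions, 200),
        "health_literacy": rng.normal(0, 1, 600),
    })
    df["perceived_risk"] = rng.normal(3.0, 1.0, 600) + 0.2 * (df["condition"] == "homepage")

    # ANOVA without covariates, then with a covariate added to the model.
    print(anova_lm(smf.ols("perceived_risk ~ C(condition)", data=df).fit(), typ=2))
    print(anova_lm(smf.ols("perceived_risk ~ C(condition) + health_literacy", data=df).fit(), typ=2))

    # Bonferroni-adjusted pairwise comparisons among the three conditions.
    pairs = [("homepage", "homepage_with_signal"),
             ("homepage", "secondary_page"),
             ("homepage_with_signal", "secondary_page")]
    pvals = [stats.ttest_ind(df.loc[df["condition"] == a, "perceived_risk"],
                             df.loc[df["condition"] == b, "perceived_risk"]).pvalue
             for a, b in pairs]
    reject, p_adjusted, _, _ = multipletests(pvals, method="bonferroni")
    print(list(zip(pairs, np.round(p_adjusted, 4), reject)))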

Power

The following tables show the power calculations for Studies 1-3. The assumptions made in deriving the sample size for each study were: (1) 0.90 power, (2) a 0.05 alpha (with a Bonferroni-adjusted alpha for post-hoc comparisons), and (3) an effect size between small and medium. The tables below show the sample size required to detect differences with effect sizes ranging from conventionally “small” (f = 0.10) to “medium” (f = 0.25).
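The values in Tables 8-10 were produced with G*Power. As a cross-check, the sketch below shows how the same fixed-effects noncentral-F power calculation can be reproduced in Python, mapping Cohen’s effect size f to the noncentrality parameter (lambda = f-squared times total N); it is an illustration under these assumptions rather than the exact G*Power routine, so results may differ by a participant or two because G*Power solves for total N rather than per-cell n.

    from scipy import stats

    def anova_power(f, n_per_cell, k_groups, num_df, alpha):
        """Power of a fixed-effects F test with Cohen's effect size f."""
        n_total = n_per_cell * k_groups
        den_df = n_total - k_groups                  # denominator (error) degrees of freedom
        ncp = (f ** 2) * n_total                     # noncentrality parameter, lambda = f^2 * N
        f_crit = stats.f.ppf(1 - alpha, num_df, den_df)
        return 1 - stats.ncf.cdf(f_crit, num_df, den_df, ncp)

    def n_per_cell_needed(f, k_groups, num_df, alpha, target_power=0.90):
        """Smallest per-cell n that reaches the target power (simple upward search)."""
        n = 2
        while anova_power(f, n, k_groups, num_df, alpha) < target_power:
            n += 1
        return n

    # Study 1 main effect of visibility, medium effect (f = 0.25), 10 groups, numerator df = 1.
    print(n_per_cell_needed(f=0.25, k_groups=10, num_df=1, alpha=0.05))
    # Post-hoc comparison between two formats at the Bonferroni-adjusted alpha (.05 / 10 = .005).
    print(n_per_cell_needed(f=0.25, k_groups=2, num_df=1, alpha=0.005))
    # Compare with the "Sample size per cell" rows of Table 8.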


Table 8. -- Study 1: A priori power analysis to determine sample size needed in F tests (ANOVA: fixed effects, main effects, and interactions) to achieve power of 0.90 (Faul et al., 2007).1

                                        Main effect of visibility       Post-hoc comparisons among formats
                                        Effect size f*                  Effect size f*
                                        0.10      0.15      0.25        0.10      0.15      0.25
Input
  α error probability                   0.05      0.05      0.05        .005**    .005      .005
  Power (1 – β error probability)       0.90      0.90      0.90        0.90      0.90      0.90
  Numerator df                          1         1         1           1         1         1
  Number of groups                      10        10        10          2         2         2
Output
  Critical F                            3.85      3.86      3.90        7.90      7.93      8.01
  Denominator df                        1042      458       160         1673      744       269
  Sample size per cell                  106       47        17          838       372       136


*An effect size of 0.10 is traditionally considered small, whereas an effect size of 0.25 is considered medium (Cohen, 1988).2 We show three effect sizes spanning the small-to-medium range.


**Bonferroni-adjusted for 10 comparisons.


We will have 200 participants per cell. The 3 x 5 design has 15 cells and will be conducted in two medical conditions, for a total of 6,000 participants (15 cells x 200 participants x 2 conditions). With this sample size, we will be able to detect small effects in the test of the main effect of visibility and medium effects in post-hoc comparisons among formats.









Table 9. -- Study 2: A priori power analysis to determine sample size needed in F tests (ANOVA: fixed effects, main effects, and interactions) to achieve power of 0.90 (Faul et al., 2007).3

                                        Main effect of prominence       Post-hoc comparisons among conditions
                                        Effect size f*                  Effect size f*
                                        0.10      0.15      0.25        0.10      0.15      0.25
Input
  α error probability                   0.05      0.05      0.05        .005**    .005      .005
  Power (1 – β error probability)       0.90      0.90      0.90        0.90      0.90      0.90
  Numerator df                          1         1         1           1         1         1
  Number of groups                      4         4         4           2         2         2
Output
  Critical F                            3.85      3.86      3.90        7.90      7.93      8.01
  Denominator df                        1048      464       166         1673      744       269
  Sample size per cell                  263       117       43          838       372       136


*An effect size of 0.10 is traditionally considered small, whereas an effect size of 0.25 is considered medium (Cohen, 1988).4 We show three effect sizes spanning the small-to-medium range.


**Bonferroni-adjusted for 10 comparisons.


We will have 200 participants per cell, with a total of 2,000 participants in the 10 cells (a 2 x 2 + 1 design in two different medical conditions). With this sample size, we will be able to detect small effects in the test of the main effect of prominence and medium effects in post-hoc comparisons among conditions.








Table 10. -- Study 3: A priori power analysis to determine sample size needed in F tests (ANOVA: fixed effects, main effects, and interactions) to achieve power of 0.90 (Faul et al., 2007).5

                                        Main effect of condition        Post-hoc comparisons among conditions
                                        Effect size f*                  Effect size f*
                                        0.10      0.15      0.25        0.10      0.15      0.25
Input
  α error probability                   0.05      0.05      0.05        .008**    .008      .008
  Power (1 – β error probability)       0.90      0.90      0.90        0.90      0.90      0.90
  Numerator df                          3         3         3           1         1         1
  Number of groups                      4         4         4           2         2         2
Output
  Critical F                            2.61      2.62      2.64        7.05      7.07      7.15
  Denominator df                        1417      629       226         1548      689       249
  Sample size per cell                  356       159       58          776       346       126


*An effect size of 0.10 is traditionally considered small, whereas an effect size of 0.25 is considered medium (Cohen, 1988).6 We show three effect sizes spanning the small-to-medium range.


**Bonferroni-adjusted for 6 comparisons.


We will have 250 participants per cell, with a total of 1,000 participants in the 4 cells (a 4 x 1 design in one medical condition). With this sample size, we will be able to detect small effects in the test of the main effect of condition and medium effects in post-hoc comparisons among conditions.

3. Methods to Maximize Response Rates and to Deal with Issues of Non-Response

This experimental study will use an existing Internet panel to draw a sample.  The panel comprises individuals who share their opinions via the Internet regularly.  To help ensure that the participation rate is as high as possible, FDA will:

  • Design an experimental protocol that minimizes burden (short in length, clearly written, and with appealing graphics);

  • Administer the experiment over the Internet, allowing respondents to answer questions at a time and location of their choosing;

  • Email a reminder, four days after the original invitation to participate is sent, to respondents who have not completed the protocol;

  • Provide a toll-free hotline for respondents who may have questions or technical difficulty as they complete the experiment. 

4. Test Procedures

The contractor will run nine participants through the procedure to identify obvious glitches in questionnaire wording, programming, and execution of the study. We will also conduct pretests with 1,200 consumers before running the main studies to ensure that the stimuli and questionnaire wording are clear. Finally, we will run the main studies as described elsewhere in this document.

5. Individuals Involved in Statistical Consultation and Information Collection

The contractor, RTI, will collect the information on behalf of FDA as a task order under Contract HHSF223200910135G. Doug Rupert, MPH, is the Project Director for this project, telephone (919) 541-6495. Data analysis will be overseen by the Research Team, Office of Prescription Drug Promotion (OPDP), Office of Medical Policy, CDER, FDA, and coordinated by Helen W. Sullivan, Ph.D., M.P.H., 301-796-4188, and Amie C. O’Donoghue, Ph.D., 301-796-0574.






1 Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.

2 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

3 Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.

4 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

5 Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.

6 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
