Market Claims in Direct-to-Consumer (DTC) Prescription Drug Print Ads
OMB Control No. 0910-NEW
Supporting Statement Part A

A. Justification
1. Circumstances Making the Collection of Information Necessary
Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes the
FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the
Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes
FDA to conduct research relating to drugs and other FDA-regulated products in carrying out
the provisions of the FD&C Act.
The marketing literature divides product attributes (“cues”) into intrinsic and extrinsic.
Intrinsic cues are physical characteristics of the product (e.g., size, shape), whereas extrinsic
cues are product-related but not part of the product (e.g., price and brand name).1,2 Research
has found that both intrinsic and extrinsic cues can influence perceptions of product quality.3
Consumers may rely on product cues in the absence of explicit quality information. The
objective quality of prescription drugs is not easily obtained from promotional claims in
DTC ads; thus consumers may rely upon extrinsic cues to inform their decisions. Market
claims such as “#1 prescribed” and “new” may act as extrinsic cues about the product’s
quality, independent of the product’s intrinsic characteristics. Prior research has found that
market leadership claims can affect consumer beliefs about product efficacy, as well as their
beliefs about doctors’ judgments about product efficacy.4 One limitation of these prior
studies is the lack of quantitative information about product efficacy in the information
provided to respondents. Research indicates that providing consumers with efficacy
information generally improves understanding and facilitates decision-making.5,6 Efficacy
information may moderate the effect of the extrinsic cue by providing insight into
characteristics that would otherwise be unknown. Other research has shown that consumers
are able to use information about efficacy to inform judgments about the product.6,7 The
1 Lee, M., & Lou, Y.-C. (2011). Consumer reliance on intrinsic and extrinsic cues in product evaluations: A conjoint approach. Journal of Applied Business Research, 12(1), 21-29.
2 Teas, R. K., & Agarwal, S. (2000). The effects of extrinsic product cues on consumers' perceptions of quality, sacrifice, and value. Journal of the Academy of Marketing Science, 28(2), 278-290.
3 Rao, A. R., & Monroe, K. B. (1989). The effect of price, brand name, and store name on buyers' perceptions of product quality: An integrative review. Journal of Marketing Research, 351-357.
4 Mitra, A., Swasy, J. L., & Aikin, K. J. (2006). How do consumers interpret market leadership claims in direct-to-consumer advertising of prescription drugs? Advances in Consumer Research, 33, 381-387.
5 O'Donoghue, A. C., Sullivan, H. W., Aikin, K. J., Chowdhury, D., Moultrie, R. R., & Rupert, D. J. (2014). Presenting efficacy information in direct-to-consumer prescription drug advertisements. Patient Education and Counseling, 95(2), 271-280.
6 Schwartz, L. M., Woloshin, S., & Welch, H. G. (2009). Using a drug facts box to communicate drug benefits and harms: Two randomized trials. Annals of Internal Medicine, 150(8), 516-527.
7 Sullivan, H. W., O'Donoghue, A. C., & Aikin, K. J. (2013). Presenting quantitative information about placebo rates to patients. JAMA Internal Medicine. doi: 10.1001/jamainternmed.2013.10399


Office of Prescription Drug Promotion (OPDP) plans to investigate, through empirical
research, the impact of market claims on prescription drug product perceptions with and
without quantitative information about product efficacy. This will be investigated in direct-to-consumer (DTC) print advertising for prescription drugs.
2. Purpose and Use of the Information Collection
The purpose of this study is to examine the impact of market claims and quantitative
efficacy information on prescription drug product perceptions in DTC print advertising for
prescription drugs. The long-term objective is to improve the communication of accurate
and non-misleading information in DTC ads. Part of FDA’s public health mission is to
ensure the safe use of prescription drugs; therefore it is important to communicate the risks
and benefits of prescription drugs to consumers as clearly and usefully as possible.
3. Use of Improved Information Technology and Burden Reduction
Automated information technology will be used in the collection of information for this
study. One hundred percent (100%) of participants will self-administer the Internet survey
via a computer, which will record responses and provide appropriate probes when needed.
In addition to its use in data collection, automated technology will be used in data reduction
and analysis. Burden will be reduced by recording data on a one-time basis for each
participant, and by keeping surveys to less than 30 minutes in both the pretests and main
study.
4. Efforts to Identify Duplication and Use of Similar Information
We conducted a literature search to identify duplication and use of similar information by
locating relevant articles through keyword searches using two databases, PubMed and
EBSCO Academic. We also identified relevant articles from the reference list of articles
found through keyword searches. We found one study examining the impact of market
leadership claims in DTC advertising on product perceptions.4 We have cited this work
above and are expanding upon it to examine the role quantitative effectiveness information
may have in modifying the impact of these claims. We did not find any duplicative work on
the relative importance of quantitative efficacy information and market claims in DTC ads in
driving product choice.
5. Impact on Small Businesses or Other Small Entities
No small businesses will be involved in this data collection.


6. Consequences of Collecting the Information Less Frequently
The proposed data collection is one-time only. There are no plans for successive data
collections.
7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for this collection of information.
8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the
Agency
In the Federal Register of July 20, 2015 (80 FR 42823-42825), FDA published a 60-day
notice requesting public comment on the proposed collection of information. Six
submissions were received: three from biopharmaceutical companies (AbbVie, Eli Lilly, and
Merck), two that were anonymous, and one from Danny Weiss, PharmD. The comments
from the two anonymous submitters and Dr. Weiss requested that the United States ban DTC
advertising for pharmaceuticals, which is outside the scope of this project. We summarize
and respond to the other comments below.
Comment 1 from AbbVie: Respondents may view “benefits” and “risks” more generally
versus “side effects” as a specific inquiry. For example, “side effects” could be interpreted
as adverse effects or adverse events, and as such, elicit a much more specific response than
“risks,” which could be seen more broadly. We suggest that “side effects” be eliminated from
Q4 to keep Questions 3 and 4 both general in nature.
Response: We are interested in recall of both risks and side effects, and so we inquire about
both. Inquiring about risks only may artificially reduce the quantity of recall. Moreover, we
counterbalance the presentation of Q3 and Q4 in efforts to account for any influence of
question ordering. It would be feasible to instead inquire about risks and side effects in
separate questions; however, in our experience, we find that consumers tend to think about
risks and side effects together, which makes sense given the typical presentation of risks and
side effects in direct-to-consumer promotional materials.
Comment 2 from AbbVie: The answers to questions 7 through 12 may be biased by
attitudes toward advertising in general and may go well beyond the pharmaceutical ad they
are shown.
Response: By asking these questions, we hope to detect any differences in perceived
effectiveness and risk between those exposed to different experimental conditions. For
example, those exposed to an ad with a #1 prescribed market claim may perceive the
product to be more effective than those in the control condition. We acknowledge
participants may bring their own opinions about advertising to the study. However, these
opinions tend to be evenly distributed across experimental conditions based on random
assignment procedures. Thus, any differences result from the experimental manipulations.


Comment 3 from AbbVie: We acknowledge we have not seen the test ad; but we wish to
point out that questions 13 and 17 rely on the ad presenting numeric efficacy and safety
information that can be interpreted by respondents.
Response: Prior research has shown that consumers can reach numeric judgments about
efficacy and risk despite no numeric information being presented.8 As described in our study
design (see Exhibit 1 in section B.2), we are not manipulating quantitative safety
information and not all test ads contain quantitative efficacy information. We have worked
with an expert reviewer in OPDP to produce efficacy claims that are realistic for this drug
product class.
Comment 4 from AbbVie: Question 18 relies on the ad presenting information about the
seriousness of one or more “side effects” that the respondent could rank. We do not usually
see print ads that present details about the extent of the seriousness of one or more side
effects. In the absence of this presentation, how are respondents to answer this question?
Response: We find that consumers are generally able to differentiate between the
seriousness of various risks and side effects, and also that they can make judgments about
the overall (gist) seriousness of the risks and side effects. We ask this question with the
intention to detect whether or not exposure to market claims and efficacy information
impacts risk perceptions.
Comment 5 from AbbVie: The answers to questions 21-26 may reflect a patient’s
perception of their doctor rather than the ad. Therefore, the answers may not reflect what
was communicated in the ad but rather reflect the patient-doctor relationship (e.g. patient
perception of their doctor).
Response: We are endeavoring to replicate the results of Mitra et al. (2006), who found that
market leadership claims affected consumer beliefs about doctors’ judgments.
Comment 6 from AbbVie: In the table headers for questions 27 and 28, please change
“claim” to “statement” so that it matches the text in the question.
Response: We will make this change.
Comment 7 from AbbVie: It is beneficial to rotate the order of response choices in
questions 27 and 28 as is done in prior questions. Some of the features a-h are broad (b.
pictures and images) while some are specific (e. percentages). It would be better to compare
the very general features in a question and group the very specific features into another
question to compare like features.
Response: We will make this change.

8 O’Donoghue, A. C., Sullivan, H. W., Aikin, K. J., Chowdhury, D., Moultrie, R. R., & Rupert, D. J. (2014). Presenting efficacy information in direct-to-consumer prescription drug advertisements. Patient Education and Counseling, 95(2), 271-280.


Comment 8 from AbbVie: For questions 35-38, rather than rank from Strongly Disagree to
Strongly Agree, which are absolutes, it would be better to rank by frequency from Never to
Always; this moves the response to how often patients perceive this and away from
absolutes.
Response: We acknowledge that it is difficult to rank agree/disagree on all drugs. However,
a scale ranging from always to never is unipolar; we cannot assess whether respondents think the
opposite, e.g., that new drugs tend to be more risky or that the #1 prescribed drug is more
risky. Our intention is to use these items as a moderator when examining the impact of the
experimental manipulations (i.e., market claims, efficacy claims) on benefit and risk
perceptions, intentions to take the product, and other outcomes. We believe the most
relevant scale for this analysis is the current strongly disagree to strongly agree scale.
Although it would be interesting to assess participant responding using both scales, doing so
may not add significant value relative to the additional burden it would pose for participants.
Comment 9 from AbbVie: We suggest that all the features of Q43a-h be stated in the
affirmative/positive. For example, h. should be worded as, “the drug has few side effects,” to
be consistent with features a-g that are positively stated.
Response: The proposed item, “the drug has few side effects,” assesses a different outcome
than our current question, “the drug has serious side effects.” We have also added items
assessing “drug cost and/or copay” and “doctor’s recommendation.” For consistency, we
will change the wording so that all features are neutral, for instance: “the drug’s side effects,”
“opinions of people I know,” and “how often the drug is prescribed.”
Comment 10 from Lilly: Given the proposed FDA research questions, Lilly believes the
design is appropriate and the sample size will allow for breakouts by each cell. In
advertising A/B tests, to which this study is similar, all aspects of the stimulus not being
tested are held constant in order to reduce bias and isolate the feature being tested. We
strongly recommend that this guideline be followed in this study.
Response: We intend to hold all features other than the manipulations constant in the
stimuli.
Comment 11 from Lilly: One research objective for the main study suggests that the study
will measure perceptions of the doctors’ acceptance of the drug by respondents. Since
respondents will only be seeing a print ad and not interacting with a doctor, we believe the
research setting will be too artificial to gain meaningful insights into this topic. We
recommend removing the section (Questions 21-26).
Response: Please see response to Comment 5 from AbbVie.
Comment 12 from Lilly: The details of the follow-up study are less clear than the main
study. What are the techniques and what are the dependent measures on which the
respondent will be asked to decide?


Response: The follow up study assesses the relative weighting of a market claim and
efficacy in decision-making. Participants are asked to choose a drug out of two options that
vary in (a) the presence of a market claim and (b) efficacy. We will examine product
preference as a function of efficacy using logistic regression. The difference in efficacy
between the two drugs on each choice set will be a continuous predictor variable and drug
choice will be a binary outcome variable. Critically, we will examine whether, and to what
extent, the efficacy-choice relationship varies as a function of an added market claim; thus,
market claim presence will be an interaction term. The experiment uses a discrete choice
approach common in psychology and economics.9
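
For illustration only, the sketch below shows how such a model could be fit. It is not the contractor's analysis code, and the dataset, file name, and column names ("chose_drug_a", "efficacy_diff", "market_claim") are hypothetical placeholders for the design described above.

```python
# Hypothetical sketch of the follow-up study's logistic regression, assuming a
# long-format dataset with one row per choice set. Column names are illustrative:
#   chose_drug_a  - 1 if the respondent chose Drug A in that choice set, else 0
#   efficacy_diff - difference in stated efficacy between Drug A and Drug B
#   market_claim  - 1 if Drug A carried the market claim in that choice set, else 0
import pandas as pd
import statsmodels.formula.api as smf

choices = pd.read_csv("followup_choices.csv")  # hypothetical file name

# Drug choice modeled as a function of the efficacy difference, the market claim,
# and their interaction; the interaction term tests whether the efficacy-choice
# relationship changes when a market claim is present.
model = smf.logit("chose_drug_a ~ efficacy_diff * market_claim", data=choices)
result = model.fit()
print(result.summary())
```
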
Comment 13 from Lilly: We suggest FDA stratify the sample for both studies across
demographic variables to ensure it is representative of the US diabetic population.
Response: We are applying demographic quotas to achieve a representative sample.
Comment 14 from Lilly: The questionnaire employs a number of different Likert scales that
differ on the number of scale values and definition of values. Lilly suggests using a standard
5-point scale with a mid-point and definitions for each value for all scalar questions.
Response: We have changed the Likert scales to be internally consistent.
Comment 15 from Lilly: For questions 9 and 16, by asking the respondents to perceive
overall quality of the drug, the survey risks introducing perceptions outside of experimental
control into the study. Overall quality is a very broad topic and might be dependent on the
graphics, wording, and personal biases that are outside of the market claims and efficacy
levels being tested. We suggest removing these questions, or changing the question to
“Overall efficacy.”
Response: By asking these questions, we hope to detect any differences in perceived quality
between those exposed to different experimental conditions. For example, those exposed to
an ad with a #1 prescribed market claim may perceive the product to be of higher quality
than those in the control condition. By keeping all ad elements beyond the experimental
manipulations (market claims, efficacy claims) constant, we can ensure that significant
differences between conditions are a result of the manipulations rather than any extraneous
factors. Random assignment to conditions should also distribute any random variance
equally across all cells.
Comment 16 from Lilly: We recommend removing questions 13 and 17 as they have the
potential to be misinterpreted or simply difficult for the respondent to answer if the stimulus
is not communicating prevalence of the drug’s side effects or benefits using precise
numbers.
Response: Please see answer to Comment 3 from AbbVie.

9 See, for example, Train, K. E. (2009). Discrete choice methods with simulation. Cambridge University Press.


Comment 17 from Lilly: For questions 27 and 28, we recommend slightly changing the
wordings for the possible answer choices to “Yes/No, claim is/is not mentioned as a benefit
in the ad” for Q27, and “Yes/No, claim is/is not mentioned as a side effect or risk in the ad”
for Q28.
Response: We agree that more specific wording would be helpful and have revised the
answer choices to read “Yes, statement is mentioned in the ad” and “No, statement is not
mentioned in the ad.”
Comment 18 from Lilly: We recommend removing Q31, as the question is an inverse of
Q30, to avoid confounding data.
Response: We have removed Q31 (skepticism).
Comment 19 from Lilly: The instructions for the Q35 through Q38 section seem to have an
omitted word. We recommend revising to “how much do you agree or disagree with the
following statements?”
Response: Thank you for pointing this out. We will correct this.
Comment 20 from Lilly: We agree with placement of demographic questions (Q39-) at the
end but recommend re-evaluating them and considering removing them to avoid lack of
response due to respondent fatigue.
Response: The comment about respondent fatigue is well taken. However, we are adhering
to good questionnaire design in putting our most important dependent measures first and are
willing to accept the potential tradeoff in missing demographic data.
Comment 21 from Lilly: We suggest providing a more complete list of choices for Q43 and
placing this question earlier in the study.
Response: We appreciate this suggestion and have added questions about cost.
Comment 22 from Merck: Merck supports the importance of communicating information
that can be understood by consumers so that they can make better decisions about
prescription drugs. We believe that FDA should focus their efforts and research first on
improving the health literacy of approved patient labeling, and then on DTC print
advertising. In addition, FDA should consider exploring the inclusion of benefit information
in patient labeling, which may help improve consumer understanding and comprehension of
patient labeling.
Response: We share the goal of improving communications about prescription drugs. There
are efforts underway within FDA examining ways to improve patient labeling (see
Boudewyns et al., 2015). Although this comment is outside the scope of this project, we
will share this information internally.


Comment 23 from Merck: Merck believes the current study design limits the practical
utility of the information collected. The study proposes presenting efficacy information in
the form of simple quantitative information. Prior OPDP research acknowledged the
limitations of studying simple quantitative information. For many prescription drugs,
clinical trial outcomes are often more complicated than simple frequencies, which limits the
applicability of this research. Numeracy challenges are common in people with inadequate
health literacy. People with numeracy challenges are not well represented in online research,
and hence the proposed methodology may not detect a lack of comprehension.
Response: We are pleased Merck has read FDA’s prior research in the area of
communicating quantitative information. As this is the first study examining the impact of
quantitative efficacy information on the perception of market share claims, we felt it was
better to start with relatively straightforward, though not simplistic, quantitative efficacy
information. We have worked with an expert reviewer in OPDP to produce efficacy claims
that are realistic for this drug product class. The efficacy claim communicates both the level
of expected benefit and the likelihood of experiencing that benefit. We encourage additional
research on this topic utilizing increasingly complex quantitative information.
We have included a measure of numeracy in our questionnaire. We acknowledge that online
panels may underrepresent individuals with extremely low health literacy. Thus, any
differences we find as a function of numeracy in our sample may be magnified in the
general population.
Comment 24 from Merck: Merck recommends a mixed-method approach to reach limited-literacy
respondents. The phone or web approach allows for a broad, diverse geographic
sample. Respondents with low health literacy are not typically represented in these
databases, and may need to be recruited in less traditional places, such as literacy centers,
senior centers, and health clinics. Additionally, if a desktop computer is required, this may
inadvertently eliminate respondents from low socioeconomic status, who are less likely to
have a desktop computer and more likely to have internet only on their mobile device.
Response: We acknowledge that internet administration is not perfect and have chosen this
method to maximize our budget. We will permit the survey to be taken on a variety of
devices. We are excluding phones because the stimuli cannot be fully viewed on a very
small screen.
Comment 25 from Merck: For the follow-up study, we recommend reducing the number of
trials for respondents across health literacy levels, as respondent fatigue can occur, resulting
in reduced focus and unreliable responses. Refining the methodology to present fewer
choices to each respondent, and assuring the clarity of the information presented, would help
to enhance comprehension.
Response: We agree that minimizing respondent burden is a priority. We estimate that the
48 trials and instructions would require less than 8 minutes, on average. Pretest data may
reveal that the experiment can be shortened without loss to validity, in which case we will
reduce the number of trials.


Comment 26 from Merck: Questions 6, 32, and 50 include percentages. According to
Health Literacy Missouri, natural frequencies (1 out of 10) may be more useful than
percentages. Research suggests that less literate readers may interpret numbers as more
risky when in frequency form (1 out of 10) versus percentage form (10%).
Response: We have worked with an expert reviewer in OPDP to produce efficacy claims
that are realistic for this drug product class.
Comment 27 from Merck: We suggest adding the following screener question to increase
the odds of recruiting limited-literacy respondents: “How confident are you in filling out
medical forms by yourself?”
Response: We acknowledge that internet panels underrepresent individuals with very low
literacy. Thus, it is important to acknowledge that our findings may not apply to very low
literacy individuals. It would be prohibitively expensive for us to screen for literacy up front
in order to establish quotas. We will measure health literacy and include it in analyses.
External Reviewers
In addition to public comment, OPDP solicited peer-review comments on potential measures
and study methodology from a panel of experts. These individuals are:
Sujit Sansgiry, Ph.D., Associate Professor of Pharmaceutical Health Outcomes and Policy,
University of Houston.
Christine Skubisz, Ph.D., Assistant Professor, School of Communications, Emerson College.

9. Explanation of Any Payment or Gift to Respondents
Ipsos’s i-Say panelists participate in two main incentive programs: sweepstakes drawings
and a per-survey point system. For the sweepstakes component, drawings are held several
times a year among panelists who have participated in surveys, with various prizes offered
worth up to $5,000. Panelists are entered for each survey in which they participate. Points
can be redeemed for electronic gift cards, prepaid cards, PayPal payments and charitable
donations. Participants in the Main Study and Pretests 1 and 2 will receive 180 i-Say points
(the equivalent of $1.80) in compensation for the 30-minute studies. Participants in Pretest 3
and the Follow-Up study will receive 90 i-Say points (the equivalent of $0.90) in
compensation for the 15-minute studies. All participants are entered in the sweepstakes.
10. Assurance of Confidentiality Provided to Respondents
All participants will be provided with an assurance of privacy to the extent allowable by
law. See Appendix A for the consent form.


Survey data is collected via Ipsos Interactive Services, a secure online survey platform. Each
respondent uses a unique login and password to access the survey. Public facing servers
such as those hosting online surveys are separate from servers with project and protected
data. The system is regularly inspected for FISMA compliance.
Physical and digital access is restricted throughout Ipsos’s offices. Access to servers and
data can only be achieved through a legitimate network account and requires a network
logon and password. Also, only employees specifically assigned to the project can access
project material and data. Hard or paper materials, such as printed materials and
questionnaires, are kept in access-restricted locations and under lock and key.
No personally identifiable information will be sent to FDA. All information that can
identify individual participants will be maintained by the independent contractor in a form
that is separate from the data provided to FDA. For all data, alpha numeric codes will be
used instead of names as identifiers. These identification codes (rather than names) are used
on any documents or files that contain study data or participant responses.
The information will be kept in a secured fashion that will not permit unauthorized access.
Throughout the project, any hard-copy files will be stored in a locked file cabinet in the
Project Manager’s office, and electronic files will be stored on the contractor’s password-protected
server, which allows only project team members access to the files. The privacy of
the information submitted is protected from disclosure under the Freedom of Information
Act (FOIA) under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)), and by part 20 of the
agency’s regulations (21 CFR part 20). These methods have been approved by FDA’s
Institutional Review Board (Research Involving Human Subjects Committee, RIHSC).
All electronic data will be maintained in a manner consistent with the Department of Health
and Human Services’ Security Policy. All data will also be maintained in consistency with
the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA
Regulated Products).
11. Justification for Sensitive Questions
This data collection will not include sensitive questions. The complete list of questions is
available in Appendix B.
12. Estimates of Annualized Burden Hours and Costs
The first two pretests and main study are expected to last no more than 30 minutes. The
third pretest and follow-up study are expected to last no more than 15 minutes. This will be
a one-time (rather than annual) collection of information. FDA estimates the burden of this
collection of information as follows:


Table 1: Estimated Burden 1

Activity                                 | No. of respondents | No. of responses per respondent | Total annual responses | Avg. burden per response | Total hours
Sample outgo (pretests and main survey)  | 16,384             | --                              | --                     | --                       | --
Screener completes                       | 1,638              | 1                               | 1,638                  | 0.03 (2 mins.)           | 49.1
Eligible                                 | 1,556              | --                              | --                     | --                       | --
Completes, Pretest 1                     | 252                | 1                               | 252                    | 0.5 (30 mins.)           | 126.0
Completes, Pretest 2                     | 252                | 1                               | 252                    | 0.5 (30 mins.)           | 126.0
Completes, Main Study                    | 495                | 1                               | 495                    | 0.5 (30 mins.)           | 247.5
Completes, Pretest 3                     | 108                | 1                               | 108                    | 0.25 (15 mins.)          | 27.0
Completes, Follow-up Study               | 216                | 1                               | 216                    | 0.25 (15 mins.)          | 54.0
Total                                    | --                 | --                              | --                     | --                       | 629.6

1 There are no capital costs or operating and maintenance costs associated with this collection of information.

These estimates are based on FDA’s and the contractor’s experience with previous
consumer studies.
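
As a quick check on the arithmetic in Table 1, the sketch below recomputes each row's total hours as total responses multiplied by the average burden per response, using the rounded 0.03-hour screener value shown in the table. It is a verification aid only, not part of the information collection.

```python
# Recompute Table 1 burden hours: total responses x average hours per response.
rows = {
    "Screener completes":         (1_638, 0.03),  # 0.03 hours (about 2 minutes)
    "Completes, Pretest 1":       (252,   0.5),
    "Completes, Pretest 2":       (252,   0.5),
    "Completes, Main Study":      (495,   0.5),
    "Completes, Pretest 3":       (108,   0.25),
    "Completes, Follow-up Study": (216,   0.25),
}
for activity, (responses, hours_each) in rows.items():
    print(f"{activity}: {responses * hours_each:.1f} hours")
total = sum(responses * hours_each for responses, hours_each in rows.values())
print(f"Total: {total:.1f} hours")  # 629.6, matching Table 1
```
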

13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs
There are no capital, start-up, operating or maintenance costs associated with this
information collection.
14. Annualized Cost to the Federal Government
The total estimated cost to the Federal Government for the collection of data is $529,742
($176,581 per year for three years). This includes the costs paid to the contractors to create
the stimuli, program the study, draw the sample, collect the data, and create and analyze a
database of the results. The contract was awarded as a result of competition. Specific cost
information other than the award amount is proprietary to the contractor and is not public
information. The cost also includes FDA staff time to design and manage the study, to
analyze the resultant data, and to draft a manuscript ($85,800; 10 hours per week for three
years).
15. Explanation for Program Changes or Adjustments
This is a new data collection.


16. Plans for Tabulation and Publication and Project Time Schedule
Conventional statistical techniques for experimental data, such as descriptive statistics,
analysis of variance, and regression models, will be used to analyze the data. See Section B
for detailed information on the design, hypotheses, and analysis plan. The Agency
anticipates disseminating the results of the study after the final analyses of the data are
completed, reviewed, and cleared. The exact timing and nature of any such dissemination
has not been determined, but may include presentations at trade and academic conferences,
publications, articles, and Internet posting.
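
For illustration only, a minimal sketch of one such analysis appears below. The outcome and factor names ("perceived_effectiveness", "market_claim", "efficacy_info") and the file name are hypothetical placeholders, not the study's actual variables; see Section B for the actual analysis plan.

```python
# Hypothetical sketch of a two-way analysis of variance on a perceived-effectiveness
# outcome across the manipulated ad factors; variable and file names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.read_csv("main_study.csv")  # hypothetical file name
model = smf.ols(
    "perceived_effectiveness ~ C(market_claim) * C(efficacy_info)", data=data
).fit()
print(anova_lm(model, typ=2))  # ANOVA table including the interaction term
```
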

Table 2. Project Time Schedule

Task                                                      | Estimated Number of Weeks after OMB Approval
Pretest data collected                                    | 6 weeks
Pretest data completed                                    | 14 weeks
Main study data collected                                 | 26 weeks
Final methods report completed                            | 38 weeks
Final results report completed                            | 48 weeks
Manuscript submitted for internal review                  | 56 weeks
Manuscript submitted for peer-review journal publication  | 64 weeks

17. Reason(s) Display of OMB Expiration Date is Inappropriate
No exemption is requested.
18. Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions to the certification.
