
United States Food and Drug Administration

Assessment of Terms and Phrases Commonly Used in Prescription Drug Promotion

OMB Control No. 0910-NEW

SUPPORTING STATEMENT

Part A. Justification

  1. Circumstances Making the Collection of Information Necessary

Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (FD&C Act) (21 U.S.C. 393(d)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.

The Office of Prescription Drug Promotion’s (OPDP) mission is to protect the public health, in part, by helping to ensure that prescription drug promotional material is truthful, balanced, and accurately communicated, so that patients and healthcare providers can make informed decisions about treatment options. OPDP’s research program provides scientific evidence to help ensure that our policies related to prescription drug promotion will have the greatest benefit to public health. Toward that end, we have consistently conducted research to evaluate the aspects of prescription drug promotion that are most central to our mission, focusing in particular on three main topic areas: advertising features, including content and format; target populations; and research quality. Through the evaluation of advertising features, we assess how elements such as graphics, format, and disease and product characteristics affect the communication and understanding of prescription drug risks and benefits. Focusing on target populations allows us to evaluate how understanding of prescription drug risks and benefits may vary as a function of audience. Our focus on research quality aims to maximize the quality of research data through analytical methodology development and investigation of sampling and response issues. This study will inform all three topic areas.

Because we recognize that the strength of data, and confidence in the robustness of findings, is improved through the results of multiple converging studies, we continue to develop evidence to inform our thinking. We evaluate the results from our studies within the broader context of research and findings from other sources, and this larger body of knowledge collectively informs our policies as well as our research program. Our research is documented on our homepage at https://www.fda.gov/aboutfda/centersoffices/officeofmedicalproductsandtobacco/cder/ucm090276.htm. The website includes links to the latest Federal Register notices and peer-reviewed publications produced by our office, and maintains information on studies we have conducted, dating back to a direct-to-consumer survey conducted in 1999.

The present research involves assessment of how consumers and primary care physicians (PCPs) interpret terms and phrases commonly used in prescription drug promotion, as well as those used to describe prescription drugs and prescription drug promotion more generally. This includes both what these terms and phrases mean to each population (e.g., definitions) and what these terms and phrases imply (e.g., about efficacy and safety). Some examples of interest include: “natural” or “naturally-occurring,” and “targeted” or “targeted therapy.” The full list for assessment will include approximately 30 terms and phrases for each population. To accommodate such a large number, presented terms and phrases will be accompanied by only limited context (terms within sentences and phrases within paragraphs, as opposed to full promotional materials). Understanding the most prevalent interpretations of these terms and phrases can help OPDP determine the impact of specific language in prescription drug promotion. For example, certain terms and phrases, when used without additional contextual information, might overstate the efficacy or minimize the risk of a product. Additionally, from a health literacy perspective, it is helpful to ascertain general understanding of such terms and phrases as this may aid in the development of best practices around communicating these concepts.

  2. Purpose and Use of the Information Collection

The objective of this research is to provide an assessment of terms and phrases commonly used in prescription drug promotion, including what these terms and phrases mean to consumers and PCPs (e.g., definitions) and what these terms and phrases imply (e.g., about efficacy and safety). We will also assess terms and phrases used to describe prescription drug promotion. The results from this research will be used by FDA to inform its understanding of direct-to-consumer (DTC) and physician-directed promotion, inform regulatory policy, and may also help to identify areas for further research.

  3. Use of Improved Information Technology and Burden Reduction

Burden will be reduced by recording data on a one-time basis for each respondent, and by keeping study procedures to 20 minutes for the Phase 2 surveys and to 60 minutes for the Phase 1 interviews. The Phase 2 consumer sample will self-administer the survey instrument via a computer and the Phase 2 PCP sample will self-administer the survey via a printed, mailed survey, an approach that has been tailored based on each population’s expected likelihood to respond. Administration of Phase 1 requires interviewing and thus will not involve self-administration of the survey. In addition to its use in data collection, automated technology will be used in data reduction and analysis.

  4. Efforts to Identify Duplication and Use of Similar Information

We conducted a literature search to identify duplication and use of similar information. We conducted a review of the scientific literature by locating relevant articles through keyword searches using popular databases such as PubMed and PsycInfo. We also identified relevant articles from the reference list of articles found through keyword searches. We did not find duplicative work on the present topic.

  5. Impact on Small Businesses or Other Small Entities

No small businesses will be involved in this data collection.

  6. Consequences of Collecting the Information Less Frequently

The proposed data collection is one-time only. There are no plans for successive data collections.

  7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for this collection of information.

  8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

In the Federal Register of November 6, 2019 (84 FR 59833), FDA published a 60-day notice requesting public comment on the proposed collection of information. FDA received eight comments, of which five submissions were PRA-related. Those submissions contained multiple comments, which the Agency has addressed below.

(Comment 1): Four comments supported the proposed research as an important step towards addressing current issues with the United States’ prescription drug advertisement practices.

(Response): FDA agrees with these comments to the extent they relate to this study.

(Comment 2): Two comments suggested the proposed research methodology could be improved by providing the general population with the option to complete the survey in writing or over the phone. These comments asserted that elderly consumers are highly susceptible to false and misleading advertisements of prescription drugs, and that elderly consumers use prescription drugs at rates higher than any other age group. The comments also indicated that elderly populations may face barriers to accessing a web-based platform to complete the survey.

(Response): While we agree that web panel surveys can sometimes have less than ideal coverage of populations like older adults, the survey proposed here would not sample from a web panel, but would instead use a probability sample selected from an address-based sample (ABS) frame to ensure a nationally representative sample. This helps to ensure better coverage of older adults, who may be less likely to be part of an existing opt-in survey panel or to answer a web-based ad to complete a survey than to respond to a mailed survey invitation. Pew Research Center data indicate that 73% of people aged 65+ have access to the Internet in their home, compared with 90% of the overall U.S. population.1 To address this coverage concern, responses from older adults will be weighted to the full U.S. population.
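To illustrate the weighting step described above, the following is a minimal sketch of post-stratification weighting; the age brackets, population shares, and respondent counts are hypothetical and for illustration only.

```python
# Minimal sketch of post-stratification weighting (all numbers hypothetical).
# Each respondent in an age bracket receives weight = population share / sample share,
# so under-covered groups (e.g., adults 65+) count proportionally more in estimates.

population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # assumed benchmarks
respondents = {"18-34": 400, "35-64": 500, "65+": 100}          # assumed completes

total_n = sum(respondents.values())
weights = {
    bracket: population_share[bracket] / (count / total_n)
    for bracket, count in respondents.items()
}

for bracket, weight in weights.items():
    print(f"{bracket}: weight = {weight:.2f}")
# Adults 65+ are 10% of this hypothetical sample but 20% of the population,
# so they receive weight 0.20 / 0.10 = 2.00, offsetting their lower coverage.
```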

Our recent experience suggests we will be able to adequately represent this group. As an example, in a survey conducted by RTI on the Residential Energy Consumption Survey National Pilot, an analysis of representativeness among survey protocols found that, for the older age group, a web-only protocol was less representative than a mixed-mode protocol allowing either a web-based or paper survey, but still showed “good” agreement with the American Community Survey (considered the gold standard for U.S. demographic data).

(Comment 3): The comment indicated the proposed research methodology could be improved by including behavior-based questions in the surveys.

(Response): We agree about the value of measuring behavioral intentions in general. However, in this particular study, in which we are asking about a variety of terms and phrases used in prescription drug advertising that may or may not be relevant to all members of the sample, behavioral intention questions would not be appropriate. The drugs in question would not be relevant or salient for all consumers in the study. For example, a respondent will be able to answer questions about language used to describe migraine medication (e.g., #1 prescribed medication) even if they do not suffer from migraines. However, it would not make sense to ask them about their behavioral intentions related to taking that migraine medicine if they do not suffer from migraines. Given the limitations of space and scope, we do not plan to add more behavioral intentions measures into this study.

(Comment 4): The comment suggested that some of the longer contextual-based passages interviewees are presented with should include situations in which viewers/listeners are presented with previously seldom-used or new-to-the-public terms and phrases and an attempt at definition or generation of emotional valence by marketers.

(Response): The purpose of this study is for FDA to test understanding of terms “commonly used in prescription drug promotion.” Thus, those that have been “previously seldom-used” or are “new-to-the-public” are outside the scope of the study and are not included in the survey materials.

The idea to study emotional valence is very interesting, but also beyond the scope of the current research.

(Comment 5): The comment included a note on the PCP mail surveys: rather than focusing on incentivizing response via an object included with the PCP mail surveys, the comment suggested that research funds would be better spent ensuring the surveys are engaging, easily understood by the two target audiences, short to complete, and presented with a clear deadline.

(Response): We believe we have the capacity both to incentivize the response and to ensure the surveys are engaging. For example, we specifically designed the advance mailings (letters that will go to potential participants) to follow best practices for ensuring the study is engaging, such as stating the purpose and likely outcomes of the research in the letter and including a graphic to identify the study on the postcard or envelope.

Token incentives have been shown in the literature to have a real impact on response rates, and increased response rates can save costs and potentially reduce nonresponse bias (if reluctant respondents are different from non-reluctant respondents). In fact, the literature has shown that even with short, engaging surveys, these types of token incentives can substantially boost response rates.

(Comment 6): The comment suggested that the study population of healthcare providers should be expanded to include specialists.

(Response): While we understand that some of the topics may be relevant for specialists, and we do often include specialists in our research, our focus in the present research is on PCPs. Specialists are not as numerous as PCPs, which makes them harder to recruit. In 2018, for example, the proportion of specialists representing each specialty area ranged from 2% (endocrinologists) to 11% (psychiatrists and emergency medicine specialists).2 These data demonstrate that the pool of potentially eligible specialists is limited. Given the large required sample size for this study, we chose to limit the population to PCPs.

(Comment 7): The comment suggested that FDA should use additional context for certain terms to more accurately represent the way in which these terms are conveyed in promotion. Specifically, the comment requested that FDA add context for the following terms:

  1. HCP assessment term of “significant (as in statistically significant)”: The comment stated that this term should be accompanied by a 95% CI, hazard ratio and p-value as additional data points.

  2. HCP and consumer assessment phrases “manageable safety profile,” “established safety profile,” “well-studied safety profile,” and “well-tolerated”: The comment stated that these phrases should be accompanied by an example, such as a table showing the most common adverse events.

(Response): Regarding the term “significant (as in statistically significant)” and the suggestion to add additional data points: Although references to statistical significance in the prescription drug promotion marketplace are sometimes accompanied by other statistical information, at other times they are not. In this study, we wish to assess understanding of this phrase on its own.

Regarding “manageable safety profile” and related phrases and the suggestion to add an example such as a table showing most common adverse events: Given the length of the current instruments, we are limited in what can be included. The scope of this study includes terms and phrases and not graphics or numbers. However, we recognize the importance of studying those features as well. Examples of research involving these features can be found on the OPDP research website, linked earlier in this document.

(Comment 8): The comment suggested that the following commonly used terms should be added to the assessment to increase the utility, quality, and clarity of the information collected.

For consumers and HCP, the comment suggested adding:

          1. “Potent” to the assessment term “powerful”; and

          2. New assessment term “convenient/straightforward/simple/easy/easy to use.”

For HCPs only, the comment suggested adding “high affinity.”

(Response): Thank you for these suggestions. We added “potent,” “convenient,” “straightforward,” “simple,” “easy,” and “easy to use” to the surveys. For “high affinity,” we have conducted several informal searches but have not found sufficient examples of the use of this term in promotional materials.

(Comment 9): The comment noted that the surveys take terms and phrases out of context and suggested that FDA should study how consumers and PCPs interpret representative promotional pieces that include appropriate accompanying context.

(Response): This study is one in a program of related research conducted by OPDP. In several related studies, we examine how consumers and PCPs interpret the terms and phrases in representative promotional pieces that include accompanying context. In contrast to this prior research, the proposed research allows for assessment of a large number of terms and phrases—effectively emphasizing breadth over depth, and involving data collection from a nationally representative sample. We believe these various approaches to studying language commonly used in prescription drug promotion complement one another and together contribute to a more comprehensive understanding of the research questions.

(Comment 10): The comment suggested that questions in the surveys may be leading. In describing the proposed research, the 60-day notice stated, “For example, certain terms and phrases, when used without additional contextual information, might overstate the efficacy and minimize the risk of a product.” The comment stated that this statement shows bias that manifests in the proposed questions and suggested that, because the evident bias is deeply rooted in this proposed study and its surveys, FDA should fundamentally reformulate the proposed collection of information in its entirety.

(Response): We agree that some of the probes proposed for use in the Phase 1 research may appear to be leading, so we have rewritten these probes. For example, where it said “safer,” we have altered language to “more” or “less” safe.

In the Phase 2 surveys, the safety and efficacy questions are not leading or one-sided. The questions use bipolar response scales allowing respondents to indicate that the products using that term are less safe/effective, equally as safe/effective, or more safe/effective.

(Comment 11): The comment suggested that the proposed answers in the closed-ended surveys are unbalanced.

(Response): We have reviewed the Phase 2 questions and made some edits to ensure more balance.

It is important to note that the response options shown for many of the questions are just examples. The full list of response options used in the Phase 2 surveys will be developed based on responses to the Phase 1 interviews. As a result, the Phase 2 response options may skew slightly negative or positive depending on what interview respondents say in the Phase 1 interviews. However, we will ensure that there is balance with both negative and positive response options.

(Comment 12): The comment suggested that by asking respondents to compare closely related terms and phrases, the survey may force artificial findings of difference… The comment stated that even if the measured differences are real (and not due to biases in the surveys), it is unclear how the results would have any practical utility because there may not be any objective definitions of the terms with which to compare the results.

(Response): We describe below the process to mitigate the effects of this concern.

If participants in the Phase 1 research do not articulate differences between certain terms, we will exclude those terms from Phase 2. This will reduce the chance to find artificial differences between terms.

We can also split question sets into multiple individual questions. We will make decisions surrounding this solution following completion of the Phase 1 interviews.

For the consumer survey, which will be conducted online only, we will randomize the order in which the terms are presented. This will not eliminate context effects but will randomly distribute any error across terms rather than significantly biasing an individual term.
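As a minimal sketch of this per-respondent randomization (the term list below is an illustrative subset; in practice the survey software would randomize internally):

```python
import random

# Present the assessed terms in an independent random order for each respondent,
# so any order/context effects are spread across terms rather than concentrated
# on whichever term would otherwise always appear first.

TERMS = ["natural", "targeted therapy", "potent", "well-tolerated"]  # illustrative subset

def term_order_for_respondent(respondent_id: int) -> list[str]:
    rng = random.Random(respondent_id)  # seeded per respondent for reproducibility
    order = TERMS.copy()
    rng.shuffle(order)
    return order

print(term_order_for_respondent(101))
print(term_order_for_respondent(102))  # a different respondent sees a different order
```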

(Comment 13): The comment opined that the surveys, at least in the past, are unnecessarily duplicative of information otherwise reasonably accessible to FDA (e.g., focus groups conducted by FDA in 2014; and information available from third-party sources regarding the terms “many,” “most,” “majority,” “some,” and “few”).

(Response): We believe the research is not duplicative of that conducted in 2014 by FDA, but instead builds on that research. It is being conducted by the same research team and is part of a coherent program of research that includes formative focus groups, in-depth interviews, a survey, and an experimental study. We used those focus group reports to inform the development of answer options for this study. The very few terms that are repeated in the current survey have been included in the current study because researchers wanted to follow up on previous findings with a larger, nationally representative sample. Furthermore, that study did not collect any quantitative data on the terms.

Literature searches in multiple medical, social science, and linguistics databases (including PubMed, Web of Science, EBSCO Discovery Service, and Linguistics Database) for research on how people quantify or interpret terms like “few” and “many,” as we do in the present research, did not reveal significant literature on these terms. It is important for FDA to understand how these terms are interpreted in the context of prescription drug promotion; thus, we plan to keep them in the current study.

(Comment 14): One comment recommended that FDA remove questions about the terms “off-label” and “prescription drug promotion” as they are not terms used in promotion.

(Response): While “off label” and “prescription drug promotion” are not terms that are typically used in promotion, it is important for FDA to understand how healthcare providers perceive these terms in general. We have revised the description of the scope in the Federal Register notice to clarify this broader purpose. We now state: “The present research involves assessment of how consumers and primary care physicians (PCPs) interpret terms and phrases commonly used in prescription drug promotion, as well as those used to describe prescription drugs and prescription drug promotion more generally.”

(Comment 15): One comment recommended that FDA change the framing for the survey from a focus on “words or phrases that are commonly used in prescription drug advertising” to “words or phrases that are commonly used to describe prescription drugs.” The comment suggested that if the survey keeps the former, respondents will view the surveys through whatever biases they have for drug advertising.

(Response): Because it is our intention to examine what participants think in the context of prescription drug advertising, we have retained our original approach to framing the research, while also expanding that framing to reference terms or phrases that are commonly used to describe prescription drug promotion.

External Reviewers

In addition to the comments above, the following experts reviewed the study design, methodology, and questionnaires:

  1. Terry Davis, Ph.D., Professor of Medicine and Pediatrics, Feist Weiller Cancer Center, Louisiana State University Health Science Center

  2. Michael Mackert, Ph.D., Professor, Department of Advertising, University of Texas

  3. Rima Rudd, Senior Lecturer on Health Literacy, Education, and Policy, Harvard T.H. Chan School of Public Health

  9. Explanation of Any Payment or Gift to Respondents

For completing the Phase 1 interview, consumers will receive $50, with the possibility of offering $75 if recruitment proves difficult; PCPs will receive $225. Since we usually allow three weeks for recruiting, we plan to assess progress at the end of two weeks. If 75% of the sample has not been recruited by that time, we will implement the plan for the higher incentive. For the Phase 2 survey, consumers will receive a $2 prepaid cash incentive with the survey invitation, plus a $20 postpaid incentive for completing the survey; PCPs will receive a $50 prepaid incentive exclusively. Based on our experience and recent consultation with recruiting firms, these incentives are below current market rates for each population, yet should help ensure high participation and show rates.
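A minimal sketch of the tiered-incentive rule described above (thresholds taken from this paragraph; the function name is hypothetical):

```python
# Tiered incentive rule for Phase 1 consumer recruiting: assess progress at the
# end of week 2 of the three-week window; if less than 75% of the target sample
# has been recruited, raise the incentive from $50 to $75.

def phase1_consumer_incentive(recruited: int, target: int) -> int:
    """Return the incentive ($) to offer for the remainder of recruiting."""
    if recruited / target < 0.75:  # behind schedule at the week-2 checkpoint
        return 75
    return 50

print(phase1_consumer_incentive(recruited=20, target=30))  # 67% recruited -> 75
print(phase1_consumer_incentive(recruited=25, target=30))  # 83% recruited -> 50
```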

We also plan to embed an experiment in the PCP mail survey to assess the effect of token incentives on response rates. More information about this experiment is provided in Part B of this Supporting Statement.

The proposed incentive rates are in accordance with standard practice and based on our experience with specific hard-to-reach populations, the amount of time the participant spends in the study, what is required of them, recent consultation with our recruiting vendor, and OMB-approved incentives on recent FDA projects. This estimate is based on participants spending approximately 90 minutes of their time on this task, which includes time for screening (5 minutes), time for testing the iTracks platform (10 minutes), time to participate in the interview (60 minutes), and the time involved in logging in 15 minutes early to confirm the technology is operating correctly. This token of appreciation is intended to provide enough incentive to participate in the study rather than another activity, improve data quality, reduce the number of cancellations, recognize the burden of childcare costs, and convey appreciation for contributing to this important activity.3 Incentives must be high enough to equalize the burden placed on respondents with respect to their time and cost of participation.

The Bureau of Labor Statistics (BLS) calculated that the average hourly wage of employees, including benefits, in March 2020 was $37.73.4 At that rate, compensation for 90 minutes is approximately $57. Although the incentive is a token of appreciation and not a wage, this estimate represents the amount of money participants would earn if they spent the same amount of time working at a job. But that is not the only expense to consider.
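As a worked check of the $57 figure (using only the BLS rate cited above):

```python
# Worked check: 90 minutes of participant time valued at the BLS average
# hourly wage of employees, including benefits, for March 2020.

hourly_wage = 37.73      # dollars per hour (BLS, March 2020)
session_hours = 90 / 60  # 90 minutes of total participant time

compensation = hourly_wage * session_hours
print(round(compensation))  # about $57, matching the estimate above
```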

It is worth emphasizing that during the current pandemic, all research is remote. Thus, the convenience typically associated with virtual (remote) sessions is no longer relevant, and participants do not derive any benefit or convenience from joining a remote (versus an in-person) interview. Participants are required to join the interview from a location where there are no distractions, which may require coordinating childcare, finding a private and quiet location, or special accommodations during that time. BLS calculated that in May 2018 the average hourly wage of childcare workers was $11.65, making the average cost of 90 minutes of childcare $18.5

The interviews will be conducted online and participants must have a computer and broadband Internet to participate; participating will use approximately 90 minutes of data on their Internet plans.

A 2017 study on willingness to participate in qualitative research among general population participants found that participants offered a monetary incentive were more willing to participate than those offered no incentive or a nonmonetary incentive.6 Participants reported that the incentive was an important factor in helping them decide whether to participate. Among those who had at least some willingness, $75 produced more willingness than $25.

In reviewing OMB’s guidance on the factors that may justify paying incentives to research participants, we have determined that the following principles apply:

  1. The incentive amount will help reduce costs.

OMB’s guidance states that “If prior or similar surveys have devoted considerable resources to nonresponse follow-up, it may be possible to demonstrate that the cost of incentives will be less than the costs of extensive follow-up.”7

Consequences of insufficient incentives include increased time and cost of recruitment, increased “no-show” rates, and increased probability of cancelled or postponed interviews.

During the current pandemic, many people are working from home and experiencing “Zoom fatigue” and getting them to participate in yet one more Zoom call can be difficult. For some health care providers, the pandemic brings additional guidelines for sanitization between patients, potentially reduced staff in the office due to school children being home or absences related to suspected COVID-19 infection and quarantine. All of these factors increase the workload for available staff. Thus, incentives may need to be higher, to encourage providers to set aside the time needed to participate.

  2. Similar incentives were previously approved under recent OMB packages.

According to item 76 in the Memorandum for the President’s Management Council, past experience can be used to justify a higher incentive: “Agencies may be able to justify the use of incentives by relating past survey experience, results from pretests or pilot tests, or findings from similar studies. This is especially true where there is evidence of attrition and/or poor prior response rates.”

Phase 1 interviews (consumers)

Not only is the proposed incentive of $50 significantly lower than market rate, it is also consistent with what OMB approved in recent years for remote interviews conducted by FDA on similar topics:

  • Study of Oncology Indications in Direct-to-Consumer Television Advertising (OMB control number 0910-0885; 2020): $50 for 60-minute remote cognitive interviews with general population consumers

  • Focus Groups on FDA’s Accelerated Approval Process (under generic OMB Control Number 0910-049; 2018): $75 for 60 minutes with general population consumers

Examples of the tiered strategy of using a lower incentive amount and increasing it if recruiting proves difficult:

  • A similar tiered strategy was approved by OMB in 2017 for the Centers for Disease Control and Prevention’s Formative Research to Develop HIV Social Marketing Campaigns for Healthcare Providers (OMB No. 0920-1182).

  • A similar strategy was also used successfully in 2019 on the Health Care Providers’ Understanding of Opioid Analgesic Abuse-Deterrent Formulations: Focus Groups (OMB control number 0910-0847), for which iTracks was the recruiter.

Phase 1 interviews (providers)

The proposed $225 for healthcare providers is less than has been offered on several other recent FDA studies with providers on similar topics:

  • RTI paid specialists incentives of $250 for participating in 60-minute interviews for an FDA biological products study (OMB No. 0910-0687; approved in 2015).

  • Similarly, specialists received $250 incentives for participating in a one-hour focus group as part of Generic Drug Substitution in Special Populations study (OMB No. 0910-0677; 2017).

  • Specialists received $250 for participating in 60-minute telephone interviews for Studies to Enhance FDA Communications Addressing Opioids and Other Potentially Addictive Pain Medications (OMB No. 0910-0695; 2016).

  • Primary care providers received $200 and specialists $300 for participating in 60-minute interviews for Multiple Indications (OMB No. 0910-0695; 2019).

Phase 2 survey (consumers)

The amounts for the survey (Phase 2) are also in line with those used on recent studies conducted by FDA, the Department of Commerce, and the U.S. Energy Information Administration:

  • For the consumer survey, a $5 prepaid and a $20 postpaid incentive were used on the Residential Energy Consumption Survey National Pilot in 2015 (OMB No. 1905-0186) and also on the 2020 Residential Energy Consumption Survey (OMB No. 1905-0092). Both studies were with general population consumers.

  • FDA’s National Survey of Health Information and Communication in 2017 used a $2 prepaid incentive (OMB No. 0910-0828).

  • The NOAA Fishing Effort Survey, a general population survey regarding recreational fishing behavior, used a $2 prepaid incentive (OMB No. 0648-0652).

Phase 2 survey (providers)

The incentive is in the form of a prepaid check. A $50 prepaid check was offered for the Survey of Precision Medicine (OMB No. 0925-0739). RTI and the National Cancer Institute (NCI) conducted an experiment with this method on the Survey of Precision Medicine (a survey of oncologists funded by NCI). In that study, RTI and NCI found that most nonresponders do not cash the check and some responders do not cash the check.8 Because of this, overall response rates are higher and costs are lower with the prepaid incentive check.
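As an illustration of this cost dynamic, a minimal sketch with assumed rates (the mailing size, response rate, and cashing rates below are hypothetical assumptions for illustration, not findings from the cited study):

```python
# With a prepaid check, the incentive only costs money when the check is cashed.
# All figures below are hypothetical; see Wiant et al. (2018) for observed behavior.

n_mailed = 1_000        # hypothetical mailing size
check_value = 50        # $50 prepaid incentive
response_rate = 0.60    # assumed
cash_rate_responders = 0.80     # assumed share of responders who cash the check
cash_rate_nonresponders = 0.15  # assumed share of nonresponders who cash it

responders = n_mailed * response_rate
nonresponders = n_mailed - responders
expected_cost = check_value * (
    responders * cash_rate_responders + nonresponders * cash_rate_nonresponders
)

print(f"${expected_cost:,.0f} expected vs. ${n_mailed * check_value:,} if every check were cashed")
```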

  3. An incentive will improve data quality by improving validity and reliability.

OMB’s guidance states that a “justification for requesting use of an incentive is improvement in data quality. For example, agencies may be able to provide evidence that, because of an increase in response rates, an incentive will significantly improve validity and reliability to an extent beyond that possible through other means”.

Several studies have demonstrated that the use of gifts of gratitude is an effective method for increasing response rates, particularly among hard-to-reach populations.8 Numerous empirical studies have established that providing incentives can significantly increase participation rates, and that larger incentives (e.g., $100, $150) perform significantly better than smaller incentives.9,10,11,12,13 If the incentive is not adequate, participants may agree to participate and then not show up or drop out early. Low participation may result in inadequate data collection or, in the worst cases, loss of government funds associated with recruitment and interviewer and observer time.14

In addition to preventing a low show rate, incentives are necessary to ensure adequate representation among harder-to-recruit populations and can help attract a reasonable cross-section of participants, reflecting diversity in age, income, and education.15,16 Numerous studies have shown that incentives can reduce nonresponse bias for key subgroups. Griffin et al.17 and Lesser et al.18 found that incentives reduced nonresponse bias for gender. Incentives have also been effective in increasing participation from minority respondents.19

Leverage-salience theory argues that monetary incentives can help to recruit people who otherwise might not be motivated to respond (e.g., people who do not care about the topic,20 lack altruistic motives for responding, have competing obligations)21 or are typically less likely to participate in research.22 Using incentives to bring in a cross section of consumers can reduce nonresponse bias if these participants (those less interested in the topic, men, minorities, high income) have different responses and feedback than those who would participate without incentives.23

The incentives we are proposing are cash incentives because research has consistently shown that cash incentives result in greater response rates than lottery tickets or other non-monetary incentives and lead to improved data quality.24,25

  4. This incentive is consistent with those used in other studies between the contractor (RTI) and the vendor.

RTI has consulted with iTracks about the $50 incentive. iTracks, the platform that will recruit and host the interviews, has said $75 is the current acceptable amount for a 1-hour interview. The firm said it can try to recruit for a lower amount but would not be able to guarantee the target sample at less than $75. Thus, we plan to start with $50 and, if we cannot get enough participants, to increase the amount to $75 in an attempt to convert these refusals into willingness to participate.

Offering an incentive below these rates may result in increased costs exceeding the amount saved with a lower incentive. Consequences of insufficient incentives include increased time and cost of recruitment, and increased probability of cancelled or postponed interviews.

  10. Assurance of Confidentiality Provided to Respondents

No personally identifiable information (PII) will be sent to FDA. Data from completed surveys will be compiled into an SPSS data set by RTI International, the contractor, with no PII for analysis. All information that can identify individual respondents will be maintained in a form that is separate from the data provided to FDA. The information will be kept in a secured fashion that will not permit unauthorized access. Confidentiality of the information submitted is protected from disclosure under the Freedom of Information Act under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)), and by part 20 of the agency’s regulations (21 CFR part 20). These methods will all be approved by FDA’s Institutional Review Board prior to collecting any information.

For the interviews, only first names will be used when livestreaming and audio-taping participants, and transcripts sent to the FDA will not contain participants’ names. Livestreaming of the interviews will not involve participants’ faces and will only involve their verbal responses to the questions.

All participants will be assured that the information will be used only for research purposes and will be kept private to the extent allowable by law. The study instructions and informed consent will include information explaining to respondents that their information will be kept confidential. Participants will be assured that their answers to screener and survey questions will not be shared with anyone outside the research team and that their names will not be reported with responses provided. Participants will be told that the information obtained from all of the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.

All electronic data will be maintained in a manner consistent with DHHS’s ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).

  11. Justification for Sensitive Questions

This data collection will not include sensitive questions.

  12. Estimates of Annualized Burden Hours and Costs

12a. Annualized Hour Burden Estimate

FDA estimates the burden of this collection of information as follows:

Table 1.--Estimated Annual Reporting Burden1

| Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours |
| --- | --- | --- | --- | --- | --- |
| General Population | | | | | |
| Phase 1: Screener completes (assumes 35% eligible) | 85 | 1 | 85 | 0.083 (5 minutes) | 7 |
| Phase 1: Number of completes | 30 | 1 | 30 | 1 | 30 |
| Phase 2: Screener completes (assumes 90% eligible) | 1,185 | 1 | 1,185 | 0.083 (5 minutes) | 98 |
| Phase 2: Number of completes | 1,067 | 1 | 1,067 + 10%2 = 1,174 | 0.333 (20 minutes) | 391 |
| PCP Population | | | | | |
| Phase 1: Screener completes (assumes 30% eligible) | 104 | 1 | 104 | 0.083 (5 minutes) | 9 |
| Phase 1: Number of completes | 30 | 1 | 30 | 1 | 30 |
| Phase 2: Screener completes (assumes 90% eligible) | 1,180 | 1 | 1,180 | 0.083 (5 minutes) | 98 |
| Phase 2: Number of completes | 1,062 | 1 | 1,062 + 10%2 = 1,168 | 0.333 (20 minutes) | 389 |
| Total | | | | | 1,052 |

1There are no capital costs or operating and maintenance costs associated with this collection of information.

2As with most online and mail surveys, it is always possible that some participants are in the process of completing the survey when the target number is reached and that those surveys will be completed and received before the survey is closed out. To account for this, we have estimated approximately 10 percent overage for both samples in the study.
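As a worked check of Table 1, the row and total hours can be reproduced directly from the response counts and per-response burdens (all numbers taken from the table; each row is rounded to whole hours as shown there):

```python
# Reproduce Table 1: total hours = total annual responses x average burden
# per response, rounded per row, summed over both populations.

rows = [
    ("GP Phase 1 screener", 85, 0.083),
    ("GP Phase 1 completes", 30, 1.0),
    ("GP Phase 2 screener", 1185, 0.083),
    ("GP Phase 2 completes", 1174, 0.333),   # 1,067 + 10% overage
    ("PCP Phase 1 screener", 104, 0.083),
    ("PCP Phase 1 completes", 30, 1.0),
    ("PCP Phase 2 screener", 1180, 0.083),
    ("PCP Phase 2 completes", 1168, 0.333),  # 1,062 + 10% overage
]

total = 0
for name, responses, burden in rows:
    hours = round(responses * burden)
    total += hours
    print(f"{name}: {hours} hours")
print(f"Total: {total} hours")  # 1,052, matching the table's total
```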







  13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs

There are no capital, start-up, operating or maintenance costs associated with this information collection.

  14. Annualized Cost to the Federal Government

The total estimated cost to the Federal Government for the research is $658,901.00. This includes the costs paid to the contractor to assist with study design and questionnaire and stimuli development, recruit a sample, collect and analyze data, write reports of work completed, and present findings. The task order was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information.

  15. Explanation for Program Changes or Adjustments

This is a new data collection.

  16. Plans for Tabulation and Publication and Project Time Schedule

Conventional statistical techniques, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Part B for detailed information on the design and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and posting on FDA’s website.
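As a minimal sketch of the kind of conventional analysis named above (the file name and column names are hypothetical; the actual models will follow the Part B analysis plan):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch: descriptive statistics by term and population, then a simple
# linear model of a perceived-efficacy rating on term and population.
# "phase2_responses.csv" and all column names are hypothetical.

df = pd.read_csv("phase2_responses.csv")

print(df.groupby(["term", "population"])["efficacy_rating"].describe())

model = smf.ols("efficacy_rating ~ C(term) + C(population)", data=df).fit()
print(model.summary())
```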

Table 2.--Estimated Project Timetable

| Task | Estimated Completion Date |
| --- | --- |
| FDA IRB review | July 2020 |
| 30-day FRN publication | September 18, 2020 |
| OMB Review of PRA package | September 2020 |
| Pretesting | April 2021 |
| Main Study Data Collection | March 2022 |
  17. Reason(s) Display of OMB Expiration Date is Inappropriate

FDA will display the OMB expiration date as required by 5 CFR 1320.5.

  18. Exceptions to Certification for Paperwork Reduction Act Submissions

There are no exceptions to the certification.

1Internet/Broadband Fact Sheet (2019). Pew Research Center. Retrieved from https://www.pewresearch.org/internet/fact-sheet/internet-broadband.

2Kaiser Family Foundation. (2018). Professionally Active Specialist Physicians by Field. Retrieved from https://www.kff.org/other/state-indicator/physicians-by-specialty-area.

3 Russell, M. L., Moralejo, D. G., Burgess, E. D. (2000). Paying research subjects: Participants’ perspectives. Journal of Medical Ethics, 26(2), 126–130.

4 U.S. Bureau of Labor Statistics (BLS), “Civilian workers by occupational and industry group,” March 2020, Table 2, total compensation for civilian workers: http://www.bls.gov/ncs/ (visited July 21, 2020).

5 BLS, Occupational Outlook Handbook, Childcare Workers, on the Internet at https://www.bls.gov/ooh/personal-care-and-service/childcare-workers.htm (visited July 21, 2020).

6 Kelly, B., Margolis, M., McCormack, L., LeBaron, P. A., & Chowdhury, D. (2017). What affects people’s willingness to participate in qualitative research? An experimental comparison of five incentives. Field Methods, 1-18.

7 Office of Management and Budget. (2006). Questions and Answers When Designing Surveys for Information Collections. Washington, DC: Office of Information and Regulatory Affairs, OMB. Retrieved from: https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/pmc_survey_guidance_2006.pdf

8 Wiant, K., Geisen, E., Creel, D., Willis, G., Freeman, A., de Moor, J., & Kablunde, C. (2018). Risks and rewards of using prepaid vs. postpaid incentive checks on a survey of physicians. BMC Medical Research Methodology, 18, 104.


9 Shaghaghi, A., Bhopal, R. S., & Sheikh, A. (2011). Approaches to recruiting ‘hard-to-reach’ populations into research: A review of the literature. Health Promotion Perspectives, 1(2), 86-94.

10 Shettle, C., & Mooney, G. (1999). Monetary incentives in US government surveys. Journal of Official Statistics, 15(2), 231.

11 Martinez-Ebers, V. (1997). Using monetary incentives with hard-to-reach populations in panel surveys. International Journal of Public Opinion Research, 9(1), 77-86.

12 Hsu, J. W., Schmeiser, M. D., Haggerty, C., & Nelson, S. (2017). The effect of large monetary incentives on survey completion: Evidence from a randomized experiment with the Survey of Consumer Finances. Public Opinion Quarterly, 81(3), 736-747.

13 Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates. Public Opinion Quarterly, 57(1), 62-79.

14 Morgan, D. L., Scannell, A. U. (1998). Planning Focus Groups. Thousand Oaks, CA: Sage.

15 Groth, S. W. (2010). Honorarium or coercion: use of incentives for participants in clinical research. Journal of the New York State Nurses Association, 41, 113.

16 Willis, G. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage.

17 Griffin, J. M., Simon, A. B., Hulbert, E., Stevenson, J., Grill, J. P., Noorbaloochi, S., & Partin, M. R. (2011). A comparison of small monetary incentives to convert survey non-respondents: A randomized control trial. BMC Medical Research Methodology, 11(1), 81.

18 Lesser, V. M., Dillman, D. A., Carlson, J., Lorenz, F., Mason, R., and Willits, F. (2001) Quantifying the Influence of Incentives on Mail Survey Response Rates and Nonresponse Bias. Paper presented at the annual meeting of the American Statistical Association, Atlanta, GA.

19 Singer, E. & Kulka, R. A. (2002). Paying Respondents for Survey Participation. In M. Ver Ploeg, R. A. Moffitt, & C. F. Citro (Eds.), Studies of Welfare Populations: Data Collection and Research Issues, Washington, D.C.: National Academy Press.

20 Groves, R. M., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68(1), 2-31.

21 Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 112-141.

22 Guyll, M., Spoth, R., & Redmond, C. (2003). The effects of incentives and research requirements on participation rates for a community-based preventive intervention research study. Journal of Primary Prevention, 24(1), 25-41.

23 Castiglioni, L., & Pforr, K. (2007, June). The effect of incentives in reducing non-response bias in a multi-actor survey. Presented at the 2nd annual European Survey Research Association Conference, Prague, Czech Republic.

24 Singer, E., Van Hoewyk, J., Gebler, N., & McGonagle, K. (1999). The effect of incentives on response rates in interviewer-mediated surveys. Journal of Official Statistics, 15(2), 217.

25 Edwards, P., Roberts, I., Clarke, M., DiGuiseppi, C., Pratap, S., Wentz, R., & Kwan, I. (2002). Increasing response rates to postal questionnaires: Systematic review. British Medical Journal, 324, 1183.
