
FDA DOCUMENTATION FOR THE GENERIC CLEARANCE

OF COMMUNICATION TESTING FOR DRUG PRODUCTS (0910-0695)



TITLE OF INFORMATION COLLECTION: Healthcare Professional Interviews: Risk Processing for Newly Promoted Prescription Drugs


DESCRIPTION OF THIS SPECIFIC COLLECTION

  1. Statement of need:


The Food and Drug Administration’s (FDA) Center for Drug Evaluation and Research (CDER), in collaboration with Fors Marsh Group (FMG) and MedStar Health (collectively the FMG Team), is conducting research on how healthcare professionals (HCPs) process risk information for newly promoted prescription drugs. Because physicians and other HCPs have many duties in their busy daily schedules, they may have only limited time to read the complete risk information for new drug products. These time constraints could lead HCPs to rely too heavily on past information about similar drugs or to read only some sections of the presented risk information, leaving them with an incomplete understanding of drug risks and adverse side effects. Fully and critically processing risk information is especially vital because promotional materials tend to emphasize drug benefits while downplaying potential adverse effects, despite governmental and organizational guidelines. Overall, limited cognitive processing could lead to impaired prescribing decisions and the exposure of patients to higher risks.


  2. Intended use of information:


We will use the results of this research to better understand how HCPs process risk information for newly promoted prescription drugs. Moreover, findings will inform future research projects designed to examine the presentation of risk information in promotional pieces.

  3. Description of respondents:


The study will consist of 120 in-person individual in-depth interviews with HCPs who have prescribing authority, plus an additional 20 participants who will pilot the methods and procedures. General inclusion and exclusion requirements built into the screening protocol will ensure that all HCPs are currently practicing, spend substantial time on direct patient care, and have not recently participated in market research.


Additional inclusion and exclusion participant requirements will be implemented via soft recruiting quotas. These soft quotas, as detailed below, will help to screen participants into appropriate categories.


Prescriber Type


Four different categories of HCPs will be interviewed for this study: primary care physicians (PCP), specialists (endocrinologists and neurologists), nurse practitioners (NP), and physician assistants (PA). We will aim to recruit 30 participants from each of these four groups, for a total of 120 participants. The pilot, designed to test the methods and procedures prior to full-scale data collection, will involve five participants from each of the four groups, for a total of 20 participants. Sample sizes for the full-scale data collection will be large enough to reach data saturation and to draw reliable conclusions about qualitative differences according to HCP type (e.g., PCP vs. specialist). Categorization will be based upon self-identified medical specialty.


Demographics


The gender, race/ethnicity, and ages of the participating HCPs will be self-identified by participants. To represent a variety of urbanicity (i.e., urban, suburban, and rural), we will recruit from three different MedStar hospital locations (detailed below). We will aim to include a mix of demographic segments to ensure a diversity of viewpoints and backgrounds.


Location/Description | PCP | Specialist | NP | PA | Total
Urban: MedStar Washington Hospital Center, Washington, DC | 10 | 10 | 18 | 16 | 54
Suburban: MedStar Southern Maryland Hospital Center, Clinton, MD | 10 | 10 | 8 | 10 | 38
Rural: MedStar St. Mary’s Hospital, Leonardtown, MD | 10 | 10 | 4 | 4 | 28
Total | 30 | 30 | 30 | 30 | 120



  4. Date(s) to be conducted and location(s):


We plan to conduct interviews in the Fall of 2018.


  5. How the Information is being collected:


Recruitment Procedures


Identifying and Contacting Potential Participants


Based upon project specification and working in cooperation with FMG, MedStar will identify potential participants for screening. Contact will be made via emails to listservs and posted paper flyers in various MedStar locations providing relevant study information and requirements. Interested participants will contact MedStar via email or phone call and undergo screening over the phone to determine eligibility.


If needed to increase enrollment numbers, MedStar will also arrange calls to potential participants from existing phone number lists to inform them about the study and conduct phone screening for HCPs who express interest in participating.


Recruitment Procedures


Recruiting is expected to begin three to four weeks before the start of a study’s fielding period; however, we may revise the recruiting lead time as needed. Given prescribers’ busy schedules, we anticipate that many sessions will be scheduled during timeslots in mornings and evenings, with some lunchtime sessions. In coordination with MedStar, FMG will follow best practices in terms of session scheduling and confirmation, including emails and phone calls.


If there are any no-shows or last-minute cancellations, which is typical and to be expected, the FMG Team will either reschedule the HCP or recruit a replacement and will plan for backup fielding time at the end of the fielding period to accommodate rescheduled or replacement interviews. This approach will allow us to ensure we meet the desired sample as well as to minimize costs associated with the typical approach of over-recruitment.


Scheduling Participants


After completing the screening questionnaire, HCPs who qualify for the study will be scheduled for an available time slot. As participants schedule interview times, MedStar will send FMG regular updates about scheduled interviews, including each participant’s first name and last initial, screening information, and interview date/time/location.


Shortly before each interview, MedStar will send a confirmation email to each participant to remind him or her of the date, time, and location of the scheduled interview. MedStar will work with participants who need to reschedule and, if necessary and in coordination with FMG, will recruit additional participants.


Data Collection Procedures


The FMG Team will conduct in-person individual in-depth interviews, each anticipated to last one hour. Data collection will be conducted at a location of each participant’s choice, generally his or her place of work or another of the three specified MedStar hospital locations, at his or her convenience. Each interview team will consist of one FMG employee and one MedStar employee, with one person serving as moderator and the other acting as notetaker and operating the eye-tracking and memory task software.


Participants will first be given all relevant consent information and will provide verbal consent before proceeding with the interview. After answering a few questions about their background, participants will view a mock promotional piece for a fictional diabetic neuropathy drug, during which their eye movements will be tracked. We will use the Tobii Pro X2-60 system for tracking eye movements on the screen-based stimulus. Participants will be briefly introduced to the eye tracker and will be instructed to complete a five-point calibration process that takes approximately 30 seconds. After a successful calibration, participants will be informed that they do not need to remain perfectly still during the session, but that large movements may cause the eye tracker to lose track of their eyes. Participants will also be informed that the moderator may provide them with verbal prompts to adjust their positioning (e.g., moving up or back in the chair) during the session so that their eyes can be better located by the eye tracker. If a participant does not calibrate with acceptable accuracy after five attempts, we will proceed with the interview knowing that the participant will not be included in the eye-tracking sample. We expect about 10% of participants to either not track or not track well enough to be included in the analyses. We typically set a threshold for the percentage of gaze samples that must be recorded for a participant to be included in the eye-tracking sample. This threshold is typically set after running pilot sessions, since the exact value is study and stimuli specific.
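For illustration, a minimal sketch in Python of how such a gaze-sample threshold could be applied is shown below. The 80% cutoff and the function name are placeholders rather than the study’s actual criterion, since the memo states the real threshold will only be set after the pilot sessions.

    # Hypothetical sketch of a gaze-sample data-quality screen; the 80% cutoff is a
    # placeholder, since the actual threshold will be set after the pilot sessions.
    GAZE_SAMPLE_THRESHOLD = 0.80  # assumed; study- and stimuli-specific in practice

    def eligible_for_eye_tracking(recorded_samples: int, expected_samples: int,
                                  threshold: float = GAZE_SAMPLE_THRESHOLD) -> bool:
        """Return True if the share of recorded gaze samples meets the threshold."""
        if expected_samples == 0:
            return False
        return recorded_samples / expected_samples >= threshold

    # Example: 5,400 of 7,200 expected samples recorded (75%) falls below the cutoff.
    print(eligible_for_eye_tracking(5400, 7200))  # False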


Immediately after viewing the promotional piece, participants will complete a memory task via E-Prime, a behavioral experiment software program, to assess their recall and comprehension of information from the promotional materials. This task will include the following sections:


  • Part 1: A distraction task (approximately one to two minutes in duration). The purpose of the task is to offload working memory by asking participants to complete a task that is not directly related to the stimuli.


  • Part 2: An explicit recall task (approximately three to four minutes in duration). The purpose of this task is to determine free recall of risk information, as well as other related information such as prescribing considerations, from the promotional piece. Participants will be consecutively presented with five questions during the task, each with an allowed maximum of one minute. Each question will be accompanied by a text box for free responses, into which participants will type their answers. Accuracy will be measured in Part 2.


  • Part 3: Indirect memory task (approximately two to three minutes in duration). The purpose of this task is to assess participants’ ability to recognize risk information, as well as other related information, from the promotional piece. Participants will be consecutively presented with five questions during the task, each with an allowed maximum of 30 seconds. Each question will consist of a pixelated/blurred image along with five options. Participants will be instructed to identify the correct answer from the options presented. Response time and accuracy will be measured in Part 3.


  • Part 4: Ranking task (approximately three minutes in duration). Participants will be asked to rank the importance of 10 informational items presented in the promotional piece. The purpose of this task is to determine which information healthcare practitioners prioritize. It will be a forced-rank task with no ties permitted. The outcome variable in Part 4 is the self-reported importance of information items in the promotional piece.


Following the memory task, the moderator will ask participants to describe how they went through the promotional materials, what they thought about the information, and how that information would influence their risk-benefit analysis when deciding whether to prescribe the drug to their patients.


FMG will audio record all interview sessions and will provide a remote login so that FDA and other study staff can observe the pilot test sessions live, with views of the study room as well as the participant’s screen. Observers will be in listen-only mode; only the moderator and researcher will interact with the participant. FMG will video-capture participants’ screens and record their eye movements as they go through the stimuli. No webcams will be used, and participants’ faces will not be video recorded.


If time allows, a “false close” will be implemented during which observers will have an opportunity before the close of each interview to ask individual participants additional questions based on their discussion. FMG will work with the moderator to ensure that feedback from FDA observers and/or updates to the discussion guide are incorporated and implemented.


  6. Confidentiality of Respondents:


Assurance of Privacy Provided to Participants


All data will be collected with an assurance that the respondents’ responses will remain private to the extent allowable by law. Both the consent letter and the moderator’s guide will contain a statement emphasizing that no one will be able to link a participant’s identity to his or her responses. Researchers will not tie respondents’ personal information to their answers. Additionally, moderators will not ask participants to provide identifying information as part of their responses (e.g., name of hospital, names of colleagues); however, in order to establish a rapport with the participant, moderators will address participants by their first name. All analyses will be done in the aggregate and respondent information will not be appended to the data file used. Further, no identifying information will be included in the data files delivered by FMG to FDA.


All sessions will be audio recorded for reporting purposes. The pilot sessions will be livestreamed for observers. Only FDA personnel and other study team members directly involved in the research will view the livestream, and the livestream video will not be recorded. Livestreaming connections will be secure, using industry-standard firewalls and security practices. All data will be encrypted in transit using HTTPS. All equipment will be operated and maintained according to industry-standard practices, and all software validated using industry-standard quality assurance practices. Audio recordings will be used to create transcriptions of the interview sessions for reporting purposes, and then destroyed after final reporting. The informed consent letter will contain language that notifies participants of the audio recording, screen capturing, eye tracking, and livestreaming. Before each interview begins, the moderator will confirm consent by receiving verbal affirmation from the participants to record and livestream the session. Due to the importance of complete data, in the event verbal consent for the audio recording is not given, the interview will not proceed and efforts will be made to schedule a replacement interview.


After data collection is completed, FDA will have copies of transcripts of all audio-recorded interviews. These transcripts will be provided to FDA to provide a written record of the sessions. To ensure participant privacy, all PII, including first names, will be redacted from the transcripts before delivery to FDA.


Record Keeping and Confidentiality


The following procedures will be used to ensure participant confidentiality before, during, and after fielding:


1. Full names of the participants will be used only for scheduling purposes and will not be used on any interview materials provided to FDA (e.g., typed lists of participants); instead, each participant will be assigned a unique ID by which he or she will be referred. Moderators will only address the participants by their first name (e.g., Mary). Names will be redacted from all transcripts delivered to FDA.


2. Transferring of screening- and scheduling-related information between MedStar and FMG will be conducted via a password-protected, secure file transfer protocol (FTP) site.


3. Screening-related information will not be tied to any PII; instead, it will be identified and matched by the assigned unique ID. Scheduling information will be limited to first name, last initial, email, and phone number(s), and will not be provided to FDA.


4. Transcripts and reports will not contain any PII.


5. Respondents will not be tied to their individual responses, and all analyses will be conducted in the aggregate (i.e., any quotes used in reporting will not be attributed to specific participants).


Contractors will not share personal information regarding participants with any third party without participants’ permission unless it is required by law to protect their rights or to comply with judicial proceedings, court orders, or other legal processes. This possibility will be disclosed in the informed consent form. Further, if a participant makes a direct threat of harm to himself or herself or others, FMG reserves the right to take action out of concern for him or her and others.


Any transcript or report delivered to FDA will not include PII. De-identified transcripts will be used by FDA to assist in material development and to provide FDA with records of the sessions. Audio recordings will be deleted after reporting is finalized, with transcripts retained as the record of the sessions. All identifying information, including information collected during screening, will be kept on a separate password-protected computer and/or in locked cabinets, accessible only by FMG, for a period of three years, after which it will be destroyed by securely shredding documents or permanently deleting electronic information. In the case of a breach of confidentiality, appropriate steps will be taken to notify participants.


All data will also be maintained consistent with the FDA Privacy Act and applicable System of Records Notice #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).


  7. Amount and justification for any proposed incentive:


Participants will receive an incentive as a token of appreciation for participating in the interviews. MedStar will provide each participant a check following interview completion.

The FMG Team will offer the following incentive amounts for 60-minute sessions:


Target Audience | Proposed Incentive
PCPs | $150
Specialists | $175
NPs and PAs | $100


The proposed incentive amounts are below typical market incentive rates. Although market incentive rates for physicians are approximately $250 to $350 for similar research activities, with higher rates for specialists, the flexibility that our interview methodology affords, such as minimal travel time and sessions conducted around the HCPs’ schedules, offsets the lower honorarium.


Incentives must be high enough to offset the burden placed on respondents with respect to their time and cost of participation, as well as to provide enough motivation to participate in the study rather than in another activity. Particularly in the case of HCPs, incentives need to be high enough to entice them to make time in their busy schedules to participate in the study.


As participants often have competing demands for their time, incentives are used to encourage participation in research. When applied in a reasonable manner, incentives are not an unjust inducement and are an approach that acknowledges respondents for their participation. The use of incentives treats participants justly and with respect by recognizing and acknowledging the effort they expend to participate. In this particular study, we are asking HCPs to provide thought-intensive, open-ended feedback on materials that require a high level of engagement.


If the incentive is inadequate, however, participants might agree to participate and then not show up or drop out early. Low participation may result in inadequate data collection or, in the worst cases, loss of government funds associated with recruitment, facility fees, and moderator and observer time. Additionally, low participation can cause a difficult and lengthy recruitment process that, in turn, can cause delays in launching the research, both of which lead to increased costs.


To address below-market incentive rates and ensure successful recruitment and fielding, we will coordinate closely to monitor recruitment status. Additionally, we will ensure that other considerations are in place to increase likelihood of participation, such as:


  1. Ensuring an adequate recruiting period before the start of fielding (as well as ongoing recruiting as needed during the fielding period);


  2. Availability of sessions at time slots that, in our experience, have been popular among HCPs (for example, early morning, evening, and lunchtime); and


  3. Having the flexibility and appropriate staff availability to run concurrent sessions to leverage popular session times.


  8. Questions of a Sensitive Nature:


None.


  9. Description of Statistical Methods (i.e., Sample Size & Method of Selection):


Qualitative Analyses


Our approach to qualitative analysis focuses on identifying the key underlying themes from participants’ discussions. A full qualitative analysis will be conducted on all data collected from semi-structured interviews. We have developed a standardized protocol to guide content-coding efforts. This protocol draws on best practices and covers all aspects of the coding process, from developing the codebook and assessing interrater reliability to completing final coding and merging coded variables into the final data set for further analysis.


All open-ended comments are first “sanitized” (removing obscenities, proper names, and any case-specific or sensitive information) and parsed to establish the appropriate unit of analysis. We will develop a coding shell as a starting point to refine a coding scheme and will produce a codebook with detailed definitions and examples to help coders differentiate among themes and reduce ambiguity. A typical coding scheme establishes organizational and thematic codes and descriptions of the associated concept or theme, specifying whether single or multiple codes should be assigned and what constitutes a nonresponse. Using these themes, at least two coders independently rate a random subset of the transcribed responses, and agreement/reliability (Cohen’s kappa) is then calculated to establish both the reliability of the coders and the generalizability of the coding instrument. As part of this process, 25% of the responses will be double-coded to confirm that adequate interrater reliability is obtained. After intercoder reliability is established, the coders will single-code the remaining transcripts.
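For illustration, a minimal sketch of the double-coding reliability check using Cohen’s kappa appears below; the coded labels are invented examples rather than study data, and scikit-learn is assumed only as one convenient implementation.

    # Hypothetical sketch: two coders' theme codes for the same eight response units.
    from sklearn.metrics import cohen_kappa_score

    coder_a = ["risk", "benefit", "risk", "dosing", "risk", "benefit", "none", "risk"]
    coder_b = ["risk", "benefit", "dosing", "dosing", "risk", "benefit", "none", "risk"]

    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"Cohen's kappa = {kappa:.2f}")  # values of .70 or higher treated as "good" agreement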


We will use NVivo as the qualitative analysis tool and we have developed a process to quickly and efficiently code data and turn around analyses for reporting qualitative research findings:


  1. Codebook Development. Before coding transcripts, we will develop a codebook based on the discussion guide. The codebook will also incorporate major themes of interest to the research objectives. If desired, any emergent themes from fielding can be added to the codebook once mutually agreed upon by FDA and FMG.

  2. Coder Training. Coders will undergo a training session to facilitate shared understanding of the coding manual and to discuss established coding rules.

  3. Codebook Refinement. The codebook will then be applied to a single transcript by the coding team. The team will meet to review discrepancies, resolve disagreements, and further refine the codebook (i.e., by revising or removing categories and subcategories).

  4. Establishing Intercoder Reliability. As the coding process starts, we will have coders double code specified transcripts as part of establishing intercoder reliability. Establishing intercoder reliability is approached in an iterative fashion: the coders apply the coding system to the same two transcripts, intercoder reliability is calculated,1 and the coders then meet to review any disagreements and to calibrate their consistency after coding additional sets of transcripts. Although the goals of the study may dictate a specific threshold of reliability, FMG considers a minimum kappa coefficient of .70 across all pairs of coders to indicate “good” agreement.

  5. Completion of Coding and Thematic Analysis. Once reliability is established, coders will complete coding transcripts and the resulting content at the various “nodes” (or codes) will be used to facilitate a systematic, thematic review of the qualitative data. These results will ultimately guide the research team’s efforts to identify insights, draw conclusions, and make actionable recommendations in reporting.

To be efficient, a member of the research team will be designated as the data manager. This researcher will be responsible for maintaining quality control of the data, which includes maintaining audio recordings, importing full transcripts into NVivo, maintaining backup files, facilitating coding assignments, merging coded transcripts, and checking intercoder reliability of double-coded transcripts. This systematic approach to analyzing qualitative data is essential for the reporting to meet the standards of peer-reviewed academic publication, should FDA choose to submit this study for consideration.


Quantitative Analyses


Eye tracking is included as a proxy measure for how attention is allocated and, thus, provides insight about health care practitioners’ engagement with the content, visual processing order, and findability of information. The eye-tracking data will be analyzed in the context of the additional data sources from the interviews: self-reported comments, observable participant behavior, and task performance. The eye-tracking analyst will produce gaze plot and heat map visualizations of the eye-tracking data. Gaze plots will be produced for each participant. A random sample of gaze plots will be evaluated qualitatively to uncover trends in gaze path patterns between and within all participants. Heat maps, based on fixation counts and durations, will be produced at the participant group level (e.g., PCPs, NPs, specialists) for each of the audiences. Each heat map will also be evaluated qualitatively to compare fixation trends within and across participant groups. A second researcher will review a random sample of gaze plots and the full set of heat maps, independently determine eye-movement trends, and compare conclusions with the primary analyst to ensure the validity and accuracy of the results.


Eye-tracking metrics, based on fixation counts and durations, will be produced and analyzed for each of the areas of interest (AOIs). Metrics will be categorized using the following structure:


  1. AOI metrics:

    1. Number of fixations on an AOI

    2. Number of fixation revisits on an AOI

    3. Total dwell time (fixation duration) on an AOI

    4. Percentage of time on an AOI

  2. Cognitive processing metric:

    1. Average fixation duration

  3. Target findability metrics:

    1. Percentage of participants who fixated on the target

    2. Number of fixations before first fixation on the target

    3. Time to first fixation on the target
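As an illustration of how the AOI metrics above could be computed from a fixation-level export, a minimal Python sketch follows; the column names and the six example fixations are hypothetical and do not reflect the actual Tobii export schema or the study’s analysis code.

    # Hypothetical fixation-level data: one row per fixation, with the AOI it landed on,
    # its duration, and its onset time (all values invented for illustration).
    import pandas as pd

    fixations = pd.DataFrame({
        "participant": [1, 1, 1, 1, 2, 2],
        "aoi": ["risk_text", "benefit_text", "risk_text", "logo", "risk_text", "benefit_text"],
        "duration_ms": [220, 180, 260, 90, 310, 150],
        "onset_ms": [0, 220, 400, 660, 0, 310],
    })

    # Fixation count, total dwell time, average fixation duration, and time to first
    # fixation for each participant-by-AOI combination.
    per_aoi = (fixations
               .groupby(["participant", "aoi"])
               .agg(fixation_count=("duration_ms", "size"),
                    total_dwell_ms=("duration_ms", "sum"),
                    avg_fixation_ms=("duration_ms", "mean"),
                    time_to_first_fixation_ms=("onset_ms", "min")))

    # Percentage of each participant's viewing time spent on each AOI.
    per_aoi["pct_of_time"] = (per_aoi["total_dwell_ms"]
                              / per_aoi.groupby(level="participant")["total_dwell_ms"].transform("sum"))
    print(per_aoi)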

We will analyze the eye-tracking metrics to determine trends in how attention was allocated to the written risk communication language and other AOIs on the page. FMG will also incorporate results of the memory task into the model of eye-movement patterns. Thus, the model will assess how AOI metrics on the promotional material affect subsequent memory for the informational and design elements in the promotional material.


FMG plans to use generalized linear regression models to analyze the effect of fixations on AOIs on recall performance from the memory task. Recall performance on the explicit memory task will consist of accuracy (dichotomous data) and recall performance on the indirect memory task will consist of accuracy (dichotomous data) as well as latency (continuous data). A logistic regression model will be used to analyze the effect of attention on explicit memory task performance. A linear regression model will be used to analyze the effect of attention on indirect memory task performance.
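For illustration only, a minimal sketch of such models using statsmodels is shown below; the variable names and simulated data are placeholders rather than the study’s actual analysis code.

    # Hypothetical sketch of the planned regression models; all data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 120
    df = pd.DataFrame({
        "fixation_count": rng.poisson(12, n),    # fixations on a risk AOI (simulated)
        "dwell_time_s": rng.gamma(2.0, 1.5, n),  # total dwell time on the AOI (simulated)
        "recalled": rng.integers(0, 2, n),       # explicit-task accuracy, 0/1 (simulated)
        "latency_s": rng.gamma(3.0, 0.8, n),     # indirect-task response latency (simulated)
    })

    # Logistic regression: attention metrics predicting explicit recall accuracy.
    logit_fit = smf.logit("recalled ~ fixation_count + dwell_time_s", data=df).fit()
    print(logit_fit.summary())

    # Linear regression: attention metrics predicting indirect-task response latency.
    ols_fit = smf.ols("latency_s ~ fixation_count + dwell_time_s", data=df).fit()
    print(ols_fit.summary())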


Increases in total fixation counts and fixation durations are hypothesized to increase the accuracy and reduce the latency of the subsequent recall of information during the memory task. The results from the study are anticipated to inform our understanding of how healthcare practitioners visually process information from a pharmaceutical promotional piece and the likelihood of their recalling that information during direct, in-person patient care. Furthermore, we will explore the eye-movement patterns that may be more likely to result in proper encoding for long-term memory retrieval. Accurate and efficient recall of information during the memory task will occur when the total amount of information stored in long-term memory exceeds the threshold for retrieval. In addition to differences in eye-movement patterns, other influences on proper encoding into long-term memory need to be taken into account, such as prior knowledge, motivation to remember the information, and cognitive ability to retrieve the information from long-term memory.


BURDEN HOUR COMPUTATION (number of responses × estimated response or participation time in minutes ÷ 60 = annual burden hours):


Type/Category of Respondent | No. of Respondents | Participation Time (hours) | Burden Hours
Number to complete the screener | 225 | 0.08 (5 min.) | 18
Number to complete the study (included in the number to complete the screener) | 140 | 1.00 (60 min.) | 140
Total | | | 158
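As an arithmetic check of the figures above (0.08 hours is approximately 5 minutes), a minimal sketch:

    # Burden hours = number of respondents x participation time in hours, rounded.
    rows = {
        "screener": (225, 0.08),  # respondents, hours per response
        "study": (140, 1.00),
    }
    burden = {name: round(n * hours) for name, (n, hours) in rows.items()}
    print(burden, "total =", sum(burden.values()))  # {'screener': 18, 'study': 140} total = 158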



REQUESTED APPROVAL DATE: May/June 2018


NAME OF PRA ANALYST & PROGRAM CONTACT:


Ila S. Mizrachi

Paperwork Reduction Act Staff

[email protected]

(301)796-7726


Kevin Betts, Ph.D.

Social Science Analyst

[email protected]

240-485-6252


FDA CENTER: Center for Drug Evaluation and Research, Office of Prescription Drug Promotion

1 Viera, A. J., & Garrett, J. M. (2005). Understanding interobserver agreement: The kappa statistic. Family Medicine, 37(5), 360–363.

