Evaluating a Structured Reporting Template to Increase Transparency and Reduce Review Time for Healthcare Database Studies

Focus Groups as Used by the Food and Drug Administration

Appendix B2 - User Guide


OMB Control No. 0910-0497 Expiration Date: 10/31/2020



Structured Protocol and Reporting Template with Design Visualization for Non-Randomized Database Studies


User Guide v1.0


April 9, 2019


This user guide walks through the contents of a structured protocol and reporting template with design visualization and explains how to read and use the template. The template is built in Excel and PowerPoint (PPT).


The structured template can be used by investigators for technical and statistical protocol specification and/or included in appendices of manuscripts or reports. It may also be used by reviewers to better understand the scientific decisions underpinning reported findings and facilitate evaluation of validity.


We provide structured reporting and design visualization for an example study to illustrate how the fields could be filled in (see the accompanying Excel file).


Table of Contents:


This sheet provides hyperlinks to the other worksheets in the Excel file, including summary tables and appendices.


Administrative Information:


This worksheet contains basic identifying information, such as protocol title and objective. We specify that the objective must detail the PICOT (Patient, Intervention, Comparator, Outcome, Time-Horizon). This sheet also contains areas to fill in details about protocol registration, the protocol version number, contributors, funding sources, data use agreement, and Institutional Review Board coverage.


Version History:


This sheet contains columns to specify the version date, version number, log of changes, and rationale for changes made to the prior version.


Design Diagram:


At the top of the diagram, the name of the study design is provided (e.g., new initiator, active-comparator cohort design) as well as the primary analysis for the study (e.g., a comparison of Drug A versus Drug B on outcome Y).


The design diagram can be created in Excel, PPT, or another software program (see template example). It is intended to be read from top to bottom, reflecting the order of operations to create an analytic cohort from a source longitudinal healthcare database. The temporality of assessment windows is clearly shown relative to the cohort entry (“index”) date, which is considered day 0. Bracketed number ranges denote the inclusive time windows for washout, inclusion/exclusion, and covariate assessment windows as well as follow-up. Whether or not day 0 is included in an assessment window can also be visually distinguished by whether the window overlaps the vertical arrow representing the cohort entry date.


The diagram may include footnotes specifying the inclusion/exclusion criteria, covariates, and censoring criteria relevant to each assessment window.


Details of Study Population Identification:


This worksheet summarizes the most important high-level design decisions to create an analytic study population.


  1. Metadata about data source and software


This section records the calendar time range used to ascertain cohort entry (index date) as well as the calendar time range of data available for pre-index assessment windows and post-index follow-up (study period). The data source name and version are identified, as well as any sampling criteria applied (for example, the data cut only includes patients with a diagnosis of diabetes). There are sections to describe the type of data, any linkages, data conversion (e.g., to a common data model [1-3]), and software used.


  2. Index date (day 0) defining criterion


This section is where the criterion that defines the date of entry to the cohort is specified. If the study is descriptive, only one row may be filled out. An active-comparator study may have two rows, one for the exposure of interest and one for the comparator. Multiple drug comparisons may require more rows.


This section has fields to enter a brief name describing the entry criterion, the number of times a patient can enter the analytic cohort (e.g., only once or multiple entries), whether the entry criterion is required to be an incident occurrence, and a brief definition of what determines the index date for cohort entry. If the index-date-defining criterion is required to be an incident occurrence (e.g., an incident diagnosis, procedure, drug prescription, or dispensation), then there is a field to specify the washout window used to define an “incident” occurrence as well as a field defining what the patients must be incident with respect to. For example, the index date might be defined by National Drug Codes (NDC) for tablet formulations of azithromycin, but the patients must not have had exposure to azithromycin in any formulation in the 183 days prior to the index dispensation (washout window).
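The template itself prescribes no code, but the washout logic in the azithromycin example above can be sketched in a few lines. This is a minimal illustration assuming a hypothetical list of prior dispensing dates per patient; the function name and data structure are not part of the template.

```python
from datetime import date, timedelta

def is_incident(index_date, prior_exposure_dates, washout_days=183):
    """Return True if no prior exposure falls in the inclusive washout
    window [index_date - washout_days, index_date - 1]."""
    window_start = index_date - timedelta(days=washout_days)
    return not any(window_start <= d < index_date for d in prior_exposure_dates)

# A patient dispensed azithromycin tablets on the index date, whose only
# prior dispensing (any formulation) was 200 days earlier, is incident:
index = date(2018, 7, 1)
print(is_incident(index, [index - timedelta(days=200)]))  # True
print(is_incident(index, [index - timedelta(days=90)]))   # False
```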


  3. Inclusion and Exclusion Criteria


Inclusion and exclusion criteria are mirrors of each other when it comes to implementation. For example, one could include patients if they have at least 183 days of medical and drug coverage enrollment prior to the index date, or one could exclude patients if they do not. The choice of entering a criterion in the inclusion versus the exclusion section is semi-arbitrary from an operational standpoint, but in some cases a criterion may be more easily interpretable in one section than the other. For example, because of the double negation, specifying that patients not over age 65 on the index date were excluded may be more confusing than specifying that patients over age 65 were included.


Both the inclusion and exclusion criteria sections have fields to briefly describe what the criterion is conceptually as well as the order of application, relative to selection of the index date (e.g., eligible index dates are identified after inclusion/exclusion criteria are applied vs. potential index dates meeting the entry criterion are identified, and then inclusion/exclusion criteria are applied to identify the eligible index dates). There are fields to define the assessment window, relative to the index date, whether there are restrictions on care setting or diagnosis position in the algorithm to define the inclusion/exclusion criterion, and which groups or analyses the criterion is applied to (e.g., descriptive cohort 1, exposure and comparator group, sensitivity analysis 5).


Defining “observable” patient time in the healthcare data source is almost always an inclusion/exclusion criterion. When using administrative claims data, this can be measured with dates of enrollment in insurance coverage, with or without bridging of short gaps in enrollment. When using electronic health record (EHR) data, defining observable patient time may require making some strong assumptions: for example, that patient encounters are always observable, that patients are observable between the first and last recorded encounter in the record, or that patients are observable for X days before and after any recorded encounter [4]. Alternatively, one could specify inclusion based on algorithms that measure “loyalty” to a healthcare provider or EHR system [5].
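The enrollment-based definition of observable time with gap bridging can be illustrated with a short sketch. This is a hypothetical implementation, not part of the template: it assumes enrollment is given as (start, end) date spans and uses an illustrative 45-day bridging allowance.

```python
from datetime import date, timedelta

def bridge_spans(spans, max_gap_days=45):
    """Merge (start, end) enrollment spans separated by gaps of at most
    max_gap_days into continuous spans."""
    spans = sorted(spans)
    merged = [spans[0]]
    for start, end in spans[1:]:
        last_start, last_end = merged[-1]
        if (start - last_end).days <= max_gap_days:
            merged[-1] = (last_start, max(last_end, end))
        else:
            merged.append((start, end))
    return merged

def continuously_enrolled(spans, index_date, lookback_days=183, max_gap_days=45):
    """True if a single bridged span covers [index - lookback, index - 1]."""
    need_start = index_date - timedelta(days=lookback_days)
    need_end = index_date - timedelta(days=1)
    return any(s <= need_start and e >= need_end
               for s, e in bridge_spans(spans, max_gap_days))

# Example: a 17-day gap in coverage is bridged under a 45-day allowance
spans = [(date(2017, 1, 1), date(2017, 12, 15)),
         (date(2018, 1, 1), date(2018, 12, 31))]
print(continuously_enrolled(spans, date(2018, 2, 1)))  # True
```

Whether and how much to bridge is a design decision the template asks investigators to record explicitly, since different allowances yield different study populations.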


We recommend specifying in the inclusion/exclusion criteria how to handle issues such as multiple exposures on the index date or prescriptions that are missing days’ supply.




  4. Predefined Covariates


The predefined covariates section has fields to briefly define the covariate conceptually, broadly categorize it (e.g., component of comorbidity score, screening measure, healthcare utilization measure), indicate which analyses adjust for the covariate, and specify how it is modeled (e.g., continuous, categorical, binary). There are fields to define the assessment window relative to the index date, whether there are restrictions on care setting or diagnosis position in the algorithm, and which groups or analyses the covariate is measured for (e.g., descriptive cohort 1, exposure group, comparator group).


  5. Empirically Identified Covariates


Empirical identification of covariates to use in confounding control may not be relevant to all study populations or analyses; however, if such methods are used, the template includes fields to describe the identification algorithm and to specify the settings or parameters used to empirically identify covariates. This section, like the predefined covariate section, has fields to specify the assessment window relative to the index date, which analyses adjust for empirically identified covariates, how the covariates are specified in a model, whether there are restrictions on care setting or diagnosis position, and which groups or analyses the covariates are measured for.


  6. Outcome


The outcome section includes fields to briefly define the outcome conceptually and whether it is defined as an incident occurrence (if so, there is a field to specify the washout window to define “incident” occurrences). There are fields specifying whether there are restrictions on care setting or diagnosis position and which groups or analyses the outcome is measured for. Additionally, if there are published performance characteristics for the algorithm (e.g., high positive predictive value (PPV)) or performance characteristics from a subsample of the analytic cohort, there is a field to provide this information. If the outcome algorithm performance characteristics are published, the citation can be reported in the same field.


  7. Follow-up


The follow-up section is structured like a checklist, with fields to enter the day that follow-up for the outcome begins, relative to the index date, and check-box fields for censoring criteria. Next to each check box is a field requesting more detail on the specific parameter for the censoring criterion. For example, if follow-up is censored upon exposure discontinuation, then there are fields to specify how to handle the days’ supply dispensed when there are early refills (“stockpiling” algorithm) as well as the grace period allowed after the dispensation date plus days supplied, during which the patient is still counted as exposed (e.g., gaps of less than 30 days are bridged, and 30 days are added to the last days’ supply dispensed in a treatment episode).
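The grace-period example above can be sketched as a simple treatment-episode algorithm. This is an illustrative implementation under stated assumptions (dispensings sorted by date, no stockpiling of early-refill supply); the function and its parameters are hypothetical, not fields of the template.

```python
from datetime import date, timedelta

def exposure_end(dispensings, grace_days=30):
    """Return the censoring date for exposure discontinuation.

    dispensings: list of (dispensing_date, days_supply), sorted by date.
    A refill starting less than grace_days after the previous supply runs
    out extends the episode; otherwise the episode ends. Per the example
    in the text, grace_days are also added after the final days' supply.
    (Stockpiling of overlapping early-refill supply is ignored here.)
    """
    disp_date, supply = dispensings[0]
    end = disp_date + timedelta(days=supply)
    for disp_date, supply in dispensings[1:]:
        if (disp_date - end).days < grace_days:  # gap bridged
            end = disp_date + timedelta(days=supply)
        else:                                    # discontinuation
            break
    return end + timedelta(days=grace_days)

# Two 30-day fills with a 5-day gap form one episode ending 30 days
# after the second supply runs out:
print(exposure_end([(date(2018, 1, 1), 30), (date(2018, 2, 5), 30)]))
```

Because small changes to the stockpiling rule or grace period can shift person-time and outcomes between exposure groups, the template asks for these parameters explicitly rather than leaving them implicit in analysis code.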


Attrition Table:


This worksheet has a table, where each row shows the number of patients remaining after applying inclusion/exclusion criteria sequentially.
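The sequential logic behind such a table can be sketched as follows. This is an illustrative helper, not part of the template; the patient records and criterion labels are hypothetical.

```python
def attrition_table(patients, criteria):
    """Apply (label, predicate) criteria sequentially and report the
    number of patients remaining after each step."""
    rows = [("Met cohort entry criterion", len(patients))]
    for label, keep in criteria:
        patients = [p for p in patients if keep(p)]
        rows.append((label, len(patients)))
    return rows

# Hypothetical example with three patients and two criteria:
patients = [{"age": 70, "enrolled": True},
            {"age": 60, "enrolled": True},
            {"age": 72, "enrolled": False}]
criteria = [("Age > 65 at index", lambda p: p["age"] > 65),
            ("183 days pre-index enrollment", lambda p: p["enrolled"])]
for label, n in attrition_table(patients, criteria):
    print(f"{label}: {n}")
```

Because the counts depend on the order in which criteria are applied, the order recorded in this worksheet should match the order specified in the inclusion/exclusion section.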


Power Calculation:


This section contains fields to specify the software used, what is being calculated (power or sample size), and assumptions for the calculations. For each parameter assumption, the primary assumption and the range considered are specified. There are also fields to specify the sources used to select the estimated parameters. The power or sample size calculations across the range of assumed parameters may be displayed in tabular or visual form as needed. The template contains assumptions to calculate power for a comparison of two proportions. The sheet can be modified to reflect the assumptions necessary to do other power calculations.
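As one way to carry out the two-proportion calculation the template anticipates, the standard normal approximation can be used. This sketch is not the template's prescribed method (the template records assumptions, not code), and the parameter values shown are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power for a two-sided test comparing two independent
    proportions, using the normal approximation with unpooled variance."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return z.cdf(abs(p1 - p2) / se - z_alpha)

# Hypothetical assumptions: 10% vs. 5% event risk, 500 patients per group
print(round(power_two_proportions(0.10, 0.05, 500), 3))
```

Tabulating this function over the range of assumed parameters produces exactly the kind of sensitivity display the sheet describes.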


Analysis Specification:


The analysis specification tab has sections for the primary, secondary, and subgroup analyses, as well as sensitivity analyses that involve only changes in analysis parameters. Sensitivity analyses requiring changes in analytic cohort composition would require separate study specification worksheets.


Glossary of Terminology:


Since the language used by different groups to reflect the same concept varies and, sometimes, the same words are used to reflect different concepts, this sheet defines how terminology is used in the template.


Appendices:


The template includes appendices that specify the algorithms used to define code-based measures at each step of identifying the analytic population, including the study population entry criterion (exposure), inclusion/exclusion criteria, covariates, and outcome. These appendices include the name of the concept (e.g., diabetes), the codes used to define the concept, the type of code, and the code description. Dose and route are also provided for drug concepts.


Additional appendices may be relevant; for example, if data is converted to a Common Data Model, the decisions applied to categorize one or more source data variables to the limited value set allowable for encounter type in the Common Data Model could be presented in an appendix (see example Appendix E). Another example could be an appendix specifying how care setting is defined for claims occurring between an inpatient admission and discharge: are they considered inpatient claims, or are they categorized by the nominal place of service?


These appendices are intended to supplement the analytic study population worksheet, providing in-depth detail about specific codes and other algorithms.




References


1. Sentinel Common Data Model, October 2018. https://www.sentinelinitiative.org/sentinel/data/distributed-database-common-data-model/sentinel-common-data-model

2. PCORnet Common Data Model (CDM). https://archive.pcornet.org/pcornet-common-data-model/

3. OMOP Common Data Model. https://www.ohdsi.org/data-standardization/the-common-data-model/

4. Rassen JA, Bartels DB, Schneeweiss S, Patrick AR, Murk W. Measuring prevalence and incidence of chronic conditions in claims and electronic health record databases. Clin Epidemiol. 2019;11:1-15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6301730/

5. Lin KJ, Singer DE, Glynn RJ, Murphy SN, Lii J, Schneeweiss S. Identifying Patients With High Data Completeness to Improve Validity of Comparative Effectiveness Research in Electronic Health Records Data. Clin Pharmacol Ther. 2018;103(5):899-905. https://www.ncbi.nlm.nih.gov/pubmed/28865143



Author: Shirley Wang, Ph.D.