
OMB: 0935-0262





SUPPORTING STATEMENT


Part A








TeamSTEPPS® Stakeholder Surveys for AHRQ’s ACTION III Diagnostic Safety Capacity Building Contract Task 3 (TORFP: 75P00119R00265)






Version: January 13, 2022







Agency for Healthcare Research and Quality (AHRQ)



Table of contents


A. Justification
1. Circumstances that make the collection of information necessary
2. Purpose and Use of Information
3. Use of Information Technology
4. Efforts to Identify Duplication
5. Involvement of Small Entities
6. Consequences if Information Collected Less Frequently
7. Special Circumstances
8. Federal Register Notice and Outside Consultations
9. Payments/Gifts to Respondents
10. Assurance of Confidentiality
11. Questions of a Sensitive Nature
12. Estimates of Annualized Burden Hours and Costs
13. Estimates of Annualized Respondent Capital and Maintenance Costs
14. Estimates of Total and Annualized Cost to the Government
15. Changes in Hour Burden
16. Time Schedule, Publication and Analysis Plans
17. Exemption for Display of Expiration Date
List of Attachments
References





A. Justification

1. Circumstances that make the collection of information necessary


The mission of the Agency for Healthcare Research and Quality (AHRQ) set out in its authorizing legislation, The Healthcare Research and Quality Act of 1999 (see http://www.ahrq.gov/hrqa99.pdf), is to enhance the quality, appropriateness, and effectiveness of health services, and access to such services, through the establishment of a broad base of scientific research and through the promotion of improvements in clinical and health systems practices, including the prevention of diseases and other health conditions. AHRQ shall promote health care quality improvement by conducting and supporting:

1. Research that develops and presents scientific evidence regarding all aspects of health care; and


2. Synthesis and dissemination of available scientific evidence for use by patients, consumers, practitioners, providers, purchasers, policy makers, and educators; and


3. Initiatives to advance private and public efforts to improve health care quality.


AHRQ conducts and supports research and evaluations, and supports demonstration projects, with respect to (A) the delivery of health care in inner-city areas, and in rural areas (including frontier areas); and (B) health care for priority populations, which shall include (1) low-income groups, (2) minority groups, (3) women, (4) children, (5) the elderly, and (6) individuals with special health care needs, including individuals with disabilities and individuals who need chronic care or end-of-life health care.


Background for this information collection


Delayed, wrong, and missed diagnoses, or diagnostic errors, account for an estimated 40,000 to 80,000 patient deaths each year.1,2 Diagnostic errors are the most harmful type of medical error3 and are responsible for 33% of malpractice claims that result in permanent injury or death of the patient.4 One third of diagnostic-related malpractice claims have one or more communication breakdowns contributing to the event.5


The diagnostic process relies on well-coordinated activities. In the 2015 report Improving Diagnosis in Health Care, the National Academies of Sciences, Engineering, and Medicine (NASEM) identified the need for more effective teamwork in the diagnostic process among health care providers, patients, and their family members. Patient-provider encounters, diagnostic referrals, and daily team huddles are opportunities where improving communication among providers related to diagnosis may mitigate error.6,7


TeamSTEPPS®, created and refined over the years by AHRQ, is an evidence-based framework to optimize team performance across the health care delivery system. The various TeamSTEPPS versions are based on four teachable-learnable skills: Communication, Leadership, Situation Monitoring, and Mutual Support.


MedStar Health Research Institute (MHRI) was awarded a contract by AHRQ in 2019 and received OMB fast-track clearance (OMB control number 0935-0179, expiration date 11/30/23) to provide program support and expertise related to improving diagnostic safety and quality across five distinct contract tasks. Task 3 of the contract is to develop, pilot test, and promote a TeamSTEPPS® course to improve communication among providers related to diagnosis. TeamSTEPPS® to Improve Diagnosis provides communication strategies, methods to improve intra-professional communication and communication during the referral process, and opportunities to practice mutual support and situation monitoring during the diagnostic process. It includes an educational module for leaders on strategies to facilitate improved communication with and among providers related to diagnosis. It also includes a Team Assessment Tool for Improving Diagnosis.


The Team Assessment Tool is an instrument developed as a method of self-assessment, with the goal of helping teams reflect on their current diagnostic and teamwork practices. In addition, it orients them to the repertoire of tools within the TeamSTEPPS for Improving Diagnosis course that are available to support improvement efforts. The Team Assessment Tool asks participants to complete self-assessment ratings as a mechanism to identify strengths and opportunities for improvement in unit-based teamwork. The unit-level aggregate results of the assessments help unit leaders identify priorities for training via use of course modules and specific interventions with their diagnostic improvement teams.


We are requesting OMB approval to conduct psychometric testing on the TeamSTEPPS Team Assessment Tool for Improving Diagnosis.


AHRQ would like to further develop this Team Assessment Tool into a measurement instrument, expanding on its intended use as an educational activity and formative assessment. The opportunity to provide evidence (via publication in peer-reviewed journals) that the tool is both valid and reliable will strengthen its acceptance in the care delivery community and provide a scientifically sound method for teams to assess changes in performance over time. To ensure validity and reliability, the instrument requires psychometric testing.


Psychometrics is the construction and validation of measurement instruments and the assessment of whether these instruments are reliable (consistent in measurement) and valid (accurate in measurement). Reliability and validity indicate how well a method, technique, test, or instrument truly measures what it intends to measure. Psychometric testing through design reduces non-random error in measurement, adds reliability and validity to the instrument, increases the strength of evaluation findings, and, most importantly, adds population-level generalizability (external validity), because results from a psychometrically tested instrument can be compared with other units in the same setting or with other settings using the same instrument. With psychometric testing, the Team Assessment Tool has the potential to become a true measurement instrument for assessing changes over time and to introduce options for program evaluation and comparability within and among healthcare settings.


The Team Assessment Tool instrument will undergo remote usability testing in survey form to refine its questions. To execute this task, the contractor has assembled an interprofessional team to apply any or all of the following methods for generating reliability and validity evidence applicable to this specific tool: 1) parallel forms reliability, 2) internal consistency reliability, 3) inter-rater reliability, 4) content validity, and 5) construct validity, using a multitrait-multimethod matrix and/or known-groups testing.
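As a concrete illustration of one of these methods, the sketch below computes internal consistency reliability (Cronbach's alpha) for a small set of item ratings. This is a minimal sketch under illustrative assumptions: the respondent counts, item counts, and ratings are hypothetical and are not drawn from the Team Assessment Tool itself.

```python
# Illustrative sketch only: internal consistency reliability (Cronbach's alpha)
# for a respondents-by-items matrix of numeric ratings. Data are hypothetical.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of numeric ratings."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings from 5 respondents on 4 items (1-5 scale)
example = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(example):.2f}")
```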


The MedStar team has conducted precursor psychometric testing on the Team Assessment Tool, which included the following: 1) item wording and scale refinement, 2) project team subject matter expert content review, 3) non-project team subject matter expert review, 4) end-user feedback, and 5) instrument refinement. This work puts the reliability and validity of the instrument's indicators at an optimal starting point for full psychometric testing.


Full psychometric testing of this instrument means the scaling must be evaluated extensively, which will require a sample of at least 359 individual care team members (physicians, nurses, ancillary staff, etc.) from diverse clinical settings to participate in a 15-minute, anonymous, online survey distributed via a shared electronic survey link. Individual care team members will be recruited from across nine health systems or care settings. The survey will ask participants to read through and complete the questions; participants will not be privy to the results of the survey. These service-delivery-focused, voluntary collections fall under the fast-track PRA process.


The MedStar team will analyze this sample of results to determine the stability of the instrument and its indicators, assessing parallel measurements; homogeneity among indicators; concurrent, convergent, and discriminant validity; the latent constructs of the tool; the extent to which measures of the same concept correlate or diverge; and the instrument's ability to discriminate between groups with different levels of experience and familiarity with safety culture. It is important to note that the responses on the surveys are not being evaluated; rather, the consistency with which the questions are answered is being evaluated (i.e., determining whether the questions are interpreted the same way by all users), despite diverse healthcare settings and varying levels of experience and familiarity with TeamSTEPPS. The combination of these psychometric methods will allow internal and external validity and reliability to be assessed, creating a psychometrically sound instrument vetted for potential widespread adoption.
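The sketch below illustrates two analyses of the kind described above, known-groups testing and inter-item correlation, using simulated data. The group labels, sample sizes, and scores are hypothetical assumptions for illustration and do not come from the study.

```python
# Illustrative sketch only: known-groups testing and inter-item correlation
# on simulated data. Nothing here reflects actual survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Known-groups testing: do respondents familiar with TeamSTEPPS score
# differently on the scale than respondents who are not?
familiar = rng.normal(loc=4.1, scale=0.5, size=40)    # hypothetical mean scale scores
unfamiliar = rng.normal(loc=3.6, scale=0.5, size=40)
t_stat, p_value = stats.ttest_ind(familiar, unfamiliar)
print(f"known-groups t = {t_stat:.2f}, p = {p_value:.4f}")

# Inter-item correlation: how strongly do items intended to measure the
# same construct correlate with one another?
items = rng.integers(1, 6, size=(100, 4)).astype(float)  # 100 respondents x 4 items
print(np.corrcoef(items, rowvar=False).round(2))
```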


This information collection has the following goal:


  1. To determine the stability of the Team Assessment Tool instrument and its indicators in improving communication to reduce diagnostic errors, by quantitatively examining the correlation among responses of each indicator.


To achieve the goal of this project, the following information collection instruments will be completed as individual surveys:


  1. Setting Demographics Survey (Appendix A): Prior to testing of the instrument, each health system will take a brief survey to describe the characteristics of the sites engaged in pilot testing (e.g., size, diagnostic team member role diversity, and familiarity with patient safety and quality improvement activities).


  2. TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Appendix B): This is collected from individual survey respondents, who are diverse staff members in a diagnostic team. The consistency with which the questions are interpreted and answered among respondents will be evaluated to determine the stability among indicators on the instrument.


This information collection is being conducted by AHRQ through its contractor, MedStar Health Research Institute, pursuant to AHRQ’s statutory authority to conduct and support research on healthcare and on systems for the delivery of such care, including activities with respect to the quality, effectiveness, efficiency, appropriateness and value of healthcare services and with respect to quality measurement and improvement. 42 U.S.C. 299a(a)(1) and (2).

2. Purpose and Use of Information


AHRQ will use the information collected through this Information Collection Request to assess and enhance the feasibility of adopting a course to improve communication among providers related to diagnosis. AHRQ's ability to publicly share a Team Assessment Tool that has been scientifically validated is expected to be of great interest to the health care community and important in helping organizations prioritize improvement efforts.


The specific purpose of each of the information collection instruments is described below:


  1. Setting Demographics Survey (Appendix A) – This is collected only once per health system, at a maximum of nine health systems. It is designed to describe the characteristics of the sites engaged in pilot testing (e.g., size, diagnostic team member role diversity, and familiarity with patient safety and quality improvement activities).


  2. TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Appendix B) – This is collected from individual survey respondents, who are diverse staff members in a diagnostic team, from a maximum of nine health systems. There will be a minimum of 359 participants. The consistency with which the questions are interpreted and answered among respondents will be evaluated to determine the stability among indicators on the instrument.

The information collection instruments are designed to capture background site and team data (Appendix A) and quantitative data (Appendix B). The results of the statistical analyses will have population-level external validity and will be generalizable. Every attempt will be made to recruit sites that represent diverse geographic locations as well as diverse patient populations, including sites that serve AHRQ priority populations.

3. Use of Information Technology

The information collection described herein will rely on electronic data collection instruments in the form of a survey link sent to participants. The nine health systems participating in the evaluation will complete the Setting Demographics Survey (Appendix A). The same nine health systems will then disseminate a survey link created by the MedStar study team to their diagnostic team members to complete the TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Appendix B). The Team Assessment Tool survey will be completed anonymously, with no individual identifiers linking participants to responses, and only the MedStar study team will be privy to the responses. A health system-level identifier will link each batch of surveys (50 surveys per health system) to a particular health system, so that the minimum survey threshold per health system (an 80% completion rate, or 40 surveys) can be monitored and met, ensuring a robust sample of participants. Health systems may include more than 50 respondents in the administration of the survey; oversampling respondents strengthens the statistical power of the findings and accounts for natural attrition among respondents and health systems. The content of the responses on the Team Assessment Tool is not being examined; only the statistical correlation in how consistently participants answer the questions on the instrument will be analyzed. This will help determine whether the tool is indeed measuring what it is intended to measure.
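As a minimal illustration of how the per-system completion threshold described above could be monitored, the sketch below flags whether each health system has reached the 80% (40-of-50) mark. The system names and completion counts are invented for the example.

```python
# Illustrative sketch only: checking the 80% completion threshold (40 of 50
# distributed surveys) per health system. Names and counts are hypothetical.
TARGET_PER_SYSTEM = 50
THRESHOLD_RATE = 0.80  # 80% completion, i.e., 40 surveys

completed = {"System A": 47, "System B": 38, "System C": 52}

required = int(TARGET_PER_SYSTEM * THRESHOLD_RATE)
for system, n_done in completed.items():
    status = "meets" if n_done >= required else "below"
    print(f"{system}: {n_done} completed ({status} threshold of {required})")
```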

4. Efforts to Identify Duplication

To our knowledge, and as suggested by a background review of the field, this collection does not duplicate any existing efforts.

5. Involvement of Small Entities

The information being collected under this request will reflect a variety of health system settings in which the Team Assessment Tool will be used, including health systems that house urgent care centers, small community practices, and hospitals. The goal will be to disseminate the survey among health systems that have smaller diagnostic teams and larger diagnostic teams to ensure setting variation and a diverse distribution of respondents. To our knowledge, none of the practices volunteering to participate would be considered small businesses or small entities.

6. Consequences if Information Collected Less Frequently

This information collection is for a one-time data collection only. None of the information needed to conduct full psychometric testing of the Team Assessment Tool needs to be collected more than once.

7. Special Circumstances

This request is consistent with the general information collection guidelines of 5 CFR 1320.5(d)(2). No special circumstances apply.

8. Federal Register Notice and Outside Consultations

8.a. Federal Register Notice

This information collection is being submitted under AHRQ's generic clearance. A Federal Register notice is therefore not required. As required by 5 CFR 1320.8(d), notices were published in the Federal Register on February 9, 2022 (Volume 87, page 7454) for 60 days (see Appendix C) and again for 30 days. No substantive comments were received.



8.b. Outside Consultations

Not applicable.

9. Payments/Gifts to Respondents

Our information collection efforts will not offer direct payments or gifts to individual respondents. The organizations engaging in psychometric testing will be sub-contractors to the MedStar Health Research Institute and will receive a flat stipend for their efforts.

10. Assurance of Confidentiality

Individuals and organizations will be assured of the confidentiality of their replies under Section 944(c) of the Public Health Service Act.  42 U.S.C. 299c-3(c).  That law requires that information collected for research conducted or supported by AHRQ that identifies individuals or establishments be used only for the purpose for which it was supplied.


Information that can directly identify the respondent, such as name and/or social security number, will not be collected. No information will allow for individual identification of participants.


Participants will also receive the following confidentiality statements printed on any respondent materials: “The confidentiality of your responses are protected by Sections 944(c) and 308(d) of the Public Health Service Act [42 U.S.C. 299c-3(c) and 42 U.S.C. 242m(d)]. Information that could identify you will not be disclosed unless you have consented to that disclosure.”

Information collected will be maintained on a secure, HIPAA-compliant data server. All collected information will be stored using the contractor's (MedStar's) REDCap™ research data capture database. REDCap™ is a mature, secure web application for building and managing online information collection instruments and data. While REDCap™ can be used to collect virtually any type of data, it is specifically geared to support data capture for research studies. The REDCap™ Consortium is composed of 1,711 active institutional partners in 96 countries who utilize and support REDCap™ in various ways. REDCap™ can be established to support data entry forms and to conduct web-enabled surveys. The course project will also use a REDCap™ project space to securely store any documents received from the practices during the project. MedStar Health Research Institute is a REDCap™ project collaborator site with a robust history of using this method for data collection.

This ICR does not request any personally identifiable information.

This ICR does not include a form that requires a Privacy Act Statement.

Does this ICR contain surveys, censuses, or employ statistical methods? Yes

Does this ICR request any personally identifiable information (see OMB Circular No. A-130 for an explanation of this term)? Please consult with your agency's privacy program when making this determination. No

Does this ICR include a form that requires a Privacy Act Statement (see 5 U.S.C. § 552a(e)(3))? Please consult with your agency's privacy program when making this determination. No

11. Questions of a Sensitive Nature

The proposed information collection does not include any questions of a sensitive nature. There are no individual respondent identifiers on the survey, and questions pertain only to quality improvement, teamwork, team communication, and patient safety culture. Additionally, the responses on the surveys are not being evaluated; rather, the consistency with which the questions are answered is being evaluated (i.e., determining whether the questions are interpreted the same way by all users), despite diverse healthcare settings and varying levels of experience and familiarity with TeamSTEPPS.

12. Estimates of Annualized Burden Hours and Costs


Exhibit 1.  Estimated annualized burden hours


Form Name | Number of respondents | Number of responses per respondent | Hours per response | Total burden hours
Setting Demographics Survey (Appendix A) | 9 | 1 | 0.25 | 2.25
TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Appendix B) | 350 | 1 | 0.25 | 87.5
Total | 359 | | | 89.75

Exhibit 2. Estimated annualized cost burden


Form Name | Number of respondents | Total burden hours | Average hourly wage rate | Total cost burden
Setting Demographics Survey (Appendix A) | 9 | 2.25 | $57.12a | $128.52
TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Appendix B) | 265 | 66.25 | $103.06b | $6,827.73
TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Appendix B) | 85 | 21.25 | $15.50c | $329.38
Total | 359 | 89.75 | | $7,285.63


a Based on the mean wages for Medical and Health Services Managers (Code 11-9111)

b Based on the mean wages for Family Medicine Physicians (Code 29-1215)

c Based on the mean wages for Healthcare Support Occupations (Code 31-0000)

Occupational Employment Statistics, May 2020 National Occupational Employment and Wage Estimates United States, U.S. Department of Labor, Bureau of Labor Statistics.

https://www.bls.gov/oes/current/oes_nat.htm#b29-0000
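For transparency, the short sketch below reproduces the burden-hour and cost arithmetic underlying Exhibits 1 and 2, rounding each row's cost to the cent before summing, as the exhibits do. The row labels are shortened for readability; the figures are taken directly from the exhibits.

```python
# Illustrative sketch only: reproducing the Exhibit 1 and Exhibit 2 arithmetic.
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
rows = [
    # (form, respondents, hours per response, mean hourly wage)
    ("Setting Demographics Survey", 9, Decimal("0.25"), Decimal("57.12")),
    ("Team Assessment Tool (clinician respondents)", 265, Decimal("0.25"), Decimal("103.06")),
    ("Team Assessment Tool (support-staff respondents)", 85, Decimal("0.25"), Decimal("15.50")),
]

total_hours = Decimal("0")
total_cost = Decimal("0")
for form, n, hrs, wage in rows:
    burden_hours = n * hrs                                              # respondents x hours per response
    cost = (burden_hours * wage).quantize(CENT, rounding=ROUND_HALF_UP)  # round each row to the cent
    total_hours += burden_hours
    total_cost += cost
    print(f"{form}: {burden_hours} hours, ${cost}")

print(f"Total: {total_hours} hours, ${total_cost}")  # prints: Total: 89.75 hours, $7285.63
```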


13. Estimates of Annualized Respondent Capital and Maintenance Costs

There are no direct costs to respondents other than their time to participate in the study.

14. Estimates of Total and Annualized Cost to the Government

The total contractor cost to the government is estimated to be $202,738.56. As shown in Exhibit 3a, this amount includes costs for project development ($18,812.50), data collection activities ($93,152.31), data processing and analysis ($24,178.47), publication of results ($0), project management ($18,812.50), and overhead ($47,782.78).


Exhibit 3a.  Estimated Total and Annualized Cost

Cost Component | Total Cost | Annualized Cost
Project Development | $18,812.50 | $18,812.50
Data Collection Activities | $93,152.31 | $93,152.31
Data Processing and Analysis | $24,178.47 | $24,178.47
Publication of Results | $0 | $0
Project Management | $18,812.50 | $18,812.50
Overhead | $47,782.78 | $47,782.78
Total | $202,738.56 | $202,738.56

Exhibit 3b. Federal Government Personnel Cost

Activity | Federal Personnel | Hourly Rate | Estimated Hours | Cost
Project oversight to include data collection oversight and review of results | Project Officer GS15 | $81.84 | 25 | $2,046
Total | | | | $2,046

Annual salaries based on 2020 OPM Pay Schedule for Washington/DC area: https://www.opm.gov/policy-data-oversight/pay-leave/salaries-wages/salary-tables/pdf/2020/DCB_h.pdf

15. Changes in Hour Burden

This is a new information collection; thus, no changes in hour burden are expected or reported here.

16. Time Schedule, Publication and Analysis Plans

The information collection will begin upon OMB approval (estimated April 2022) and will include recruitment of practices and completion of all data collection activities by June 2022. Quantitative analysis for psychometric testing and findings will be completed by August 2022, and materials will be revised based on the data collected. We anticipate a 2-month survey timeline. Publication of the materials by AHRQ on its website will be completed after 508 compliance review. There will be publication efforts associated with the methods for psychometric testing and the statistical findings on whether the Team Assessment Tool instrument is a valid and reliable measurement scale. There will be no publications associated with respondent survey results. All information will be presented only in the aggregate, with no individual identifiers linking health systems or respondents to the survey.

17. Exemption for Display of Expiration Date

AHRQ does not seek this exemption.

List of Attachments:

Appendix A: Setting Demographics Survey (Participating health systems)

Appendix B: TeamSTEPPS® Team Assessment Tool for Improving Diagnosis (Survey Respondents)

Appendix C: Federal Register Notice

References

1. Leape LL, Berwick DM, Bates DW. Counting Deaths Due to Medical Errors—Reply. JAMA. 2002;288(19):2405. doi:10.1001/jama.288.19.2405-jlt1120-2-3

2. Newman-Toker DE, Pronovost PJ. Diagnostic errors: the next frontier for patient safety. JAMA. 2009;301(10):1060-1062. doi:10.1001/jama.2009.249

3. Saber Tehrani AS, Lee HW, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. doi:10.1136/bmjqs-2012-001550

4. Newman-Toker DE, Schaffer AC, Yu-Moe CW, et al. Serious misdiagnosis-related harms in malpractice claims: The "Big Three" – vascular events, infections, and cancers. Diagnosis. 2019;6(3):227-240. doi:10.1515/dx-2019-0019

5. CRICO Strategies. 2014 Annual Benchmarking Report: Malpractice Risks in the Diagnostic Process. Cambridge, MA: Harvard Medical Institutions, Inc.; 2015.

6. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. (Balogh EP, Miller BT, Ball JR, eds.). Washington, DC: National Academies Press; 2015. doi:10.17226/21794

7. Singh H, Giardina TD, Meyer AND, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173(6):418-425. doi:10.1001/jamainternmed.2013.2777




