OMB: 0935-0164



SUPPORTING STATEMENT


Part B







Updating and Expanding the AHRQ QI Toolkit for Hospitals





Version: February 25, 2014









Agency for Healthcare Research and Quality (AHRQ)



Table of Contents


B. Collections of Information Employing Statistical Methods

1. Respondent universe and sampling methods

2. Information Collection Procedures

3. Methods to Maximize Response Rates

4. Tests of Procedures

5. Statistical Consultants


B. Collections of Information Employing Statistical Methods

The goals of the evaluation are to:

  1. Assess the usability of the updated Toolkit for hospitals, the findings of which will be used to improve the Toolkit, and

  2. Examine hospitals’ experiences in implementing interventions to improve their performance on the AHRQ QIs, which will be used to guide successful future applications of the Toolkit.


Reflecting these goals, group interviews will be conducted with respondents at the participating hospitals to collect qualitative data regarding the quality improvement implementation activities pursued by the hospitals, application of the updated Toolkit to support quality improvement, staff perceptions of the updated Toolkit and its implementation, and its effects on their work activities and experiences. The information collected will be used to revise the Toolkit, and will also be reported as case studies describing hospitals’ experiences in implementing improvement processes while using the Toolkit. OMB approval was previously obtained for the development and evaluation of the original Toolkit, which used a protocol very similar to the one described in this statement.


The evaluation of the Toolkit will use case study research methods. These methods allow for in-depth exploration of the qualitative dynamics of the implementation processes of a limited number of hospitals, and they can generate detailed information for use in subsequent Toolkit revisions and refinements. Data will be collected using semi-structured interview protocols containing open-ended questions that will allow exploration of the multiple, interacting factors involved in hospitals’ use of the Toolkit for implementing quality improvements.


The information needed to meet the goals of this study could not be collected using a closed-ended questionnaire and standard survey methods. First, the goal of this study is not to obtain data that can be summarized over a representative sample of hospitals; rather, it is to learn from a small number of hospitals with a variety of characteristics, so that those lessons can be applied to establish a viable Toolkit for use by the larger population of hospitals. Second, a closed-ended survey requires standardization of well-established topics, with a well-defined set of appropriate response options for each topic. The specific, structured information required to develop such closed-ended survey items must first be generated through case studies of the type we propose to conduct.


1. Respondent universe and sampling methods

The hospitals will be selected from among the membership of UHC (University HealthSystem Consortium), which has partnered with RAND for this study. UHC has 120 academic medical center members and an additional 299 affiliated hospitals. UHC member hospitals offer an excellent testing ground for this work because of their diverse characteristics, ranging from large academic medical centers to a variety of community hospitals. Because the PDIs have not previously been evaluated, the evaluation will focus on hospitals that serve pediatric populations. Thus, all of the case study hospitals will provide pediatric care. The hospitals will represent the full spectrum of pediatric care, ranging from highly specialized (e.g., freestanding children’s hospitals) to general care (e.g., community hospitals with some general pediatric care). UHC has members in all three of these categories.

Hospitals that serve only children (i.e. freestanding children’s hospitals) will be asked to select two PDIs with which to work. Hospitals that serve both children and adults (i.e. children’s hospitals within a hospital and general hospitals with some pediatric care) will be asked to select one PDI for use in pediatric patients as well as one PSI or IQI for use in adult patients.

Sample                    Size
UHC membership            120 academic medical centers
                          299 affiliated hospitals
Case study hospitals      6 hospitals that provide pediatric care, with or without adult care:
                          • 2 free-standing children’s hospitals
                          • 2 children’s hospitals within a hospital
                          • 2 general hospitals that provide pediatric care



As another selection criterion, hospitals will be sought that have varied experience in conducting quality improvement activities. Some of the hospitals will have significant quality improvement experience and will therefore be able to inform us about how the Toolkit materials compare with other resources they may have available. Other hospitals, in contrast, will be able to provide a more naïve view and inform us about the use of the Toolkit in an environment that has few other resources available for conducting quality improvement activities. Combining the perspectives of these two types of hospitals in these case studies will be critical to helping ensure that the revised Toolkit meets the needs of a diverse range of U.S. hospitals. To further narrow down the potential participants, hospitals’ performance on the QIs can be assessed using UHC’s clinical database (CDB) so that both high and low performers can be identified (as all participating hospitals will be UHC members). Having a variety of performance levels in the program will provide additional depth of experience in assessing the use of the Toolkit.
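
To illustrate this selection step, the following is a minimal sketch of how high and low performers might be flagged from hospital-level QI rates. The file name, column layout, and quartile cutoffs are hypothetical assumptions for illustration only; the actual rates would be drawn from UHC’s CDB.

    # Illustrative sketch only: flag high and low performers on each QI.
    # The file "qi_rates.csv" and its columns (hospital_id, indicator, rate)
    # are hypothetical; actual rates would come from UHC's clinical database.
    import pandas as pd

    rates = pd.read_csv("qi_rates.csv")

    def flag_performance(group):
        # Lower adverse-event rates are better, so the bottom quartile of
        # rates marks high performers and the top quartile low performers.
        q1, q3 = group["rate"].quantile([0.25, 0.75])
        group = group.copy()
        group["performance"] = "middle"
        group.loc[group["rate"] <= q1, "performance"] = "high"
        group.loc[group["rate"] >= q3, "performance"] = "low"
        return group

    flagged = rates.groupby("indicator", group_keys=False).apply(flag_performance)
    print(flagged.sort_values(["indicator", "rate"]))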

The final selection criterion is that all the hospitals should have a commitment to making improvements to their performance. This criterion will enhance the likelihood that we will observe hospitals that are indeed taking actions for quality improvement, and therefore would provide useful information for the evaluation. Those with little or weak commitment are much less likely to make meaningful progress in implementation actions, including actual use of the Toolkit, with the result that such hospitals would not generate much useful information for the evaluation (i.e., absence of action would yield no experience or Toolkit use from which learning could be done).

Invitations will be sent to the six evaluation hospitals to engage more closely in the detailed testing of the Toolkit. Our proposed approach to recruiting UHC hospitals was employed successfully in testing of the original Toolkit. We expect a 100% response rate, based on the nearly 100% response rate observed in our previous work with the original Toolkit (one hospital dropped out before the evaluation began but was quickly replaced).

The invitation letter will specify that the commitment of the hospitals will include agreement to participate in the evaluation data collection activities. Using this process, all of the hospitals that agree to participate in the field test and evaluation will be doing so voluntarily, and they will have agreed in advance to the actions specified as involved in their participation. No payments or gifts will be provided to the hospitals participating in the evaluation. However, it is expected that they will benefit from the quality improvement activities they will undertake as part of field testing the Toolkit by increasing their capacity for quality improvement activities and improving the quality of care provided to patients.


2. Information Collection Procedures

Four types of data will be collected in the evaluation:

  • Descriptive data regarding each hospital’s decision making on improvement priorities and the resulting quality improvement design and implementation, and how the Toolkit was used in this process. Data elements to be collected include the decision process used to select specific interventions, the implementation strategy and timeline, the participants involved in decisions on priorities and the action plan, the effect of the change from ICD-9 to ICD-10 codes, and characteristics of the hospital’s organizational environment that could affect implementation.

  • Experiential data reflecting perceptions of stakeholders regarding the progress made in implementing the selected improvement strategy, factors contributing to successes or challenges, and effects on their work activities and experiences. These data will be collected from multiple stakeholders involved in or affected by each hospital’s quality improvement activities.

  • Feedback on the usability of each Toolkit component, with a particular focus on new components of the updated Toolkit, including tools to support application of the AHRQ QIs to the hospital’s data, identification of areas for improvement, leadership and decision making, design of implementation strategies for interventions, progress measurement, cost-effectiveness evaluation of the interventions, and monitoring of progress and outcomes. For each component, we will seek to identify revisions that could improve Toolkit usability.

  • Effects data on each hospital’s performance on its selected AHRQ QIs, collected using the AHRQ QI data collection tool described in section b below.


a. Information Collection Schedule

Primary data collection will be conducted at five points during the field-test year, when the hospitals are implementing their quality improvement activities and using the Toolkit. Through this approach, it will be possible to capture changes in each hospital’s implementation status and experiences over time, as well as to document which tools in the Toolkit they used at different phases of their quality improvement work.

1) The first data collection will be performed through one-hour group interviews conducted with the six hospitals’ implementation teams at the start of the implementation year, as the hospitals are starting to implement improvements using the Toolkit. These group interviews, which will be conducted by teleconference, will capture information on expectations and early decision processes, as well as feedback on how the hospitals used the Toolkit in their priority-setting and early implementation processes. Stakeholders to be interviewed will include both management and the implementation team.

2-4) The next three data collection points will be three quarterly update interviews conducted with the lead of each hospital’s implementation team, via teleconference. These interviews will focus on subsequent implementation progress and experiences over the course of the evaluation year, including information on which tools they used during each time period being addressed.

5) The last data collection will be performed at the end of the implementation year, again using group interviews and a semi-structured interview protocol. These interviews will focus on gaining a retrospective perspective on the hospitals’ implementation experiences and obtaining final feedback on the Toolkit. Data will be collected during in-person site visits to the hospitals, at which separate group and individual interviews will be conducted with the implementation team and with other stakeholder groups, including management staff, physicians, nurses, and other frontline staff. This in-depth examination of varying views and experiences of multiple stakeholder groups will provide a broad perspective on the Toolkit implementation experiences.


Data collection is scheduled to start in conjunction with implementation of the revised Toolkit. Both the implementation and the evaluation have been timed to minimize the possible effects of the change from ICD-9 to ICD-10 coding that is scheduled to take effect in October 2014. During the period just before and after this coding change, hospitals’ implementation teams may be occupied with adapting to it. Thus, the implementation and evaluation are scheduled to begin six months after the change from ICD-9 to ICD-10, in April 2015. Information about the impact of this change will be collected during the pre/post interviews, as well as the quarterly update interviews.


b. Design and Use of Interview Protocols

Three semi-structured interview protocols have been developed for conducting the interviews, which will enable collection of data covering the same topics and issues for all of the participating hospitals. The table below lists these protocols and identifies when each of them will be used for data collection. It also lists the AHRQ QI data collection tool and shows when it will be used to collect effects data.



                              Start of      Quarterly Update Interviews       End of
Interview Protocol            Field Test    Qtr 1      Qtr 2      Qtr 3       Field Test
Pre/Post Interview Protocol   X                                               X
Quarterly Update Protocol                   X          X          X
Toolkit Usability Protocol    X                                               X

Design of the interview protocols. The protocols and data collection tool were developed and used successfully for the evaluation of the original Toolkit, and they have been adapted for use with the updated Toolkit.


The contents of the pre/post interview protocol (Attachment B) and the quarterly update protocol (Attachment C) were originally developed based on a formal conceptual framework that identifies the key elements of the system in which the Toolkit and related quality improvement interventions are implemented. For example, key system elements include the implementation process itself, the function of the implementation team, perceptions and actions of other directly involved staff, interactions with other hospital units or departments, the role of patients and families, executive leadership involvement and support, organizational philosophy and capacity, and the external environment. Questions in the interview protocols address each of these components. These protocols were used successfully in our evaluation of the original Toolkit and have been updated as appropriate for this evaluation. For each topic, stakeholders will be asked about their expectations, their perceptions of progress, and how changes have affected them. The data collected will be organized using a grid designed to facilitate comparison of perceptions across stakeholders during analysis of results.


The Toolkit usability protocol (Attachment D) was developed for the original Toolkit, and is designed to collect feedback from hospitals on the usability of each of the tools in the Toolkit. The topics covered are those that have been determined to be important dimensions of usability, and are included in standard instruments for usability testing.


Collection and analysis of data using the protocols. Consistent methods will be used to capture and record the data obtained during each type of interview, as described above. Each interview will be conducted by a team of two people: a researcher with extensive experience in conducting this type of interview and a second staff person who will take notes during the interview. For open-ended questions, the note taker will record detailed information provided by the respondents as the discussion proceeds. For the more structured questions, both the interviewer and the note taker will record responses on whatever scales are used (e.g., yes/no, 5-point scales). All interviews will be audio recorded to ensure that the researchers do not miss any comments. Interviewees will be informed of this and will be able to opt out of being recorded while still participating. All qualitative evaluation data will be coded in ATLAS.ti software for analysis.


All interview data will be collected in compliance with the requirements of the RAND IRB, and the privacy of the interview data will be protected in accordance with the project’s data safeguarding plan, as submitted to the RAND IRB, which granted the project an exemption (Attachment G).


Following each interview, the note taker will finalize the notes by populating a blank interview protocol with the responses related to each question on the protocol. With the information organized in the same way for all interviews, the analysis will be able to readily make comparisons on each question, both across hospitals and across interviews over time for each hospital. Responses will be searched for common themes in implementation experiences, as well as variations across the six hospitals. This information will also be used to prepare case studies on implementation experiences and lessons, which are to be included in the Toolkit as resources for other hospitals.
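
As a rough sketch of how the finalized notes could be arranged for these comparisons, the example below assumes the notes have been reduced to one summary per hospital, interview wave, and protocol question. All names and summaries shown are hypothetical, and the actual qualitative coding will be done in ATLAS.ti.

    # Illustrative sketch only: arrange finalized interview notes so each
    # protocol question can be compared across hospitals and over time.
    import pandas as pd

    notes = pd.DataFrame([
        {"hospital": "A", "wave": "start", "question": "Expectations", "summary": "Strong leadership buy-in"},
        {"hospital": "A", "wave": "end",   "question": "Expectations", "summary": "Buy-in sustained"},
        {"hospital": "B", "wave": "start", "question": "Expectations", "summary": "Team still forming"},
        {"hospital": "B", "wave": "end",   "question": "Expectations", "summary": "Team fully staffed"},
    ])

    # One row per question and wave, one column per hospital, so responses
    # can be read across hospitals and down through time.
    grid = notes.pivot_table(index=["question", "wave"], columns="hospital",
                             values="summary", aggfunc="first")
    print(grid)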


Data collected on the usability of the Toolkit will consist of responses to both closed-ended and open-ended questions about each tool in the alpha Toolkit. The closed-ended data will be tabulated to generate a distribution of responses across the hospitals for each tool. The open-ended (qualitative) data will be summarized in a table format so that feedback from all six hospitals is presented together for each tool, as well as by pediatric specialty hospital versus general hospital for the PDIs. Using these results, a set of recommendations will be prepared regarding revisions that should be made to the Toolkit.
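
The tabulation of the closed-ended usability data could be as simple as the following sketch, which counts ratings across the six hospitals for each tool. The 5-point scale and all values shown are assumptions for illustration only.

    # Illustrative sketch only: distribution of closed-ended usability
    # ratings (1 = not usable, 5 = very usable) across hospitals per tool.
    import pandas as pd

    ratings = pd.DataFrame({
        "tool":     ["Tool 1"] * 6 + ["Tool 2"] * 6,
        "hospital": list("ABCDEF") * 2,
        "rating":   [5, 4, 4, 3, 5, 4, 2, 3, 3, 4, 2, 3],
    })

    # Rows are tools, columns are rating values, cells are hospital counts.
    distribution = pd.crosstab(ratings["tool"], ratings["rating"])
    print(distribution)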


c. Provisions for Data Quality Control

Qualifications of individuals collecting the information. The group interviews will be conducted by RAND researchers who have extensive experience with evaluation methods as well as the conduct of individual and group interviews and focus groups. In addition to having evaluated the original Toolkit, members of RAND’s team have experience that includes interviews as part of an evaluation of teamwork improvement in hospital labor and delivery units, an evaluation of a quality improvement training program, and an evaluation of AHRQ’s patient safety initiative. Most of the researchers also have worked together on previous projects, so transfer of the methods to this evaluation will be smooth.


Scheduling data collection with participating hospitals. The RAND evaluation team will work with a contact person at each participating hospital to plan for and schedule each interview involved in the evaluation. Once a hospital has agreed to participate, an interview schedule will be developed for that hospital in collaboration with its contact person. RAND will confirm this schedule via a follow-up email communication to the contact person. The schedule will be revised as necessary as the evaluation proceeds, through mutual agreement by RAND and the hospital contact person.


Procedures for data quality control. The interview protocols are based on a conceptual framework and are structured to achieve consistency and completeness in the data collected from the interviews. The interviews will be conducted by a small number of researchers (two or three), which will promote consistency in data collection methods and synthesis.


As a RAND researcher conducts an interview, a second RAND staff member will take notes. Within 2 days after an interview is completed, the note taker will finalize the notes by refining the narrative and placing each response item obtained under the relevant questions. This person also will identify any inadequacies in the data completeness, which will be addressed by the interviewer in subsequent contacts with the hospital contact person.


The researcher who conducts each interview will review the completed interview notes for accuracy of the data content, appropriate assignment to relevant questions, and completeness. The data analysis will aggregate the responses related to each question and will examine similarities and differences in the responses across hospitals. Because these interviews will be collecting qualitative data, there will not be an issue of missing quantitative data that would require imputation.



3. Methods to Maximize Response Rates

The six hospitals will be recruited from a pool of 20 hospitals already selected for implementation by UHC. Thus, the hospitals will have already been introduced to the project, informed about what will be involved if they choose to participate, and offered an opportunity to volunteer. Because the hospitals are volunteering, and they know in advance the data collection interviews that will be involved in the evaluation, we expect close to 100% participation by teams at participating hospitals. The introductory materials for the interviews emphasize that this is a formative evaluation being done to learn from their experiences and to apply those lessons to improving the Toolkit being developed for hospital use. Based on experience with previous evaluations, we anticipate that the hospitals will participate willingly in the interviews and share their experiences candidly. We have received feedback from participants in previous evaluations that the interview process is educational for them, giving them an opportunity to think through past experiences to inform future actions.

Of note, this team conducted a virtually identical study in recent years when evaluating the use of the original Toolkit, and an excellent response rate of essentially 100% was observed (one hospital dropped out before the evaluation began but was quickly replaced).


4. Tests of Procedures

The data collection procedures and instruments are virtually identical to ones used previously with satisfactory results. Therefore, it is anticipated that valid and usable data will be collected using them in this evaluation.


5. Statistical Consultants

Given that this project is largely qualitative, the statistical analysis required is minimal. We therefore have not consulted a statistical expert on the design of this project. Collection and analysis of the information will be carried out by the RAND Corporation (tel: 617-338-2059). Specific team members expected to be involved with data collection and analysis are Rachel Burns (ext. 4436), Dr. Donna Farley (ext. 4266), Dr. Courtney Gidengil (ext. 8637), Dr. Peter Hussey (ext. 8617), and Dr. Kerry Reynolds (ext. 4721).
