SUPPORTING STATEMENT
Part B
Development and Evaluation of AHRQ’s Quality Indicators Improvement Toolkit
Version: May 19, 2010
Agency for Healthcare Research and Quality (AHRQ)
Table of contents
B. Collections of Information Employing Statistical Methods
1. Respondent universe and sampling methods
2. Information Collection Procedures
3. Methods to Maximize Response Rates
4. Tests of Procedures
5. Statistical Consultants
B. Collections of Information Employing Statistical Methods
The goals of the evaluation are to:
(1) Learn about the experiences of participating hospitals in implementing actions to achieve improvement on the AHRQ Quality Indicators (QIs),
(2) Obtain feedback and guidance from those hospitals on the usefulness and usability of the Toolkit, which will be used to improve the Toolkit, and
(3) Explore possible effects of the use of the Toolkit on hospitals’ ability to affect their rates for the AHRQ Inpatient Quality Indicators (IQIs) and Patient Safety Indicators (PSIs), within the limits of this formative research, to help inform the design of future studies intended to assess Toolkit effectiveness more rigorously once the Toolkit has been refined through the field testing performed in this study.
Reflecting those goals, group interviews will be conducted with respondents at the participating hospitals to collect qualitative data regarding the quality improvement implementation activities pursued by the hospitals, application of the Toolkit to support quality improvement, staff perceptions of the Toolkit and its implementation, and its effects on their work activities and experiences. The information collected will be used to revise the Toolkit and will also be reported as case studies describing hospitals’ experiences in implementing improvement processes while using the Toolkit. These case studies will not draw conclusions regarding the effectiveness of the Toolkit, because such an assessment would require a finalized Toolkit, a larger sample of users, and a comparison group of non-users sufficient to provide adequate statistical power. The hospitals themselves will use their preexisting data to calculate the AHRQ IQI and PSI measures both before and after implementation of the Toolkit.
The evaluation of the Toolkit will use case study research methods. These methods allow for in-depth exploration of the qualitative dynamics of the implementation processes of a limited number of hospitals, and they can generate detailed information for use in subsequent Toolkit revisions and refinements. Data will be collected using semi-structured interview protocols containing open-ended questions that will allow exploration of the multiple, interacting factors involved in hospitals’ use of the Toolkit for implementing quality improvements.
The information needed to meet the goals of this study could not be collected using a closed-ended questionnaire and standard survey methods. First, the goal of this study is not to obtain data that can be summarized over a representative sample of hospitals; rather, it is to learn from a small number of hospitals with a variety of characteristics, so that those lessons may be applied to establishing a viable Toolkit for use by a larger population of hospitals. Second, a closed-ended survey requires standardization of well-established topics with a well-defined set of appropriate response options for each topic. The specific, structured information needed to guide development of closed-ended survey items must first be developed through case studies of the type we propose to conduct.
1. Respondent universe and sampling methods
Six case study hospitals will be selected to participate in the Toolkit field test and evaluation. The six hospitals will be selected from among the membership of the University HealthSystem Consortium (UHC) and Trinity Health, which are participating in this study. UHC has 103 academic medical center members and an additional 217 affiliated hospitals; Trinity Health is a system of 44 hospitals that serve communities in seven states. These hospitals will be selected to vary in bed size, academic or community status, geographic region, urban or rural location, safety-net status, and previous experience in working with the QIs.
As another selection criterion, hospitals will be sought that have varied experience in conducting quality improvement activities. Thus, some of the hospitals will have significant quality improvement experience and will therefore be able to inform us about how the Toolkit materials compare with other resources they may have available. Other hospitals, in contrast, will be able to provide a more naïve view, and inform us about the use of the Toolkit in an environment that has few other resources available for conducting quality improvement activities. Combining the perspectives of these two types of hospitals in these case studies will be critical to helping ensure that the Toolkit meets the needs of a diverse range of U.S. hospitals.
The final selection criterion is that all the hospitals should have a commitment to making improvements to their performance. This criterion will enhance the likelihood that we will observe hospitals that indeed are taking actions for quality improvement, and therefore would provide useful information for the evaluation. Those with little or weak commitment are much less likely to make meaningful progress in implementation actions, including actual use of the Toolkit, with the result that such hospitals would not generate much useful information for the evaluation (i.e., absence of action would yield no experience or Toolkit use from which lessons could be drawn).
It is planned that four hospitals from UHC and two hospitals from Trinity Health will be selected to participate. Invitations will be sent to the member hospitals of UHC and Trinity, and the final set of six hospitals will be selected from among those that volunteer to participate.
The invitation letter will specify that the commitment of the hospitals will include agreement to participate in the evaluation data collection activities. Using this process, all of the hospitals that agree to participate in the field test and evaluation will be doing so voluntarily, and they will have agreed in advance to the actions specified as involved in their participation.
2. Information Collection Procedures
Four types of data will be collected in the evaluation:
• Descriptive data regarding each hospital’s decision making on improvement priorities and resulting quality improvement design and implementation, and how the Toolkit was used in this process. Data elements that will be collected include the decision process used to select specific interventions, implementation strategy and timeline, participants involved in decisions on priorities and action plan, and characteristics of the organizational environment of the hospital that could affect implementation.
• Experiential data regarding perceptions of stakeholders regarding the progress made in implementing the selected improvement strategy, factors contributing to successes or challenges, and effects on their work activities and experiences. These data will be collected from multiple stakeholders involved in or affected by each hospital’s quality improvement activities.
• Feedback on the usability of each Toolkit component, including tools to support application of the AHRQ QIs to the hospital’s data, identification of areas for improvement, leadership and decision making, design of implementation strategies for interventions, progress measurement, cost-effectiveness evaluation of the interventions, and monitoring of progress and outcomes. For each component, we will seek to identify revisions that could improve Toolkit usability.
• AHRQ QI measures for both the IQIs and PSIs. Measurements will be made by the hospitals themselves using the instructions and algorithms provided in the Toolkit. Pre- and post-implementation measures will be used to explore possible effects of the Toolkit on the QI rates. A pre-post change in a measure will be considered significant if the difference between the two rates is significant at the p < 0.05 level (one possible test is sketched following this list).
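The study documents do not prescribe a specific statistical test for the pre-post comparison of rates; the sketch below is a minimal illustration, assuming a conventional two-sample test of proportions and using hypothetical numerator and denominator counts. In practice, the choice of test and any risk adjustment would be determined by the project statistician.

    # Illustrative sketch only: one way a pre-post change in a QI rate could be
    # tested, assuming a two-sample z-test of proportions. The counts below are
    # hypothetical and do not come from the evaluation.
    from math import sqrt, erf

    def two_proportion_z_test(events_pre, at_risk_pre, events_post, at_risk_post):
        """Return the z statistic and two-sided p-value for the difference in rates."""
        rate_pre = events_pre / at_risk_pre
        rate_post = events_post / at_risk_post
        pooled = (events_pre + events_post) / (at_risk_pre + at_risk_post)  # pooled rate under H0
        se = sqrt(pooled * (1 - pooled) * (1 / at_risk_pre + 1 / at_risk_post))
        z = (rate_post - rate_pre) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail probability
        return z, p_value

    # Hypothetical example: 40 events among 2,500 at-risk discharges pre-implementation
    # versus 24 events among 2,600 at-risk discharges post-implementation.
    z, p = two_proportion_z_test(40, 2500, 24, 2600)
    print(f"z = {z:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")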
a. Information Collection Schedule
Primary data collection will be conducted at five times during the field-test year when the hospitals are implementing their quality improvement activities and using the Toolkit. Through this approach, it will be possible to capture changes in each hospital’s implementation status and experiences over time, as well as to document which tools in the Toolkit they used at different phases of their quality improvement work.
1) The first data collection will be performed through one-hour group interviews conducted with the six hospitals’ implementation teams at the start of the implementation year, as the hospitals are starting to implement improvements using the Toolkit. These interviews, which will be conducted by teleconference, will capture information on expectations and early decision processes, as well as feedback on how they used the Toolkit in their priority-setting and early implementation processes.
2-4) The next three data collection points will be three quarterly update interviews conducted with the lead of each hospital’s implementation team, via teleconference. These interviews will focus on subsequent implementation progress and experiences over the course of the evaluation year, including information on which tools they used during each time period being addressed.
5) The last data collection will be performed at the end of the implementation year, again using group interviews and a semi-structured interview protocol. These interviews will focus on gaining a retrospective perspective on the hospitals’ implementation experiences and obtaining final feedback on the Toolkit.
• For three hospitals, data will be collected during in-person site visits to the hospitals, at which separate group and individual interviews will be conducted with the implementation team and with other stakeholder groups, including management staff, physicians, nurses, and other frontline staff. This in-depth examination of varying views and experiences of multiple stakeholder groups will provide a broad perspective on the Toolkit implementation experiences.
• For the other three hospitals, which will not have site visits, we will conduct group and individual interviews with the hospital implementation teams via teleconference, using the same semi-structured interview protocol used for the site visit interviews.
Data on rates of the QIs will be obtained from the hospitals at the start and end of the field test. They will be asked to provide the rates they have calculated for all of the QIs, as well as the numerators and denominators used to calculate those rates.
b. Design and Use of Interview Protocols
Three semi-structured interview protocols have been developed that will be used to conduct the interviews, which will enable collection of data that covers the same topics and issues for all the participating hospitals. The table below lists these protocols and identifies when each of them will be used for data collection. It also lists the AHRQ QI data collection tool and shows when it will be used to collect effects data.
                             |                     | Quarterly Update Interviews |
Interview Protocol           | Start of Field Test | Qtr 1   | Qtr 2   | Qtr 3   | End of Field Test
Pre/Post Interview Protocol  | X                   |         |         |         | X
Quarterly Update Protocol    |                     | X       | X       | X       |
Toolkit Usability Protocol   | X                   |         |         |         | X
AHRQ QI Data Collection Tool | X                   |         |         |         | X
Design of the interview protocols. The contents of the pre/post interview protocol and quarterly update protocol were developed based on a formal conceptual framework that identifies the key elements of the system in which the Toolkit and related quality improvement interventions are implemented. For example, key system elements include the implementation process itself, the function of the implementation team, perceptions and actions of other directly involved staff, interactions with other hospital units or departments, role of patients and families, executive leadership involvement and support, organizational philosophy and capacity, and the external environment.
Questions in the interview protocols address each of these components. For each topic, stakeholders will be asked about their expectations, their perceptions of progress, and how changes have affected the interviewee. The data collected will be organized using a grid designed to facilitate comparison of perceptions across stakeholders during analysis of results.
The Toolkit usability protocol was developed for this study, and it is designed to collect feedback from hospitals on the usability of each of the tools in the Toolkit. The topics covered are those that have been determined to be important dimensions of usability and that are included in standard instruments for usability testing.
The QI data collection tool is a template table that each hospital will populate by completing the items in the table, modifying its contents based on which QIs the hospital is addressing with its quality improvement efforts. The source data for the information to be provided are the discharge records for the time period(s) of interest for each hospital. It is anticipated that the hospitals will use the AHRQ software to calculate the IQIs and PSIs to generate these data on rates. A row is included for each QI, in which the hospital will enter the number of events that occurred (numerator), the number of patients at risk (denominator), and the rate calculated from these two numbers.
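As a simple illustration of the arithmetic behind this template, the sketch below computes a rate from a numerator and denominator for each indicator. The indicator labels and counts are hypothetical placeholders; in practice the numerators, denominators, and rates would come from the AHRQ QI software applied to the hospital’s discharge records.

    # Illustrative sketch only: assembling the QI data collection table from
    # numerator and denominator counts. The indicator names and counts below
    # are hypothetical placeholders, not actual hospital data.
    qi_counts = {
        "Hypothetical PSI measure": {"numerator": 12, "denominator": 4800},
        "Hypothetical IQI measure": {"numerator": 30, "denominator": 1500},
    }

    print(f"{'Indicator':<28}{'Numerator':>12}{'Denominator':>14}{'Rate':>10}")
    for indicator, counts in qi_counts.items():
        rate = counts["numerator"] / counts["denominator"]  # events per patient at risk
        print(f"{indicator:<28}{counts['numerator']:>12}{counts['denominator']:>14}{rate:>10.4f}")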
Collection of data using the protocols. Consistent methods will be used to capture and record the data obtained during each type of interview described above. Each interview will be conducted by a team of two people: a researcher with extensive experience in conducting this type of interview and another staff person who will take notes during the interview. For open-ended questions, the note taker will record detailed information provided by the respondents as the discussion proceeds. The more structured questions will be presented to the interviewee(s) in a separate survey form, which they will be asked to complete before the interview; the interview itself will then address the qualitative, open-ended questions.
All of the interviews will be collected in compliance with the requirements of the RAND IRB, and the privacy of the interview data will be protected in accordance with the data safeguarding plan for the project, as approved by the RAND IRB.
Following each interview, the note taker will finalize the notes by populating a blank interview protocol with the responses related to each question on the protocol. With the information organized in the same way for all interviews, the analysis of results will readily support comparisons on each question, both across hospitals and across interviews over time for each hospital. Responses will be searched for common themes in implementation experiences, as well as variations across the six hospitals. This information also will be used to prepare case studies on implementation experiences and lessons, which are to be included in the Toolkit as resources for other hospitals.
Data collected on the usability of the Toolkit will consist of responses to both closed-ended and open-ended questions about each tool in the alpha Toolkit. The closed-ended data will be tabulated to generate a distribution of responses across the hospitals for each tool (a simple tabulation of this kind is sketched below). The open-ended (qualitative) data will be summarized in a table format so that feedback from all six hospitals is presented together for each tool. Using these results, a set of recommendations will be prepared regarding revisions that should be made to the Toolkit.
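The sketch below illustrates one simple way such a tabulation of closed-ended responses could be produced. The tool names, response options, and ratings are hypothetical placeholders rather than actual evaluation data.

    # Illustrative sketch only: tabulating closed-ended usability ratings across
    # hospitals for each tool. Tool names, response options, and ratings are hypothetical.
    from collections import Counter

    responses = {
        "Hypothetical tool A": ["very usable", "usable", "usable", "not usable", "usable", "very usable"],
        "Hypothetical tool B": ["usable", "usable", "very usable", "usable", "not usable", "usable"],
    }

    for tool, ratings in responses.items():  # one rating per hospital (six hospitals)
        distribution = Counter(ratings)      # frequency of each response option
        print(tool, dict(distribution))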
c. Provisions for Data Quality Control
Qualifications of individuals collecting the information. The group interviews will be conducted by RAND researchers who have extensive experience with evaluation methods and the conduct of individual and group interviews and focus groups. This experience includes interviews as part of an evaluation of teamwork improvement in hospital labor and delivery units, an evaluation of a quality improvement training program, and evaluation of AHRQ’s patient safety initiative. These researchers also have worked together in previous projects, so transfer of the methods to this evaluation will be smooth.
Scheduling data collection with participating hospitals. The RAND evaluation team will work with a contact person at each participating hospital to plan for and schedule each interview involved in the evaluation. Once a hospital has agreed to participate, an interview schedule will be developed for that hospital in collaboration with its contact person. RAND will confirm this schedule via a follow-up email communication to the contact person. The schedule will be revised as necessary as the evaluation proceeds, through mutual agreement by RAND and the hospital contact person.
Procedures for data quality control. The interview protocols are designed based on a conceptual framework, and they are structured to achieve consistency and completeness in the data collected from the interviews. The interviews will be conducted by a small number of researchers (two or three), which will yield consistency in data collection methods and synthesis.
As a RAND researcher conducts an interview, a second RAND staff member will take notes. Within 2 days after an interview is completed, the note taker will finalize the notes by refining the narrative and placing each response item obtained under the relevant questions. This person also will identify any inadequacies in the data completeness, which will be addressed by the interviewer in subsequent contacts with the hospital contact person.
The researcher who conducts each interview will review the completed interview notes for accuracy of the data content, appropriate assignment to relevant questions, and completeness. The data analysis will aggregate the responses related to each question and will examine similarities and differences in the responses across hospitals. Because these interviews will be collecting qualitative data, there will not be an issue of missing quantitative data that would require imputation.
3. Methods to Maximize Response Rates
The planned approach for recruiting the six hospitals for participation in the project includes a step in which hospitals are introduced to the project, informed about what will be involved if they choose to participate, and offered an opportunity to volunteer for participation. Because the hospitals are volunteering, and they know in advance the data collection interviews that will be involved in the evaluation, it is expected that we will have close to 100% participation by teams at participating hospitals. The introductory materials for the interviews emphasize that this is a formative evaluation being done to learn from their experiences and to apply those lessons to improving the Toolkit being developed for hospital use. In addition, the hospitals will receive implementation guidance from UHC as part of the implementation process, which is another incentive for their active participation.
Based on experience with previous evaluations, we anticipate that the hospitals will participate willingly in the interviews, and they will share their experiences candidly. We have received feedback from those involved in previous evaluations that the interview process is educational for them, giving them an opportunity to think through past experiences to inform future actions.
4. Tests of Procedures
The data collection procedures and instruments are virtually identical to ones used previously with satisfactory results. Therefore, it is anticipated that valid and usable data will be collected using them in this evaluation.
5. Statistical Consultants
Dr. Amelia Haviland of RAND, (412) 683-2300, is the statistician for the project in which the evaluation interviews will be conducted. She will serve as the statistical consultant for the evaluation.