SUPPORTING STATEMENT
Part B
Assessing the Impact of the Patient Safety Improvement Corps (PSIC) Training Program
Version October 28, 2008
Agency for Healthcare Research and Quality (AHRQ)
Table of Contents
B. Collections of Information Employing Statistical Methods
1. Respondent universe and sampling methods
2. Information Collection Procedures
The training participant and leadership Web-based questionnaires will be distributed to all individuals who have participated in the PSIC training program since its inception in 2003 and to all CEOs or their equivalent from the participating organizations, respectively. Currently, there are approximately 300 training participants and 75 CEOs. No sampling strategy will be employed, as AHRQ seeks to obtain feedback from everyone who has participated in the program (i.e., a census survey). A 75% response rate or higher is anticipated based on the positive relationship established between AHRQ, NCPS, the training participants, and their organizations.
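For illustration, the minimum yield implied by these figures can be computed directly. The sketch below (Python) simply applies the anticipated 75 percent rate to the approximate population counts cited above; the counts are approximations, not exact rosters.

```python
# Minimal sketch: expected minimum questionnaire yield under the anticipated
# 75% response rate. Population counts are the approximate figures cited above.
PARTICIPANTS = 300    # PSIC training participants since 2003
CEOS = 75             # CEOs (or equivalent) of participating organizations
RESPONSE_RATE = 0.75  # anticipated minimum response rate

print(f"Expected participant responses: >= {round(PARTICIPANTS * RESPONSE_RATE)}")  # 225
print(f"Expected CEO responses: >= {round(CEOS * RESPONSE_RATE)}")                  # 56
```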
Because response rates for any questionnaire can vary widely, our goal is to attain a response rate sufficient for descriptive analyses that identify basic trends and patterns in the data. Best practice and the extant literature (Barclay, Todd, Finlay, Grande, & Wyatt, 2002)1 suggest that an acceptable response rate for post-hoc training evaluation questionnaires is approximately 40 percent of the target population (e.g., training participants). We nonetheless estimate a response rate of 75 percent or higher, based on the interest and enthusiasm in patient safety activities expressed by many PSIC training participants, as informally reported to AIR by AHRQ and NCPS.
Despite this perceived level of interest, we still anticipate the need to bolster participation in the questionnaires. To do so, we will begin with an initial e-mail inviting participants to complete the questionnaire and will follow up with reminder notices2 at periodic intervals during the questionnaire response period, as sketched below. Our goal remains to obtain the highest response rate achievable by reminding potential respondents of the opportunity.
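The sketch below illustrates one way such a staggered reminder schedule could be generated. The specific offsets (days 10, 20, and 27 of the 30-day response window described later) and the launch date are illustrative assumptions, not intervals specified by the study protocol.

```python
from datetime import date, timedelta

# Sketch of a staggered reminder schedule across the 30-day response window.
# The offsets (days 10, 20, and 27 after launch) are illustrative assumptions,
# not intervals specified by the study protocol.
def reminder_schedule(launch, offsets=(10, 20, 27)):
    """Return the dates on which reminder e-mails would be sent."""
    return [launch + timedelta(days=d) for d in offsets]

for send_date in reminder_schedule(date(2008, 11, 10)):  # hypothetical launch date
    print(send_date.isoformat())
```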
In addition, qualitative interviews will be conducted in six states that had teams participating in the PSIC program. A two-stage selection process will be used: first, states will be selected; second, individuals and the organizations to which they belong will be selected. The selection of the six states relied on data from previous evaluation efforts, publicly available information on state efforts, and anecdotal information from AHRQ, NCPS, or other relevant stakeholders. The objective was to select states so as to increase diversity with respect to the following factors: perceived level of success at disseminating PSIC material post-training; type of team projects conducted during the PSIC program and afterwards (if known); size of training teams; leadership and composition of teams; years in which the state participated in the training; the state's history of patient safety work prior to training; and geographic location. Applying these selection criteria maximizes the variety of experiences used to define a set of lessons learned from the selected states and their respective organizations. The preliminary states selected to participate in the qualitative interviews are Arizona, Florida, Georgia, Illinois, Maryland, and New Jersey.
The second step of the selection process involves identifying the individuals to be interviewed in each state. Interviewees for the semi-structured qualitative interviews will be drawn from qualified individuals serving in a variety of roles (i.e., policy maker, trainer or facilitator, front-line implementer) among both PSIC trainees and non-trainees, yielding at least six respondent groups defined by role and PSIC training experience. Additional roles may be identified as more information is collected in preparation for the semi-structured interviews. We will interview up to nine individuals in each of the six respondent groups across the six selected states, for a maximum of 54 individuals; if additional roles are identified, the maximum will still be held at 54. The number of individuals interviewed per state will depend on the number and type of organizations and individuals involved, as well as their degree of participation in the PSIC program, but will generally range from four to eight individuals per state.
Interviewees will include individuals from organizations participating in PSIC as well as individuals from other organizations affected by patient safety initiatives that may have resulted from involvement with the PSIC program. These individuals will be identified on a case-by-case basis using the list of PSIC participants, publicly available written information on patient safety activities in the state, additional input from AHRQ and NCPS to define a central organization and key contact, and referrals from the key contact in each state. The key contact in each state is typically the State Health Department team member who participated in the PSIC program, and interviewees will be recruited with that contact's assistance.
Questionnaires will be administered to all individuals who have participated in the PSIC training program using the Web-based questionnaires included in Attachments C and D. The participant questionnaire will be sent to all PSIC training participants via e-mail. All correspondence to training participants and leaders is presented in Attachments E and F, respectively; each message contains a hyperlink for easy access to the online questionnaire. AHRQ will provide the official e-mail addresses of all PSIC training program participants and leaders in electronic format. The e-mail address will be used as the "user name" to access the Web-based questionnaire.
The Web-based questionnaires will be accessible to PSIC training participants and CEOs 24 hours a day for a total of 30 days. Upon entering the questionnaire, respondents will view an introduction page that explains the questionnaire objectives and stresses the importance of participation. Respondents will then continue to a page where they are asked to enter their user name; the user name serves as a unique identifier to ensure that multiple questionnaires cannot be submitted by the same respondent, as sketched below. The access page is followed by a page with specific instructions on how to complete the questionnaire. Respondents will respond to questionnaire items by clicking pre-coded options for closed-ended items and typing in text boxes for open-ended items.
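The sketch below illustrates the duplicate-submission check described above, assuming submissions are keyed by the e-mail-address user name. The function and storage names are hypothetical stand-ins for the actual Web-questionnaire system.

```python
# Sketch of the duplicate-submission guard: the e-mail-address user name is
# the unique identifier, so a second submission under the same user name is
# rejected. All names here are hypothetical stand-ins for the actual system.
submitted_user_names = set()

def store_responses(key, responses):
    """Hypothetical persistence hook; the real system writes to AIR's server."""
    pass

def accept_submission(user_name, responses):
    """Accept a questionnaire submission only once per user name."""
    key = user_name.strip().lower()   # normalize the e-mail address
    if key in submitted_user_names:
        return False                  # duplicate: already submitted
    submitted_user_names.add(key)
    store_responses(key, responses)
    return True

print(accept_submission("participant@example.gov", {"q1": 4}))  # True
print(accept_submission("Participant@example.gov", {"q1": 5}))  # False (duplicate)
```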
Completed responses to the questionnaires will be backed up daily onto AIR's dedicated data collection server and will also be printed and stored in a locked cabinet. Data will be checked visually by researchers and analysts on a regular basis to ensure that responses are entered appropriately into the database. In the event of potential problems, such as irregular response patterns or responses that are not possible for a given question, analysts will identify the source of the problem and address it immediately; a sketch of such a check follows.
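The minimal sketch below flags responses that fall outside the valid code range for a closed-ended item. The item names and valid ranges are illustrative assumptions, not the actual questionnaire codebook.

```python
# Sketch of the routine data-quality check: flag responses that fall outside
# the valid code range for a closed-ended item. The item names and ranges are
# illustrative assumptions, not the actual questionnaire codebook.
VALID_RANGES = {
    "q1_satisfaction": (1, 5),        # e.g., a 5-point scale
    "q2_years_experience": (0, 60),
}

def flag_impossible(record):
    """Return the items in one response record with out-of-range values."""
    problems = []
    for item, (low, high) in VALID_RANGES.items():
        value = record.get(item)
        if value is not None and not (low <= value <= high):
            problems.append(item)
    return problems

# A response of 7 on a 1-5 scale is flagged for analyst review.
print(flag_impossible({"q1_satisfaction": 7, "q2_years_experience": 12}))  # ['q1_satisfaction']
```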
During the data collection period, invited respondents who have not yet responded will be contacted via e-mail to remind them of the opportunity to participate and of the importance of their feedback on the training program. The re-contact notice sent via e-mail will provide the hyperlink to access the questionnaire, the estimated time (in minutes) needed to respond, the impending submission deadline, and additional information regarding the privacy of responses and the confidentiality of personal information. The re-contact notice is provided in Attachments E and F.
Following data collection, questionnaire responses will be compiled and assessed formally for data quality to produce a finalized database for statistical analyses. Incomplete response data pose a substantial threat to confident interpretation and generalization of the study results. The general approach to handling incomplete response data is to salvage as much data as possible, using multiple techniques for examining patterns of missing data. Given the scope of this Web-based questionnaire, AIR will review questionnaire items with a substantial proportion of omitted responses (see the sketch below); the precise cutoff percentage is typically chosen once the distribution of missing data has been established. We will also determine whether responses are missing in a manner related to other observed values. If data are missing in a manner that affects the interpretability of the responses, descriptive statistics and point estimates of relations among variables may be adjusted to account for the missing data, for example through multiple imputation or full-information maximum likelihood estimation.
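The sketch below illustrates the item-level review of omitted responses. The 20 percent cutoff shown is a placeholder only, since, as noted above, the actual cutoff is chosen after the distribution of missing data has been examined.

```python
# Sketch of the item-level missing-data review: compute the share of omitted
# responses per item and flag items above a cutoff. The 20% cutoff is a
# placeholder; the actual cutoff is chosen after the distribution of missing
# data has been examined.
def omission_rates(records, items):
    """Fraction of records with a missing (None) value, per item."""
    n = len(records)
    return {item: sum(r.get(item) is None for r in records) / n for item in items}

def flag_items(rates, cutoff=0.20):
    return [item for item, rate in rates.items() if rate >= cutoff]

records = [{"q1": 4, "q2": None}, {"q1": 5, "q2": None}, {"q1": 3, "q2": 2}]
rates = omission_rates(records, ["q1", "q2"])
print(rates)              # {'q1': 0.0, 'q2': 0.666...}
print(flag_items(rates))  # ['q2'] under the placeholder cutoff
```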
Typically for census surveys, the nature of the missing data is explored, and an imputation strategy combining two or more imputation methods may be developed. For example, one multiple imputation approach for this project might combine hot-deck imputation with regression imputation, a method known as regression-based nearest-neighbor hot decking (a sketch follows). With this method, variance estimation departs from the traditional approach: both the sampling variance from the imputed data and the imputation variance, which accounts for the uncertainty associated with the imputed estimates, are estimated. More generally, a variety of imputation methods and variance estimation techniques can be applied to ensure the completeness of the data gathered.
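A minimal sketch of regression-based nearest-neighbor hot decking follows, using toy data: a regression is fit on complete cases, and each missing value is replaced by the observed value of the donor whose predicted value is closest. The variable names and data are illustrative, not drawn from the study.

```python
import numpy as np

# Sketch of regression-based nearest-neighbor hot decking on toy data: fit a
# regression on complete cases, then impute each missing value with the
# observed value of the donor whose predicted value is closest.
def nn_hot_deck(X, y):
    """Impute NaN entries of y using donors with the nearest predicted value."""
    observed = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(X)), X])              # add an intercept
    beta, *_ = np.linalg.lstsq(Xd[observed], y[observed], rcond=None)
    yhat = Xd @ beta                                        # predictions for all cases
    y_imp = y.copy()
    donors = np.flatnonzero(observed)
    for i in np.flatnonzero(~observed):
        nearest = donors[np.argmin(np.abs(yhat[donors] - yhat[i]))]
        y_imp[i] = y[nearest]                               # donor's observed value
    return y_imp

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])           # toy covariate
y = np.array([2.1, np.nan, 6.2, 7.9, np.nan])               # toy item with missing values
print(nn_hot_deck(X, y))
```

In the multiple-imputation setting, this procedure would be repeated m times with donors drawn stochastically; under the standard combining rules, the total variance of a point estimate is T = W + (1 + 1/m)B, where W is the average within-imputation (sampling) variance and B is the between-imputation variance, the latter capturing the uncertainty associated with the imputed estimates.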
An interview guide has been developed for the semi-structured interviews to be conducted in each state (Attachment G). The guide indicates the types of questions to be considered and is not to be read verbatim; not all questions apply to all interviewees. Questions will be adapted for each respondent group and, to some degree, for each individual, based on his or her experience with PSIC and with patient safety more generally. Each state's experience is unique, and this was the main criterion for selection; in this way, we can maximize the breadth of the data used to derive lessons learned. Interview topics include: patient safety experience; current role; expectations from the training (if applicable); organizational experience with reporting systems and patient safety tools; relevance of PSIC training concepts and tools to the current workplace setting; barriers and facilitators to implementation of recent patient safety projects; training and dissemination of patient safety concepts and tools to others; organizational patient safety culture and leadership support; and planned patient safety activities.
Semi-structured interviews will be conducted in person (or by phone when an in-person interview is not possible) at one or more of the participating organizations selected in each state. An experienced AIR interviewer will conduct all interviews after obtaining written informed consent. Each interview will last about 60 minutes. Interviews will be digitally recorded and transcribed, with any use of the interviewee's name deleted. Recordings, transcripts, and any additional notes from the interviews will be stored on secured computer servers at AIR.
The text files of the transcripts will be used to construct a database in a qualitative analysis software package, such as Atlas.ti, which will also be used for coding and analysis. A preliminary code list will be defined and then revised based on actual review of the data, after any additional common themes have been identified. The revised coding scheme will be used to code the appropriate text fragments in each transcript. Coding will be performed by two research assistants/associates. Intercoder agreement will be tested by double coding an initial set of interviews; once 80 percent agreement has been reached (a sketch of this check follows), independent coding will proceed. Analysis will be based on review of the coded text and on software queries that identify text fragments assigned particular codes of interest. A memo for each major code or topic of interest will be prepared, and these memos will be used to draft the "lessons learned" document.
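The sketch below illustrates the percent-agreement check implied by the 80 percent threshold; the code labels are illustrative examples only, not the study's actual coding scheme.

```python
# Sketch of the intercoder agreement check: simple percent agreement across
# the text fragments double coded by the two coders. The code labels are
# illustrative examples only.
def percent_agreement(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

coder_a = ["barrier", "leadership", "tools", "barrier", "training"]
coder_b = ["barrier", "leadership", "tools", "culture", "training"]
agreement = percent_agreement(coder_a, coder_b)  # 0.8
print("proceed to independent coding" if agreement >= 0.80 else "reconcile and re-code")
```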
The variability in response rates by questionnaire type has led to considerable confusion about what constitutes an acceptable response rate.3 Commonly accepted practice in psychological research involving census surveys of training participant groups, however, is a minimum of 75 percent. As with most surveys, the acceptable response rate is often not achieved initially. To bolster the response rate, AIR will employ two interventions: (1) time-staggered e-mail notices of the opportunity to participate in the questionnaire and (2) one telephone communication per invited respondent, encouraging participation in the AHRQ Web-based questionnaire of PSIC training participants. If at any point an invited respondent refuses to participate, he or she will not be contacted again through any means.
The most common way to ensure an acceptable response rate is to provide introductory information through advance notice from a known and respected source – in this case, AHRQ and/or NCPS. For this study, the AHRQ PSIC Project Officer and a representative of the NCPS training team will distribute an e-mail to all invited respondents providing advance notice of the study and underscoring the importance of participation (see Attachments E and F).
The aim of the proposed interventions for a low response rate is to ensure that the data are reliable and accurate. Because the respondents to the Web-based questionnaires represent the entire universe of PSIC training program participants and CEOs, the largest threat to the accuracy and reliability of the information gathered is demand characteristics, that is, "faithful subject" responding that skews evaluation results. Faithful subject responding is a form of participant bias in which respondents answer in a manner that affirms the research questions posed, helping to confirm a hypothesis rather than providing honest responses. To mitigate this threat to validity, every contact notice or communication with potential respondents will stress the importance of answering items honestly.
To test the Web-questionnaire procedures, AIR will employ two strategies: (1) a series of two to three cognitive laboratory interviews with AHRQ staff members and (2) a field test of the Web-questionnaires with a subsample of no more than nine potential respondents. The questionnaire will first be tested on a small group of volunteers using AIR's cognitive laboratory (CogLab) technique. The goal of the CogLab is to identify questionnaire items and procedures that are confusing or intrusive, as well as questions that elicit ambiguous answers. During the CogLab, a researcher trained in the technique asks the volunteer to "think aloud" while answering each question, allowing the researcher to examine the respondent's thought processes as he or she hears, interprets, and decides on an answer. The results of the cognitive laboratory will be used to refine the questionnaire prior to field testing.
Whenever a new Web-questionnaire data collection instrument is developed, it is important to field-test the procedures with a small group of people representative of the population of interest before administering the questionnaire on a wide scale. A field test will be conducted with a sample of nine potential respondents, including key stakeholders in this project. The entire field test will be conducted in three days, after which the resulting suggestions and feedback will be incorporated to produce the final questionnaire. Any potential data collection problems will be assessed and fixed to meet the initial intent of the questionnaire specifications. The purpose of the field test is to ensure that data are being collected and stored properly and to address any fine-tuning of procedures that may be required. If fine-tuning is required, OMB will be notified in a memorandum accompanied by a copy of the final version of the Web-questionnaire.
The American Institutes for Research (AIR) will serve as the primary consultant for the statistical aspects of the design and analysis of the Web-questionnaire data. Dr. Laura Steighner, Senior Research Scientist at AIR, is the primary point of contact for statistical design and analyses. She can be reached at [email protected] or 202-403-5064.
1 Barclay, S., Todd, C., Finlay, I., Grande, G. & Wyatt, P. (2002). Not another questionnaire! Maximizing the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs. Family Practice, 19, 105-111.
2 Invitations, reminders, and thank you e-mails are provided in Attachments E and F of our original OMB application.
3 Matzkin, R. (2007, November). Nonparametric survey response errors. International Economic Review, 48(4), 1411-1427.