Supporting Statement Part B -- CAHPS Fieldtest of Proposed HIT Questions Methodology 6-16-2009


CAHPS Field Test of Proposed Health Information Technology Questions and Methodology

OMB: 0935-0158


SUPPORTING STATEMENT


Part B








CAHPS Field Test of Proposed Health Information Technology Questions and Methodology

0935-0124








May 2009








Agency for Healthcare Research and Quality (AHRQ)
















B. Statistical Methods

1. Potential Respondent Universe and Sample Selection Method

2. Information Collection Procedures

3. Methods to Maximize Response Rate

4. Tests of Procedures

5. Statistical Consultation and Independent Review







































B. STATISTICAL METHODS


1. Potential Respondent Universe and Sample Selection Method



Respondents will be selected from up to six purposively chosen sites (health care providers and health insurance plans) that have implemented health information technology (HIT) systems, such as electronic health records (EHRs) and electronic prescription refills, and whose systems are used by sufficient numbers of enrollees (i.e., at least 2,400 enrollees per site). At each site, the potential respondent universe will be patients who have been receiving care from a clinician at the site for at least one year prior to the survey and who have used one or more features of the site's EHR system. EHR system managers can track which patients log on to the system and which features they use (e.g., viewing lab results, requesting prescription refills).

Sample selection at each site will be carried out jointly by senior leadership at the site (e.g., the chief information officer) and a survey vendor experienced in conducting CAHPS surveys. We will ask each site to provide a list of its enrollees who have seen a provider in the last 12 months and who have logged onto the personal health record system in the last 12 months, and we will randomly select a sample of these enrollees for the field test using common statistical techniques (e.g., computerized random number generation applied to the enrollee list). When possible, we will stratify the enrollees at a site by extent of HIT exposure to ensure a mix of different enrollees in the study (e.g., enrollees who use many HIT functions versus those who use few).

Institutional Review Boards (IRBs) at Yale and RAND evaluated the study to ensure proper protection of patients' rights to privacy and confidentiality as well as avoidance of harm. The study received approval from both IRBs.
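The stratified random draw described above can be sketched as follows. This is an illustrative Python sketch only, not the survey vendor's actual procedure; the enrollee record fields and the stratum labels (e.g., high versus low HIT use) are hypothetical.

```python
import random

def draw_stratified_sample(enrollees, n_per_site, strata_key):
    """Randomly sample enrollees within strata of HIT exposure.

    enrollees:  list of dicts, each with an 'id' and a stratum label
                stored under strata_key (e.g. 'high' vs 'low' HIT use).
    n_per_site: total invitations to draw for the site (e.g. 2400).
    """
    # Group enrollees by stratum.
    strata = {}
    for e in enrollees:
        strata.setdefault(e[strata_key], []).append(e)

    # Allocate the draw proportionally across strata, then sample
    # without replacement inside each stratum.
    sample = []
    for label, members in strata.items():
        n_stratum = round(n_per_site * len(members) / len(enrollees))
        sample.extend(random.sample(members, min(n_stratum, len(members))))
    return sample
```

Proportional allocation keeps the drawn sample's mix of high- and low-use enrollees close to the site's actual mix, while the within-stratum draw guarantees both groups appear in the field test.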


We will draw a sample large enough to yield approximately 7,200 respondents. Assuming a 50% response rate, we will draw approximately 14,400 patients to achieve this total.
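The arithmetic behind this draw is the target number of completes divided by the expected response rate, rounded up. A one-line sketch:

```python
import math

def invitations_needed(target_respondents, expected_response_rate):
    """Invitations to draw so the expected completes hit the target."""
    return math.ceil(target_respondents / expected_response_rate)

# 7,200 completes at an assumed 50% response rate:
print(invitations_needed(7200, 0.50))  # -> 14400
# Per site: 1,200 completes -> 2,400 invitations, matching Exhibit 1.
```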


Sites will be selected for geographic distribution and for having substantial numbers of patients with exposure to health information technology. A summary of the sites' geographic locations and sample sizes is shown in Exhibit 1.


Exhibit 1. Location and Sample Size of Field Test Sites

Site     Geographic Location   Invited Participants   Expected Responses
Site 1   West Coast            2,400                  1,200
Site 2   West Coast            2,400                  1,200
Site 3   South                 2,400                  1,200
Site 4   East Coast            2,400                  1,200
Site 5   East Coast            2,400                  1,200
Site 6   East Coast            2,400                  1,200

2. Information Collection Procedures

Testing will be done using the Internet, mail, and telephone survey modes of administration. Those assigned to Internet administration will be sent an email that includes an invitation to participate along with a URL link to a web-based survey hosted on a secure server. Individuals who do not complete the survey via the Internet will be mailed a questionnaire and a cover letter. The sites will be divided between RAND's Survey Research Group, a division within the RAND Corporation, and the Center for Survey Research (CSR), University of Massachusetts, Boston, an organization contracted by Yale University to complete field testing. RAND will use CfMC software to administer the survey, while CSR will use Snap software.

In addition to the Internet, we also anticipate a mixed mail-telephone mode of data collection which will include the following steps:


  • Mailing an advance notification letter

  • Mailing the questionnaire and a cover letter

  • Mailing a postcard reminder

  • Mailing the questionnaire a second time to non-respondents

  • Making a minimum of six telephone calls to each mail non-respondent, beginning approximately two weeks after the final mailing, to complete a telephone interview



3. Methods to Maximize Response Rate

Every effort will be made to maximize the response rate while retaining the voluntary nature of the effort. We anticipate a response rate of approximately 50% based on our experience with past CAHPS surveys. CAHPS survey response rates for 2005 ranged from 19%-71% for adult commercial populations, 17%-59% for adult Medicaid populations, and 14%-50% for child Medicaid populations (http://aspe.hhs.gov/hsp/06/Catalog-AI-AN-NA/CAHPS.htm). Published reports show similar rates, e.g., 54% for a medical group CAHPS survey (Hepner et al. 2005 Eval Health Prof), and 50% in 2001 and 46% in 2002 for the CAHPS dental care survey (Hays et al. 2006 Med Care).

We will provide an advance notice prior to sending the survey, including a letter explaining what the survey is about, who is conducting it and why, and providing contact information for questions. The second mailing and telephone follow-up are expected to produce significant increases in response. For those assigned to Internet data collection, we will send up to three email invitations/reminders. Those who do not respond to the invitations will be mailed a paper version of the survey via U.S. mail and asked to complete it that way.


Surveys generally do not yield complete responses from every individual sampled from the population. In certain situations, nonresponse can bias the survey findings if appropriate adjustments are not made. There are two basic types of survey nonresponse. Unit nonresponse is the failure of a member of the sample to respond to the survey at all. Item nonresponse is the failure of a respondent to answer one or more survey items that the respondent is eligible to answer. In this analysis, we will examine and model patterns of both unit and item nonresponse to the field test CAHPS HIT Survey and assess the potential impact of nonresponse bias and the corresponding adjustments. A common set of administrative variables (e.g., age, gender, race/ethnicity) will be used to predict unit nonresponse. These variables and others collected on the survey itself will be used as predictors of item nonresponse. We will use case-mix adjustment and nonresponse weights to more accurately reflect consumer experiences with HIT across different physicians and sites of care.

We will estimate multivariate logistic regression models to analyze the factors associated with unit nonresponse and item nonresponse. The initial models will include the full set of potential predictor variables. Subsequently, we will alter the model to consider possible interactions and to determine the most parsimonious specifications. Inverse probability weights will be generated from the prediction of the final unit nonresponse model. Case-mix models will be parameterized by linear age, race/ethnicity indicators, linear education, and linear self-reported health status.
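The weighting step above (fit a logistic model of response, then weight each respondent by the inverse of their predicted response propensity) can be illustrated with a minimal NumPy sketch. This is not the analysis code; it uses a hand-rolled gradient-ascent logistic fit rather than a production survey package, and the covariates and data are hypothetical.

```python
import numpy as np

def fit_logistic(X, y, n_iter=1000, lr=0.5):
    """Plain gradient-ascent logistic regression (intercept added).

    X: (n, k) covariate matrix (e.g. age, gender indicators).
    y: (n,) 0/1 response indicator (1 = returned the survey).
    """
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted propensity
        w += lr * Xb.T @ (y - p) / len(y)       # average log-lik gradient
    return w

def response_propensity(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def nonresponse_weights(X_respondents, w):
    """Inverse-probability weights: respondents from groups that
    answer rarely count for more, offsetting unit nonresponse."""
    return 1.0 / response_propensity(X_respondents, w)
```

The model is fit on the full drawn sample (respondents and non-respondents, since the administrative predictors are known for both), and the weights are then applied to the respondents' survey answers.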

4. Tests of Procedures

To achieve the goals of the field test the following analyses will be done:


  • Psychometric analysis focusing on the reliability and construct validity of the items included in the analyses. Items will be assessed for their ability to discriminate among clinicians. Items will also be assessed in terms of their associations with existing CAHPS items and domains using correlations and factor analysis. The domain structure of the survey will be assessed.


  • Assessment of the equivalence of survey responses by mode of administration: mail, telephone, and Internet. We will compare the characteristics of respondents (i.e., age, gender, education) completing surveys by the different modes of administration. We will also compare mean CAHPS item and composite scores by mode of administration using between group t-tests. Finally, we will estimate internal physician-level reliability for items and composites for each mode and evaluate the significance of difference of reliability estimates between modes.


  • Evaluation of potential case-mix adjusters. Results will be categorized by respondents' gender, age, education, self-reported health status, and whether someone helped complete the survey. These variables have been shown to be significantly associated with CAHPS reports and ratings in earlier versions of the survey. The relationship of CAHPS results to these variables will be reviewed, and unadjusted and adjusted results will be compared.


  • Comparison of a 4-point response scale and a 6-point response scale. We will compare responses to the corresponding 4-point and 6-point communication and office staff items. Cross-tabulations and polychoric correlations will be estimated between pairs of items. We will also estimate the same associations after collapsing the 6-point items, combining the “Never” with “Almost never” and the “Always” with “Almost always” response options.


  • Impact of a post-paid incentive payment. The response rate for the group of patients randomized to the post-paid incentive will be compared with the response rate of patients from the same site who were not offered the incentive, to determine whether the incentive resulted in a difference in response rate. We will use a chi-square test to evaluate whether the response rate among those offered an incentive is significantly (p < 0.05) higher than among those not offered it. We will also evaluate the magnitude of the response rate difference by the cost of the incentive. Finally, we will evaluate the significance of differences in individual characteristics (gender, age, education, self-reported health status) between respondents in the incentive and no-incentive groups using chi-square and t-statistics, as appropriate.
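The incentive comparison in the last bullet reduces to comparing two proportions. As an illustration, the sketch below implements a two-proportion z-test, which for a 2x2 table is equivalent to the chi-square test named above; the group sizes and response counts in the usage comment are hypothetical.

```python
from math import erf, sqrt

def two_proportion_z_test(n1, r1, n2, r2):
    """Compare response rates r1/n1 (incentive) vs r2/n2 (control).

    Returns (z, two-sided p-value).  z**2 equals the 1-df chi-square
    statistic for the corresponding 2x2 table.
    """
    p1, p2 = r1 / n1, r2 / n2
    p_pool = (r1 + r2) / (n1 + n2)                       # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # null std. error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 600/1,200 responses with the incentive vs 540/1,200 without:
z, p = two_proportion_z_test(1200, 600, 1200, 540)
```

With group sizes on the order of 1,200, this test has good power to detect the modest response rate differences incentives typically produce.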




5. Statistical Consultation and Independent Review

Input from statisticians will be obtained to develop, design, conduct, and analyze the information collected from this survey. This statistical expertise will be provided by Marc Elliott, Ph.D. (RAND) and Alan Zaslavsky, Ph.D. (Harvard).


