
Supporting Statement for Paperwork Reduction Act Submission

“Home Health Quality Measures and Data Analysis”


Part B: Collections of Information Employing Statistical Methods


1. Description of the potential respondent universe and sampling/other respondent selection methods to be used.


The data will be collected from home health agencies (HHAs) in seven states: Connecticut, Massachusetts, Alabama, Georgia, Tennessee, Illinois, and California. A total of 570 HHAs (281 "Treatment" and 289 "Control") from these seven states volunteered to be part of the Pay for Performance (P4P) Demonstration. The assignment of HHAs into groups was done by the P4P Demonstration contractor, Abt Associates, Inc. The following table shows, by state, the total number of active HHAs identified on Home Health Compare (as of December 2007) and the number (and percentage of the state total) of volunteer HHAs in the "Treatment" and "Control" groups.

State | Region    | # Active HHAs | # (%) Volunteer | # (%) Treatment | # (%) Control
CT    | Northeast |            86 |         50 (58) |         24 (28) |       26 (30)
MA    | Northeast |           129 |         50 (39) |         24 (19) |       26 (20)
AL    | South     |           146 |         55 (38) |         26 (18) |       29 (20)
GA    | South     |           101 |         58 (57) |         26 (26) |       32 (32)
TN    | South     |           139 |         89 (64) |         47 (34) |       42 (30)
IL    | Midwest   |           490 |        132 (27) |         67 (14) |       65 (13)
CA    | West      |           650 |        136 (21) |         67 (10) |       69 (11)
Total |           |         1,741 |             570 |             281 |           289

Percentages are relative to the number of active HHAs in each state.


As the table shows, the percentage of volunteer HHAs relative to the total number of active HHAs in a state ranges from a high of 64% in Tennessee to a low of 21% in California. Within each state, the volunteer HHAs were randomly assigned in approximately even numbers to either the "Treatment" or the "Control" group.


As described elsewhere, the primary objective of this project is to evaluate the efficacy of the P4P approach to improving HHA performance based on seven publicly reported quality measures. Using a budget-neutral approach, HHAs can earn performance-based bonuses (for absolute performance or for improvement in performance) on these seven measures. One element of the evaluation is to determine what strategies (processes, policies) HHAs employed to improve their performance on these measures. Each HHA will automatically be directed, using its CMS Certification Number (CCN), to one of two surveys based on its assignment in the P4P Demonstration by Abt Associates.


2. Procedures for the collection of information


a. Statistical Methodology for Sample Selection


Abt Associates assigned the home health agencies to the "Treatment" and "Control" groups, and notified them of their assignments, based on the following characteristics of the HHA:

  • density (urban vs. rural) according to their Metropolitan Service Area classification,

  • size as defined by number of episodes (small, medium, large, or unknown),

  • control status of the HHA, i.e., nonprofit, proprietary, or government control, and

  • affiliation status of the HHA, i.e., freestanding or hospital-based.

The P4P Demonstration contractor, Abt Associates, Inc., stratified the volunteer HHAs into one of 336 cells (state X density X size X control & affiliation). The HHAs in each cell were alternately assigned to either the "Treatment" or the "Control" group. The number of HHAs in the "Treatment" and "Control" groups was then checked across the entire sample frame to confirm that the two groups were approximately equal in size (281 vs. 289, respectively). Agency size (small, medium, or large) was operationally defined based on the number of episodes reported from June 2005 through July 2006: small, <1,000 episodes; medium, >=1,000 and <=4,000 episodes; and large, >4,000 episodes.
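
For illustration only, the following Python sketch shows the stratify-then-alternate assignment logic described above. The data layout, field names, and function are assumptions made for this example; this is not Abt Associates' actual procedure or code.

    from itertools import groupby

    def assign_groups(hhas):
        """Assign volunteer HHAs to Treatment/Control by alternating within strata.

        hhas: list of dicts with hypothetical keys 'ccn', 'state', 'density',
        'size', and 'control_affiliation' (7 x 2 x 4 x 6 = 336 possible cells).
        """
        stratum = lambda h: (h['state'], h['density'], h['size'],
                             h['control_affiliation'])
        assignments = {}
        for _, members in groupby(sorted(hhas, key=stratum), key=stratum):
            # Alternate within each cell so the two groups stay balanced on
            # all four stratification variables.
            for i, hha in enumerate(members):
                assignments[hha['ccn']] = 'Treatment' if i % 2 == 0 else 'Control'
        return assignments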


The analysis of the survey results will be conducted in aggregate across the entire sample frame. The "Treatment" survey contains 3 demographic items and 16 items that can be quantified using at least descriptive statistics, while the "Control" survey contains 3 demographic items and 13 items taken from the “Treatment” survey instrument. There are 13 items from each survey that can be compared using parametric or non-parametric statistics.


b. Estimation Procedure:


Based on previous estimates of a survey completion rate of between 80 and 90 percent, we anticipate that between 225 and 253 "Treatment" surveys and between 231 and 260 "Control" surveys will be available for analysis. We anticipate that the return rate for "Control" HHAs will be somewhat lower than for "Treatment" HHAs because the former group is not eligible for the monetary incentive in the P4P Demonstration and may therefore be somewhat less motivated to share innovative clinical practices adopted during the first year of the Demonstration. We will produce separate descriptive analyses of the frequency of responses to each question for each of the two surveys. Additionally, we will compute the appropriate parametric or non-parametric comparative test for the nine items common to both surveys. Even using the lower estimates of response rates (assuming that the non-responses are distributed randomly across each of the four regions), the sample sizes will be sufficient to compute meaningful confidence intervals (see Figure 1 in the next section).
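
As a quick arithmetic check, the anticipated returns are simply the group sizes multiplied by the assumed completion rates (the rates themselves are the assumptions stated above):

    # Expected survey returns at the assumed 80% and 90% completion rates.
    n_treatment, n_control = 281, 289
    for rate in (0.80, 0.90):
        print(f"rate={rate:.0%}: Treatment={round(n_treatment * rate)}, "
              f"Control={round(n_control * rate)}")
    # rate=80%: Treatment=225, Control=231
    # rate=90%: Treatment=253, Control=260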


c. Degree of Accuracy Needed:


For purposes of evaluating the impact of the demonstration on provider behavior, it will be necessary to detect substantial differences between control and treatment group home health agencies in terms of reported changes in agency practice. "Substantial" in this context is defined as any difference of 10% to 15% or greater in survey responses. The expected sample size of roughly 460, divided approximately 55:45 between treatment and control providers, will be sufficient to detect differences of this magnitude with a reasonable level of confidence. As indicated in Figure 1¹, if the control group percentage responding "yes" to a particular yes/no survey item is 20% and the experimental group percentage is 10%, there is an 80% chance of detecting the difference with the given sample size. In the other direction, where the experimental group percentage is 30%, the probability of detecting the difference is slightly lower, although still greater than 60%. A difference of 15% in either direction would be detected with almost 100% probability.


Figure 1: Power Calculation for Sample Size of 250/210 in Experimental/Control Group, Where Control Group Proportion is .20, Type I Error Probability of .05

[Figure not reproduced: power curve for the comparison described in the caption.]
For an item with a 50:50 split, the power function is shown in Figure 2. The probability of detecting a difference of 10% in either direction is slightly greater than 50%, while a 15% difference can be detected more than 80% of the time.
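
Footnote 1 indicates that these power values were computed with the PS Power and Sample Size Program using Fisher's Exact Test. Purely as an illustrative sketch, a normal-approximation version of the calculations behind Figures 1 and 2 can be written with statsmodels; because the approximation differs from Fisher's Exact Test, the resulting values will differ slightly from those shown in the figures.

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    analysis = NormalIndPower()
    n_treat, n_ctrl = 250, 210            # group sizes assumed in Figures 1 and 2

    for p_ctrl in (0.20, 0.50):           # control proportions from Figures 1 and 2
        for delta in (-0.15, -0.10, 0.10, 0.15):
            p_treat = p_ctrl + delta
            effect = proportion_effectsize(p_treat, p_ctrl)  # Cohen's h
            power = analysis.power(effect_size=effect, nobs1=n_treat,
                                   ratio=n_ctrl / n_treat, alpha=0.05,
                                   alternative='two-sided')
            print(f"control={p_ctrl:.2f}  treatment={p_treat:.2f}  power={power:.2f}")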



Figure 2: Power Calculation for Sample Size of 250/210 in Experimental/Control Group, Where Control Group Proportion is .50, Type I Error Probability of .05

[Figure not reproduced: power curve for the comparison described in the caption.]

d. Unusual Problems Requiring Specialized Sampling Procedures:


No specialized sampling procedures were required for this project.


e. Use of Periodic Data Collection Cycles:


This is a one-time study using these two survey instruments. During the second year of the evaluation, two different survey instruments may be used to assess the impact of being awarded a performance bonus vs. not being awarded a performance bonus. If this option is pursued, a separate PRA package will be created for these instruments.


3. Methods to maximize response rates and to deal with issues of non-response.


Maximizing response rates

Two activities will maximize the response rate: 1) minimizing the burden of completing the survey and 2) maximizing the number of contacts/reminders to complete the survey using multiple modalities. Each of these is addressed separately below.


Survey Burden:

As identified in Part A, Section 12 “Burden Estimate”, the Web survey instrument was tested in both its “paper and pencil” and “Web delivery” formats by individuals with backgrounds and expertise similar to those of the individuals expected to complete the survey items. This testing indicated that the total time to complete the Web-based survey would likely be less than 30 minutes, even if preparation time was 50% of the actual completion time. A second way the burden of completing the survey was mitigated was to conduct cognitive testing with this same group of individuals in the “Web delivery” modality. After the survey items were transformed into their Web-based format, each of the senior clinicians was asked to complete the survey in that format while being interviewed by a senior member of the project team. Specific cognitive probes were used throughout the interview/testing process, such as “Please think ‘aloud’ as you answer this question. Please tell me how you chose your answer. What did you have to think about? Do the column headings for the matrix make sense to you? Why/why not?” The specific responses by the senior clinicians to the 25 cognitive burden questions related to the survey are included in Appendix B of this document. The survey items and Web-based format were revised based on the responses to these cognitive probes: 3 changes were made to the cover memo and 11 changes to the Web-based survey. Additional details regarding the cognitive burden testing can be found in Part A: Attachment 2 and appendices.


Maximizing Contacts:

The University of Texas provides the following guidance, through its Instructional Assessment Resources, about improving response rates for surveys:

  • The better your respondents know you, the better your response rate

  • Request participation from respondents in advance

  • Give respondents a sufficient amount of time to complete the survey

  • Provide clear instructions on how to complete and submit the survey when it is administered

  • Design the survey so it is easy to read and follow

  • For mail or online surveys, send reminders during the survey period thanking the respondents who have completed the survey, while reminding others about the deadline for completing the survey

  • For online surveys, always provide a link to the survey and send a reminder a day before closing the survey

  • Offer an incentive for participating

Our expected response rates are based on a very aggressive approach and the use of multiple modalities when re-contacting agencies. This aggressive, multi-modality re-contact approach is supported by the research and principles set forth by Dillman and others (1998, 2007). Additionally, the characteristics of the home health agencies involved in the P4P Demonstration will be described further to demonstrate that this group is highly motivated to participate in these kinds of activities.


As described by D. A. Dillman in Mail and Internet Surveys (2nd edition, 2007), surveys are a social exchange in which the needs and ease of response of the survey taker must be addressed. Based on an enhanced follow-up protocol (see Part A: Attachment 2, Appendices for more details) using repeated and progressively more intensive contacts with the HHAs that do not respond initially, we anticipate that 80-90 percent of the agencies that volunteered to participate in the P4P Demonstration will complete the online surveys. This expected response rate is based on our past experience with similar types of projects, the fact that these are volunteer agencies, and the simplicity and brevity of the instruments. In addition to the cover letter/invitation to participate in the survey, each HHA not completing the survey within a designated timeframe will receive as many as four follow-up contacts during the period when the surveys are available online. Both the re-contact schedule and the materials provided to the agencies at each time point have been revised. The new schedule is as follows:

  1. Initial notification to home health agencies participating in the P4P Demonstration

Materials/Method:

    1. Notification letter addressed by name to the administrator or Director of Nursing for participating home health agencies from the CMS P4P Demonstration Evaluation Project Officer (William Buczko, PhD) inviting their participation in completing the Web-based survey

    2. Information sheet (See Part A: Attachment 1 for example) that 1) outlines items that will be included in the survey and can be used as a navigation aid while completing the Web-based survey, 2) provides the URL address for accessing the Web-based survey, 3) reiterates the security protocols in place to ensure that the information provided will remain secure, 4) includes the date for completing the Web-based survey (the work day nearest to two weeks and three days from the date of the mailing), and 5) provides an abbreviated summary of expected follow-up contacts if the Web-based survey is not completed by the specified date.

  2. First follow-up (within two working days after the date specified in the initial notification)

Materials/Method:

    1. Email sent to administrator or Director of Nursing with a colorful, animated reminder message about completing the Web-based survey and the new date to complete (one week from the date of the email).

  3. Second follow-up (within two working days after the date specified in the first follow-up)

Materials/Method:

    1. Letter addressed by name to the administrator or Director of Nursing from the CMS contractor (University of Colorado Denver (Anschutz Medical Center)) with a request to complete the Web-based survey, a challenge to “be counted” as many of their peers have, and a new completion date one week and three days from the date of the letter.

  4. Third follow-up (within two working days after the date specified in the second follow-up)

Materials/Method:

    1. Letter addressed by name to the administrator or Director of Nursing from the CMS contractor (University of Colorado Denver (Anschutz Medical Center)) with a hard copy of the survey instrument. The letter will explain the agency’s options: either complete the hard copy of the survey, mail it back to the CMS contractor, and have the contractor enter the data, or use the hard copy as a guide when completing the Web-based survey online. The new completion date will be one week and three days from the date of the mailing.

  5. Fourth follow-up (within two working days after the date specified in the third follow-up)

Materials/Method:

    1. Personal phone call to the administrator or Director of Nursing from the CMS contractor (University of Colorado Denver (Anschutz Medical Center)). The phone call will follow a script whose goal is to gather the data needed to complete the Web-based survey.

The entire period from initial contact to fourth follow-up (if needed) is approximately two calendar months.
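
The two-month estimate can be verified by totaling the intervals above (a rough sketch: the start date is hypothetical, and working days are approximated here as calendar days, which slightly understates the true span):

    from datetime import date, timedelta

    start = date(2009, 9, 1)                      # hypothetical mailing date
    intervals = [
        ("Initial completion date", 17),          # two weeks and three days
        ("First follow-up sent", 2),              # within two working days
        ("First follow-up completion date", 7),   # one week
        ("Second follow-up sent", 2),
        ("Second follow-up completion date", 10), # one week and three days
        ("Third follow-up sent", 2),
        ("Third follow-up completion date", 10),
        ("Fourth follow-up call", 2),
    ]
    day = start
    for label, days in intervals:
        day += timedelta(days=days)
        print(f"{label}: {day}")
    print(f"Total span: {(day - start).days} days")  # 52 days, about two months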


The expected response rate for the data collection activities is between 80 and 90 percent. Although we will not be providing any monetary incentive to complete the survey, we believe our approach meets each of the other items suggested in the University of Texas guidance and is consistent with the findings of the other researchers. The University of Colorado Denver, Division of Health Care Policy & Research has been involved with home health care research for more than two decades and is well known within the health care community. Each of the Treatment and Control HHAs will receive a packet of materials announcing the survey that will contain the following items:

  • a cover letter explaining the purpose of the survey and its connection to the HHA's participation in the P4P Demonstration,

  • an information sheet with directions for completing the Web survey (See Part A: Attachment 1 for an example) that includes the URL for accessing the online survey on a secure Web site (including individualized passwords to gain access),

  • the dates of the 30-day window when the online site will be available for their use, and

  • contact information (email and phone) for the University of Colorado Denver, Division of Health Care Policy & Research to address any questions they have, including access problems.

When the HHA representative accesses the secure Web site to complete the survey online, the cover page of the online survey contains an abbreviated description of the purpose of the survey, the option to print a PDF version of the survey, and contact information for the University of Colorado Denver, Division of Health Care Policy & Research.


Contractor staff takes very seriously the need to establish and maintain a positive rapport with participating HHAs. In virtually all cases, HHAs can expect a response to their email and phone questions within one working day at the latest, with quicker turnaround being more typical. Contact with the Demonstration contractor, Abt Associates, Inc., will be maintained throughout the evaluation process to identify whether any of the original HHAs have dropped out of the study. Based on this plan of action for supporting HHA participation, we believe the projected response rate for this project is a realistic estimate.


Non-response analysis

Non-response is a potential issue with any survey-based data collection effort. Given the level of detail and effort exhibited by Abt Associates, Inc. in establishing HHA characteristics when assigning individual HHAs to either the Treatment or Control group, patterns of non-responding HHAs will be relatively easy to identify. Nonresponse bias will be addressed using Guideline 3.2.9 of the OMB Standards and Guidelines for Statistical Surveys (September 2006), Section 3.2, Nonresponse Analysis and Response Rate Calculation, pp. 16-17. We anticipate that the overall unit response rate will be close to, but may not exceed, 80 percent. The purpose of the nonresponse bias analysis is to determine whether the data are missing completely at random; this is a function not only of the response rate but also of how much the respondents and nonrespondents differ on the survey variables of interest. In this case, the key stratifying variable (treatment vs. control) is the variable of interest. Comparisons of overall treatment vs. control response/nonresponse rates for completion of the survey and for parallel items between the two surveys will be computed and reported. Other potential stratification variables, such as state and profit/non-profit status, will also be tested for nonresponse bias if these variables are chosen as stratification variables in the analyses.
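
A minimal sketch of the planned response-rate comparison appears below, using a two-proportion z-test from statsmodels. The completion counts are hypothetical placeholders; only the group sizes (281 and 289) come from the text above.

    from statsmodels.stats.proportion import proportions_ztest

    completed = [236, 240]   # Treatment, Control (hypothetical completion counts)
    invited = [281, 289]     # group sizes from Section 1

    stat, p_value = proportions_ztest(completed, invited)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")
    # A non-significant p-value is consistent with response rates that do not
    # differ by group; a significant difference would prompt further bias analysis.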


4. Tests of procedures and/or methods to be undertaken


Estimation Rates Across Item Responses

Response rates for Treatment and Control HHAs will initially be characterized separately using percentages of HHAs choosing particular item response options. In some cases, we anticipate the need to collapse the number of item options into simpler groupings where appropriate, e.g., five-point Likert-type scales into three-point scales. In other cases, a Pareto analysis may suggest identifying the one or two most frequent options and then collapsing the remaining item options into an "Other" category. Confidence intervals around these estimates will be computed and displayed as appropriate.
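
As an illustration of the confidence intervals mentioned above, the following sketch computes a 95% interval around a single item-response percentage using statsmodels. The counts are hypothetical, and the Wilson method is one reasonable choice among several.

    from statsmodels.stats.proportion import proportion_confint

    n_respondents = 225     # lower-bound Treatment returns from Section 2b
    n_chose_option = 90     # hypothetical count choosing a given response option

    low, high = proportion_confint(n_chose_option, n_respondents,
                                   alpha=0.05, method='wilson')
    print(f"Estimate: {n_chose_option / n_respondents:.1%} "
          f"(95% CI {low:.1%} to {high:.1%})")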


Comparison of Rates between Groups

For the nine items that are common to both the Treatment and Control surveys, comparative bar or pie charts will be created to represent the rates for each group. Additionally, non-parametric statistics such as the chi-square test will be used to provide statistical measures of significant differences between the two groups. These comparisons will inform the interpretation of differences between the Treatment and Control groups and, later, among HHAs from the Treatment group that were able to demonstrate meaningful performance differences.
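
For instance, a minimal sketch of such a comparison with SciPy, using hypothetical counts for one shared item after collapsing a five-point scale to three categories:

    from scipy.stats import chi2_contingency

    # Rows: Treatment, Control; columns: Disagree, Neutral, Agree (hypothetical).
    table = [[40, 60, 125],
             [55, 70, 106]]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square({dof}) = {chi2:.2f}, p = {p_value:.3f}")
    # A small p-value would indicate that Treatment and Control agencies
    # answered this item differently.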


5. Individuals responsible for statistical design, data collection, and/or data analysis


Data will be collected and analyzed as part of Contract Number HHSM 500-2005-0022I, “Evaluation of the Home Health Pay for Performance Demonstration”. The following table lists the names, areas of responsibility, and contact information of the individuals responsible for the design, collection, and analysis of the data.


Name, affiliation                  | Area of responsibility                                                          | Contact information
William Buczko, CMS, DRTM          | CMS Project Officer for the contract under which this study is being conducted | [email protected], 410-786-6593
Dr. David Hittle, UCD, HCPR        | Project Director - overall project design and implementation                   | [email protected], 303-724-2430
Dr. Eugene Nuccio, UCD, HCPR       | Co-Project Director - survey design and data analysis                          | [email protected], 303-724-2479
Ms. Angela Richard, MSN, UCD, HCPR | Co-Project Director - survey design                                            | [email protected], 303-724-2442
Mr. Don Keller, UCD, HCPR          | Survey development and testing                                                 | [email protected], 303-724-2429


The University of Colorado consulted Abt Associates regarding the processes Abt used to randomly assign volunteer HHAs to the treatment and control groups. As stated previously, the designation of an HHA as a treatment or control HHA for the purposes of the Demonstration was done by Abt, and our stratification by treatment and control is based on that designation. Additionally, Abt Associates was asked whether it had informed the HHAs of their status as either a treatment or control agency. Abt did inform the HHAs of their status and, hence, of their eligibility for a potential monetary award based on performance.


References:

Beatty, PC and Willis, GB “Research Synthesis: The practice of cognitive interviewing”, Public Opinion Quarterly, May 2007, pp. 1-25.


Dillman, DA; Tortora, RD; and Bowker, D “Principles for Constructing Web Surveys”, SESRC Technical Report 98-50, Pullman, Washington, 1998.


Dillman, DA Mail and Internet Surveys: The tailored design, Second Edition—2007 Update, John Wiley: Hoboken, NJ, 2007.


Goldenberg, KL “Using Cognitive Testing in the Design of a Business Survey Questionnaire”, American Association for Public Opinion Research, Salt Lake City, UT, May 1996.


Levine, RE; Fowler, Jr., FJ; and Brown, JA “Role of Cognitive Testing in the Development of CAHPS Hospital Survey”, Health Services Research, Vol. 40, No. 6, Dec. 2005, pp. 2037-2056.


Uhrig, JD; Squire, C; McCormick, LA; Bann, C; Hall, PK; An, C; and Bonito, AJ “Questionnaire Development and Cognitive Testing Using Item Response Theory (IRT)” Final Report presented to Centers for Medicare & Medicaid Services, Baltimore, MD, Feb. 5, 2002.


Westat “Survey of ATO Applicants 2000: Methods report”, Report submitted to National Institute of Standards and Technology, Advanced Technology Program, Gaithersburg, MD, December 2003.

University of Texas, Instructional Assessment Resources http://www.utexas.edu/academic/diia/assessment/iar/teaching/gather/method/survey-Response.php

1 Power calculations were derived using the PS Power and Sample Size Program, Version 3.0, January 2009, as documented in Dupont WD, Plummer WD Jr. Power and sample size calculations: a review and computer program. Control Clin Trials. 1990 Apr;11(2):116-28. Calculations are for a comparison of proportions for two independent samples, using Fisher's Exact Test.

