Supporting Statement B for 0938-0915


Medicare Contractor Provider Satisfaction Survey (MCPSS) and Supporting Regulations in 42 CFR 421.120 and 421.122

OMB: 0938-0915


(This is an excerpt from the document submitted as Supporting Statement A.)

C. Collection of Information Employing Statistical Methods



C-1 Potential Respondent Universe

The target population for the Survey consists of all Medicare Providers served by Medicare Contractors across the country; CMS will select a sample designed to yield 20,514 completed surveys from Providers. As shown in Table 4, the sample of Providers will be selected from 27 Fiscal Intermediaries, 20 Medicare Carriers, four Regional Home Health Intermediaries (RHHIs) and four Durable Medical Equipment Regional Carriers (DMERCs).

Table 4. Medicare Provider Sample for National Implementation

    Provider Type                 Sample Size
    -----------------------------------------
    Physicians                          5,744
    Licensed practitioners              1,781
    Other Part B providers                725
    Hospitals                           1,800
    Skilled Nursing Facilities          2,832
    Other Part A providers              4,589
    Home Health Agencies                1,443
    DME suppliers                       1,600
    -----------------------------------------
    Total                              20,514

Based on a target response rate of 80 percent and an eligibility rate of 85 percent, CMS will draw a sample of 30,168 Providers to achieve 20,514 completed surveys.


C-2 Procedures for Collecting Information

C-2.1 Study Sample

The target population for the MCPSS survey consists of all Medicare Providers served by all Medicare Contractors in the nation. These Contractors1 consist of 27 Fiscal Intermediaries, 20 Carriers, four Regional Home Health Intermediaries (RHHIs) and four Durable Medical Equipment Regional Carriers (DMERCs). Contractors with multiple service areas are treated as a single contractor. With changes in the contracting environment, we expect the number of contractors to fluctuate from one year to the next.

To meet CMS' objective of making valid comparisons between Contractors, the sample has been designed to obtain an equal number of completed questionnaires from each Contractor. We will select a sample to yield 400 completed interviews for each Contractor. For Contractors with 400 or fewer Providers, all Providers will be selected with certainty. Table 1-1 in Attachment 1 shows the Provider population size for each Provider type within each Contractor. The maximum percent error for estimates of percentages obtained from a simple random sample yielding 400 completed questionnaires will not exceed 5 percent, 95 percent of the time. For example, suppose 50 percent of Providers responded as satisfied with the service they received. We can be 95 percent confident that between 45 percent and 55 percent of the Providers are satisfied with the service. The percent error is largest for a 50 percent proportion and decreases as the proportion moves further away from a 50 percent / 50 percent split; for an 80 percent / 20 percent split, for example, the error is 4 percent. Thus, 400 completed questionnaires should provide adequate precision for Contractor-level estimates. Note that several contractors have multiple service areas. The precision given here applies to contractor-level estimates; the precision of estimates for individual service areas within a contractor can be much lower.
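The margin-of-error figures cited above follow from the standard formula for a proportion estimated from a simple random sample. The short Python sketch below reproduces them; it is purely illustrative and not part of the survey specification.

    import math

    def margin_of_error(p, n, z=1.96):
        """95 percent margin of error for a proportion p estimated from a
        simple random sample of n completed questionnaires."""
        return z * math.sqrt(p * (1 - p) / n)

    print(round(margin_of_error(0.5, 400), 3))  # 0.049 -> ~5 percent
    print(round(margin_of_error(0.8, 400), 3))  # 0.039 -> ~4 percent
    print(round(margin_of_error(0.5, 300), 3))  # 0.057 -> ~5.7 percent

With z = 1.96 the n = 300 error is about 5.7 percent; the 5.8 percent figure cited in the next paragraph corresponds to using z = 2.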

We also considered sample sizes smaller than 400. Smaller samples not only provide less precision; they also require more oversampling of smaller Provider types. For example, a sample size of 300 gives an error not exceeding 5.8 percent, which is not substantially higher than 5 percent; however, it would require more extensive oversampling of smaller Provider types, and this oversampling can further reduce the precision of the Contractor-level estimates.

The sample size of 400 will be allocated proportionately to provider types within each Contractor. In contractors with multiple service areas, the providers will first be stratified by service area and then by provider type within service area. The proportional allocation provides a representative sample of Providers across service areas and provider types and minimizes the variance of the Contractor-level estimates. The numbers under the heading "Base sample" in Table 1-1 in Attachment 1 show the proportionately allocated sample size for each provider type within each Contractor.

The proportional allocation could result in small sample sizes for several relatively small provider types. We propose to oversample these provider types to yield a minimum of 30 completed questionnaires each. In Attachment 1, the additional number of Providers needed is shown under the column headed "Oversample." Thirty responses are adequate to conduct statistical tests that detect meaningful differences between provider types within or across Contractors.

The satisfaction score is measured on a scale with six distinct intervals. The power of a statistical test is the probability of correctly rejecting the null hypothesis when it is false. If the power is inadequate, we cannot draw conclusions from the test with confidence. Sample size affects the power of a statistical test: for example, we could conclude that there is no difference between the scores of two provider types when, in fact, the samples are simply too small to detect the true difference. Assuming a standard deviation of 1.35 for the satisfaction score within each provider type, 30 completed questionnaires per provider type will provide more than 80 percent power (at a significance level of 0.05) to detect a mean satisfaction score difference of 1 between two provider types. Figure 1 shows the power as a function of sample size per provider type, with a standard deviation of 1.35, a mean score difference of 1, and equal sample sizes between provider types.

Figure 1. Power by Sample Size
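The 80 percent power figure above can be checked with a normal approximation to a two-sample comparison of means. The sketch below is a hedged illustration: the document does not name the exact test, so a two-sided test at a 0.05 significance level with equal group sizes is assumed.

    from scipy.stats import norm

    def power_two_sample(diff, sd, n_per_group, alpha=0.05):
        """Approximate power to detect a mean difference `diff` between two
        groups of size n_per_group, each with standard deviation `sd`."""
        se = sd * (2.0 / n_per_group) ** 0.5  # SE of the difference in means
        z_crit = norm.ppf(1 - alpha / 2)      # 1.96 for alpha = 0.05
        return norm.cdf(diff / se - z_crit)

    print(round(power_two_sample(diff=1.0, sd=1.35, n_per_group=30), 2))  # ~0.82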

The target overall response rate for the national survey is 80 percent. The desired precision level by provider type within Contractors is achieved by 20,514 completed questionnaires. Applying the estimated response rate of 80 percent and eligibility rate of 85 percent, we will contact 30,168 Providers (that is, 20,514 / (0.80 x 0.85)) to achieve the desired number of completes. See Table 1-1 in Attachment 1.
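As a worked check of this inflation step (all figures are the document's own):

    import math

    completes = 20514
    response_rate = 0.80
    eligibility_rate = 0.85
    # Contacts needed so that contacts x eligibility x response = completes
    print(math.ceil(completes / (response_rate * eligibility_rate)))  # 30168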

C-2.2 Survey Materials

Survey materials will follow the same design and format as those used in the Pilot phase. These include:

The Questionnaire: The questionnaire includes seven topic areas: provider inquiries, provider communication, claims processing, appeals, provider enrollment, medical review, and provider audit & reimbursement. Some of these topics do not pertain to certain Contractors and their respective providers. For example, provider enrollment, medical review, and provider audit & reimbursement do not apply to DME suppliers and DMERCs. Similarly, provider audit & reimbursement does not apply to carriers and the providers who work with them. CMS will customize the questionnaire so that each provider receives a questionnaire with topics relevant to its interaction with the Contractor.

The 2005 MCPSS Pilot survey instrument worked as intended. Providers who participated in the Pilot found the questions easy to comprehend and did not find the questionnaire too long.

One comment CMS received from Contractors was the need for information on which areas are key drivers of satisfaction; Contractors need this information to focus their performance improvement efforts.

In response to this need, and to make the survey instrument more useful for performance improvement, CMS will include one additional question on overall satisfaction with the Contractor. This overall question will enable CMS to conduct a multivariate (key driver) analysis to determine which business functions, and which processes within business functions, drive overall satisfaction.

Please see Attachment 2 for a copy of the MCPSS survey instrument.

CMS will conduct psychometric and factor analyses with the 2006 data. The factor analysis with the 2005 data was limited because the number of completed items was small.

CMS is committed to improving the survey with each round of data collection and has set aside dedicated resources to refine the survey. Given the changing contracting environment, it is important to include a core set of measures for trending purposes while also collecting data on new and topical initiatives. CMS will collect relevant measurement information from CMS staff and Contractors on a continuous basis.

Web Survey: CMS will use the Web as the primary mode of data collection. During the Pilot, 84 percent of the completed surveys were received via the Web. CMS will maintain the current formats of both the Web survey and the paper questionnaire. The Web survey includes easy-to-understand instructions and user-friendly navigation features, and will include all the instructions in the paper questionnaire. In meetings over the past months, providers and provider organization representatives indicated that they prefer surveys to be available for completion online.

As mentioned earlier, CMS has conducted usability testing to improve the functionality and usability of the Web survey.

Cover letters: The survey notification package will include two cover letters, one on CMS letterhead and another from the relevant Contractor. The letters will explain the purpose of the study and the need for the data, include a confidentiality clause, and provide the unique Provider ID and password for accessing the Web survey, as well as contact information for questions or for requesting assistance or a paper questionnaire (e.g., a toll-free phone number, a fax number and an e-mail address).

Web Instructions: A separate flyer on brightly colored paper will be included in the package to alert respondents to the Web survey. The flyer will include the Web address, the Provider's user ID and password, instructions for accessing the site, the study e-mail address and a toll-free number for assistance. Research shows that a separate flyer helps attract the respondent's attention.



C-2.3 Data Collection

The data collection steps are as follows:

  • Mail survey notification package

  • 10 days after initial mail, send a reminder/thank-you postcard

  • Start non-response follow-up (by telephone) 10 days after the reminder/thank-you postcard

  • Close data collection 16 weeks after initial mailing


Providers will be encouraged to complete the survey over the secure Web site. The cover letter will clearly state the options: complete the survey on the Web site, or print a copy of the questionnaire from the Web site and return it by mail or FAX. All Providers will also have the option to request a paper copy of the questionnaire and submit their responses via mail or FAX.

The strategy of using the Web as the main mode of data collection worked well during the pilot. The dominant mode of responding to the Pilot survey was via the Web (84 percent). Telephone contact will be the primary mode for following up with non-responders.

The following media have been set up to allow respondents to communicate with CMS during data collection:

  • Toll-free Phone: The survey vendor will maintain a toll-free telephone number as done during the Pilot to receive calls from respondents concerning any issues they have regarding the survey.

  • E-Mail Box: The survey vendor will maintain a study e-mail box. This was a popular feature during the Pilot and helped communication regarding alternative ways of submitting survey responses.

  • FAX Number: A FAX number will be available for respondents who wish to respond via this method. The FAX machine, to which inquiries or responses are sent, is located in a secure location and only authorized project staff can retrieve these documents.



C-2.4 Processing Returned Surveys

The following three criteria will be used for processing returned surveys:

  • The submission must contain the pre-coded identification number.

  • All applicable sections should be completed.

  • A section will be considered complete if at least half of the items in that section are completed; e.g., the Provider Inquiries section will be considered complete if 5 of its 10 items are completed (see the sketch below).
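A minimal sketch of the section-completeness rule follows; the data structure and names are hypothetical, but the rule itself is as stated above.

    def section_complete(responses):
        """responses: one section's item answers; None means unanswered."""
        answered = sum(1 for r in responses if r is not None)
        return answered * 2 >= len(responses)

    # Provider Inquiries with 5 of its 10 items answered counts as complete
    provider_inquiries = [4, 5, None, 3, None, 6, None, 2, None, None]
    print(section_complete(provider_inquiries))  # True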



C-2.5 Calculating Satisfaction Scores

We anticipate scoring the national implementation response data in the same manner as the Pilot data. The scoring methodology allows us to calculate respondent-level scores for Contractors, provider types and each section. Below is an explanation of how the scores will be calculated (the same method was used for the Pilot):

Contractor Score:

The weighted2 sum of ratings for all questions for all business functions across all provider types related to each Contractor divided by the total number of respondents answering the questions across all business functions for all provider types related to each Contractor

Business Function Score at the Contractor Level:

The weighted sum of ratings for all questions for a business function across all provider types related to each Contractor divided by the total number of respondents answering the questions for that business function related to each Contractor

Provider Score for Each Provider Type under Each Contractor:

The weighted sum of ratings for all questions for all business functions related to a provider type divided by the total number of respondents answering the questions for all business functions related to that provider type

Business Function Score at the Provider Level:

The weighted sum of ratings for all questions for a business function related to a provider type divided by the total number of respondents answering the questions for that business function related to that provider type
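The wording above leaves the exact denominator open to interpretation. The sketch below shows one plausible reading of the Contractor score, a weighted mean over all answered questions, which keeps the score on the rating scale; treat the denominator as an assumption to verify against the actual scoring specification.

    def contractor_score(responses):
        """responses: one (weight, ratings) pair per responding provider for
        a given Contractor; ratings lists that provider's answered items
        across all business functions. Weights are the sample weights
        described in footnote 2."""
        num = sum(w * r for w, ratings in responses for r in ratings)
        den = sum(w * len(ratings) for w, ratings in responses)
        return num / den if den else None

    # Hypothetical example: three respondents with sample weights
    data = [(1.2, [5, 4]), (0.8, [3]), (1.0, [4, 4, 5])]
    print(round(contractor_score(data), 2))  # 4.23

The provider-type and business-function scores follow the same pattern, restricting the ratings to the relevant provider type or business function.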





C-2.6 Contractor Reports

CMS set up a formal mechanism to obtain feedback from Contractors who participated in the Pilot survey. The Contractors were in general pleased with the content and level of detail provided in the final Contractor reports. Contractors indicated that the reports, particularly the satisfaction scores, were useful in identifying the business functions that needed improvement. Several Contractors also stated that the satisfaction scores confirmed what they already believed or knew to be problem service areas. In addition, Contractors agreed that the timeframe for receiving these documents (i.e., early June) was good because it helped them prepare for the next fiscal year.

The results from the national implementation will be available to all Contractors via an interactive Web-based system. Contractors can access the following information via the on-line reports:

  • Their scores at the Contractor level, provider level and business function level

  • Item level weighted frequencies

  • Verbatim and coded comments; these comments will be sanitized and will not have any identifiers.

To help identify problem spots, Contractors can view both scores and frequencies by the following parameters:

  • By state;

  • By state by urbanicity (i.e., urban, rural);

  • By state by provider type;

  • By state by urbanicity by provider type; and

  • By provider size.

The results, at all levels, will include cell sizes and standard errors. Since providers may have answered some but not all of the sections, or only some of the questions in a particular section, the cell size for calculating the scores can vary across sections of the survey. A cell size will be presented with each score so contractors know how many providers responded to each question; this provides an indication of the stability of the score. If only a few providers answered a question, the survey estimate could fluctuate considerably if we happened to survey a different set of providers. The larger the number of providers who respond to an item, the more confident we are that the survey estimate is close to the "true" answer we would have found had we not selected a sample but instead surveyed all providers. The standard errors are intended to help each contractor determine how close its score is to the average contractor score.

The reports will also include information on key drivers of satisfaction. This information will help Contractors determine which areas within each business function are key drivers of satisfaction with that business function. They will also have information on which business functions are key drivers of overall satisfaction. This information can help Contractors focus their performance improvement efforts.



C-3. Methods to Maximize Response Rates and Deal with Nonresponse

CMS has explored many issues related to increasing the saliency of the study among the provider community and using non-response follow-up strategies to maximize response rates.

The target response rate for the national implementation is 80 percent. The response rate for the Pilot was 33 percent, largely because a high proportion of the sample could not be located. Excluding the non-locatable providers, the Pilot response rate was 50 percent. CMS and its survey vendor have developed a plan to improve the locatability of the sample and hence the response rate.

CMS is undertaking four important steps in this direction:

  • Using the Claims History file to draw the sample. A provider with at least one claim in the previous 8 months will be considered an "active provider".

  • Sending all sampled records to vendors that maintain updated provider contact information, so that CMS obtains up-to-date contact information.

  • Pre-screening the entire sample to validate the sample and obtain the name of a survey contact.

  • Implementing an aggressive outreach plan.

In addition, the field period for the national implementation will be extended to 16 weeks. The extended field period, a cleaner sample frame of locatable providers, increasing the saliency of the survey, and focused non-response follow-up can help in achieving the response rate target.

However, if the response rate falls below 80 percent, CMS and its survey vendor will explore the option of conducting a non-response bias analysis. Please see C-3.3 for a detailed description of the proposed non-response bias analysis.

C-3.1 Promoting the Survey Project to Increase Saliency

CMS is taking an aggressive approach to achieving the response rate goal of 80 percent. In addition to obtaining a clean sample, it is essential to create awareness and understanding of the value and importance of the survey within the Provider and supplier communities in order to motivate participation. Ultimately, we want Providers and suppliers to view the MCPSS as a tool that will assist CMS and Contractors in identifying and implementing service improvements.

To achieve high saliency for the study, promotional activity between October 2006 and January 2007 will be intensive. We also propose a maintenance campaign between January and March 2007, as well as follow-up activity when results become available in June 2007.

The overall objective of this plan is to create awareness of the Medicare Contractor-Provider Satisfaction Survey (MCPSS) among financial and business managers employed by Medicare Providers and fee-for-service Contractors (see the audience breakdown below). To support this objective, CMS will implement a public relations campaign to generate broad coverage of the MCPSS initiative through a variety of channels:


  • The healthcare trade media serving financial and business managers employed by Medicare Providers and fee-for-service Contractors, including members of the print and Web-based media.

  • Contractor-based communications channels such as list-serves, conferences and meetings, newsletters, etc.

  • Professional organizations that serve the Provider community.

  • CMS-based channels of communication to both Providers and Contractors.



C-3.2 Follow-up with Non-respondents

During the Pilot, CMS used three non-response follow-up strategies: a paper questionnaire, a phone prompt and a telephone interview. Based on our experience, we consider telephone non-response follow-up the best method to maximize response rates. CMS will continue to use the Web survey as the main mode of data collection and telephone follow-up as the main mode of following up with non-respondents.

C-3.3 Non-response bias analysis

If response rates fall below 80 percent, CMS will conduct a nonresponse bias analysis. The purpose of this analysis is to determine whether the non-respondents are significantly different from the respondents. It will include an analysis of sample frame variables, including contractor, provider type, number of claims, dollar value of claims, size of facility (bed size and/or number of patient days), specialty type (for physicians, licensed practitioners, and medical equipment providers), and ownership type (for hospitals and skilled nursing facilities).

In the event that the response rate falls below 60 percent, CMS will select a sub-sample of non-respondents to conduct a more detailed non-response bias study. The sub-sample will include both refusals and facilities that were contacted but did not respond. Assuming a 60 percent response rate (40 percent non-response), we will draw a sample from the non-respondents to yield 450 follow-up respondents. This will provide more than 80 percent power to detect mean satisfaction score differences smaller than 0.3 between the follow-up respondents and respondents to the regular interview (that is, testing the difference between the mean scores of 450 follow-up respondents and 15,000 main interview respondents).
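This power claim can be checked with the same normal approximation used earlier, together with the 1.35 standard deviation assumed in section C-2.1; the sketch below is illustrative only.

    from scipy.stats import norm

    def power_unequal(diff, sd, n1, n2, alpha=0.05):
        """Approximate power for comparing the means of two groups of
        unequal sizes n1 and n2 with common standard deviation sd."""
        se = sd * (1.0 / n1 + 1.0 / n2) ** 0.5
        return norm.cdf(diff / se - norm.ppf(1 - alpha / 2))

    print(round(power_unequal(0.3, 1.35, 450, 15000), 3))  # ~0.996
    print(round(power_unequal(0.2, 1.35, 450, 15000), 3))  # ~0.872, still above 0.80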

This study will include a follow-up survey of the sub-sample. The follow-up survey will include only the claims processing section and the overall satisfaction question, and will be kept to about 6-7 minutes. It will also include a question on why the respondent initially refused or did not respond. We will then compare the satisfaction scores of the respondents and non-respondents, by contractor type (FI, Carrier, DMERC, RHHI), to determine whether there is a significant difference. If significant differences are found, estimates can be adjusted for nonresponse bias through weighting.

The follow-up will be by mail and telephone. The protocol will be as follows:

  • First mailing of the questionnaire, with a revised cover letter from CMS and the Contractor

  • One week later, a reminder/thank-you postcard

  • One week later, a second questionnaire

  • One week later, telephone interviews, with up to 9 additional callbacks



C-3.4 Non-response adjustment

Despite best practices, virtually all surveys experience nonresponse. The target response rate for this survey is 80 percent, and the achieved rate will most likely vary by provider type and by other provider characteristics.

One consequence of nonresponse is the potential for bias in the survey estimates, making them larger or smaller than the true statistic for all Providers. The extent of bias depends on how much those who reply differ in their satisfaction from those who do not. When response rates vary among subgroups, such as provider types, as they are likely to do, the potential for bias in survey estimates is even greater.

We will adjust the sampling weights to remove potential bias in satisfaction (and in any other substantive estimates to be produced from the survey) caused by not obtaining responses from all sampled providers. If response propensity is independent of satisfaction, no bias arises. The objective, therefore, is to use the known characteristics of the sampled providers to form nonresponse adjustment cells within which response propensity is independent of satisfaction. To the extent that this is achieved, estimates of satisfaction obtained using sampling weights adjusted for nonresponse within these cells will have smaller potential bias. There are several alternative methods of forming the cells; in forming them, we will attempt to minimize the variation in response propensity within each cell.

We plan to use Chi-Square Automatic Interaction Detector (CHAID) software to guide us in forming the cells. CHAID uses an AID-type algorithm to partition the data into subsets that are homogeneous with respect to response propensity. It first merges values of each predictor that are statistically homogeneous with respect to response propensity, keeping heterogeneous values separate. It then selects the most significant predictor (the one with the smallest p-value) as the best predictor of response propensity, forming the first branch of the decision tree. It continues applying the same process within the subgroups (nodes) defined by the "best" predictor chosen in the preceding step. The process stops when no significant predictor is found or a specified minimum node size (about 20) is reached. The procedure is stepwise and creates a hierarchical tree-like structure.
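Once the cells are formed, a weighting-class adjustment can be applied within each cell. The document does not spell out the adjustment formula, so the sketch below shows the common approach as an assumption: each respondent's base weight is inflated by the ratio of the cell's total base weight to the base weight of its respondents.

    from collections import defaultdict

    def adjust_weights(sample):
        """sample: list of dicts with keys 'cell', 'base_weight', 'responded'.
        Returns nonresponse-adjusted weights, keyed by position, for
        respondents only."""
        total = defaultdict(float)  # sum of base weights per cell
        resp = defaultdict(float)   # sum of respondents' base weights per cell
        for unit in sample:
            total[unit["cell"]] += unit["base_weight"]
            if unit["responded"]:
                resp[unit["cell"]] += unit["base_weight"]
        return {i: u["base_weight"] * total[u["cell"]] / resp[u["cell"]]
                for i, u in enumerate(sample) if u["responded"]}

The within-cell factor total/resp is exactly the adjustment factor whose size the next paragraph proposes to limit in order to control variance inflation.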

The data on the relevant characteristics of the Providers will be available from the sampling frames for both respondents and nonrespondents. These characteristics include provider type, number of claims (both volume and dollar value) and MSA/nonMSA status for all Providers, number of beds for hospitals and skilled nursing facilities, total patient days for hospitals, ownership type of the facility, physician/non-physician specialty and age, and specialty for DMERCs.

Although nonresponse adjustment should reduce bias, it can also increase the variance of estimates. Small adjustment classes and/or low response rates (that is, large nonresponse adjustment factors) may increase the variance substantially and give rise to unstable estimates. To prevent an excessive increase in variance, and thereby an adverse effect on the mean square error of the estimates, we will enforce a minimum class size and avoid large adjustment factors.

In June 2006, CMS will provide OMB a supplement with the non-response adjustment methods used in the 2006 survey.

C-4. Tests of Procedures and Methods

CMS will not test any data collection procedures during the national implementation.



C-5. Individuals Consulted

    Organization   Name                      Contact Information
    --------------------------------------------------------------------------
    CMS            David C. Clark            410.786.6843 / [email protected]
                   Alan Constantian          410.786. / [email protected]
                   Dr. Elizabeth Goldstein   410.786.6665 / [email protected]
                   Mel Ingber                410.786.1913 / [email protected]
                   Rene Mentnech             410.786.6692 / [email protected]
                   Bakeyah Nelson            410.786.5608 / [email protected]
                   Geraldine Nicholson       410.786.6967 / [email protected]
                   Eva Tetteyfio             410.786.3136 / [email protected]
                   Gladys Valentin           410.786.1620 / [email protected]
    Westat         Sherm Edwards             301.294.3993 / [email protected]
                   Pamela Giambo             240.453.2981 / [email protected]
                   Huseyin Goksel            301.251.4395 / [email protected]
                   Terita Jackson            240.314.2479 / [email protected]
                   Vasudha Narayanan         301.294.3808 / [email protected]


1 These estimates are based on October 2004 files.

2 Because not all providers will be selected for the survey and not all selected providers will respond, a sample weight will be calculated for each responding provider.



