CMS-10569 Supporting Statement B CROWNWeb CY 2019 (10-25-18)


Data Collection for Quality Measures Using the Consolidated Renal Operations in a Web-Enabled Network (CROWNWeb) (CMS-10569)

OMB: 0938-1289


Supporting Statement – Part B


Collections of Information Employing Statistical Methods


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the finalized sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The Data Validation Contractor will randomly sample 300 facilities (roughly 5% of all dialysis facilities), per contract and Quality Incentive Program (QIP) rule guidelines, for participation in the validation project. Because the sample is random, it should be representative of all included facilities nationally. The sample pool will consist of Medicare-certified dialysis facilities that are required to submit administrative and clinical data into CROWNWeb in order to meet Section 494.108(h) of the 2008 updated Conditions for Coverage for ESRD Dialysis Facilities. The 300 facilities will be asked to submit records that will be validated against CMS-designated Critical Performance Measures (CPMs). The patient sample size is limited to 10 patients per facility, per contract and QIP rule guidelines: the Data Validation Contractor will sample 10 patients (or the maximum number available) from each selected facility for CPM reviews. Historically, facility response rates have been consistently high; in the 2017 validation study, 99% of the 300 facilities selected for participation responded.
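A minimal sketch of this two-stage selection in Python, assuming hypothetical pandas DataFrames named "facilities" and "patients" that share a "facility_id" column; the actual CROWNWeb extract layout and selection tooling are not specified in this document:

    import pandas as pd

    def draw_validation_sample(facilities: pd.DataFrame,
                               patients: pd.DataFrame,
                               n_facilities: int = 300,
                               n_patients: int = 10,
                               seed: int = 2019) -> pd.DataFrame:
        """Stage 1: simple random sample of facilities; stage 2: up to 10 patients each."""
        chosen = facilities.sample(n=n_facilities, random_state=seed)
        pool = patients[patients["facility_id"].isin(chosen["facility_id"])]
        # Smaller facilities contribute every patient treated during the study period.
        return (pool.groupby("facility_id", group_keys=False)
                    .apply(lambda g: g.sample(n=min(n_patients, len(g)),
                                              random_state=seed)))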

Sample Size Estimates

Team RELI is taking a fundamentally different data stratification approach from the one used by the previous contractor in prior years of the study. We are stratifying sampled facilities by CMS Network Number and by affiliation with major dialysis organizations (DaVita, DCI, and Fresenius, with all others grouped as Independent), as shown in Tables 1 and 2, respectively.


Using the ESRD QIP rule guidelines of randomly selecting 300 facilities from the total population of eligible facilities, and randomly selecting 10 records per facility, the Validation Contractor determined the distribution of patient records by Network Number and affiliation.
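These distributions can be tabulated directly from the drawn sample. The sketch below is illustrative only, assuming a hypothetical DataFrame "sample" with "network_number" and "affiliation" columns:

    import pandas as pd

    def distribution(sample: pd.DataFrame, by: str) -> pd.DataFrame:
        # Count patient records per stratum and express each as a percentage of the total.
        counts = sample.groupby(by).size().rename("patient_records").to_frame()
        return counts.assign(
            pct_total=lambda t: 100 * t["patient_records"] / t["patient_records"].sum())

    # distribution(sample, "network_number")  # yields Table 1
    # distribution(sample, "affiliation")     # yields Table 2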




Table 1: Distribution of Patients within Network Number

Network Number    Number of Patient Records per Month    % Total Patients
1                 161                                      1.99
2                 353                                      4.37
3                 208                                      2.58
4                 363                                      4.50
5                 454                                      5.62
6                 767                                      9.50
7                 466                                      5.77
8                 573                                      7.10
9                 641                                      7.94
10                306                                      3.79
11                717                                      8.88
12                317                                      3.93
13                536                                      6.64
14                704                                      8.72
15                371                                      4.60
16                367                                      4.55
17                352                                      4.36
18                416                                      5.15
Total             8072                                   100.00





Table 2: Distribution of Patients within Affiliation

Affiliation    Number of Patient Records per Month    % Total Patients
DaVita         2839                                    35.17
DCI             297                                     3.68
Fresenius      2529                                    31.33
Independent    2407                                    29.82
Total          8072                                   100.00



Some smaller facilities had fewer than 10 patients treated during the period; in these cases, we selected all of the patients treated at the facility during the study period for validation. Table 3 depicts the methodology used when sampling patients for CPM reviews.


Table 3: Sampling Methodology for CPM Reviews

Sampling Source     Sample to be Taken
CROWNWeb Extract    Random selection of patients available, up to 10



Sampling Time Frame

The 300 facilities to be sampled for validation will be chosen within 10 days of receiving the corresponding Facility/Patient data file from CROWNWeb. The Validation Contractor will receive a CROWNWeb extract that contains all data reported into CROWNWeb during the selected second quarter time frame (April – June 2017).


This time frame was selected after considering several factors. To ensure that the validation can be completed during the period of performance, the Validation Contractor considered the reporting periods facilities are allowed for submitting clinical data into CROWNWeb. Facilities are given 60 days from the end of any particular month to enter CROWNWeb clinical data. This mandated reporting period limits how quickly we can begin validation, as we cannot obtain an extract until after the close of the data-reporting period. Another important consideration is that the ESRD QIP rule requires that facilities be given up to 60 days to submit records. Taking these factors into consideration, along with the need to ensure adequate time to perform analysis and prepare reports, we decided on the second quarter of 2017 as the validation time frame. A breakdown of the mandated reporting deadlines that were taken into consideration is displayed in Table 4.


Table 4: Mandated Reporting Deadlines

Submission Type                        Mandated Reporting Deadline
CROWNWeb Data Submission               60 days after month close (Q2 – August 31, 2017)
Facility Record Submission Deadline    60 days after request receipt, per QIP rule


Assuming the CROWNWeb data team will need at least one week to export and send the data, the Validation Contractor has estimated preliminary dates for data availability. Table 5 provides the Validation Contractor’s estimates for when the data will be received for each corresponding data set.

Table 5: Estimated Data Receipt Dates

Type of Data    Data Reporting Period    Estimated Receipt Date
CPM             April – June 2017        Mid-November through early December



Due to the tight timeframe for data abstraction, effective coordination and management as well as adherence to established schedules will be crucial to the project’s success.


2. Describe the procedures for the collection of information including:


- Statistical methodology for stratification and sample selection,


- Estimation procedure,


- Degree of accuracy needed for the purpose described in the justification,


- Unusual problems requiring specialized sampling procedures, and


- Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Please see the response to question 1 for the statistical methodology for stratification and sample selection, including the estimation procedure. As noted in the response to question 4 below, there are no unusual problems requiring specialized sampling procedures, as our experience on past CMS CROWNWeb CPM validation efforts has shown near-universal compliance by facilities with medical record requests. Data collection is expected to occur no more frequently than annually.


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield 'reliable' data that can be generalized to the universe studied.


Facilities were contacted via certified letter in March 2018 and asked to participate in the validation effort. The letter provided instructions on the types of records to be submitted and the methods for submitting records to the Validation Contractor, and it identified the patients selected for validation. Facilities that did not respond to the initial request for records were contacted by phone by the Validation Contractor and received a final request letter in April 2018. To help maximize facility response rates, our team coordinated through the CMS COR to send a town hall notification in January 2018 to increase facility exposure to the validation study. We also communicated and coordinated extensively with the large dialysis organizations to facilitate on-time medical record submission by their participating clinics. Facilities that did not respond to the request for records were subject to a 10-point reduction to their Total Performance Score (TPS). The response rate for the 2016 validation study was 100%: all eligible participating facilities among the 300 selected responded and complied with our records request. For future validations, we plan to follow the same records request methodology, follow-up, and ESRD community outreach approach we have used in the past, since it has been effective in producing the desired response rates.

Data Validation

The main objective of this analysis is to perform a single comparison of the CROWNWeb system data against CPM element data obtained from the facilities' records, leading to an evaluation of the reliability (i.e., the data are reasonably complete and accurate) and validity (i.e., the data actually represent what is being measured) of CROWNWeb data.


  • Reliability: Reliability means data are reasonably complete and accurate, meet intended purposes, and are not subject to inappropriate alteration, where:

    • Completeness refers to the extent to which relevant records are present and the fields in each record are populated appropriately, and

    • Accuracy refers to the extent to which recorded data reflect the actual underlying information.


A more formal definition of reliability is the extent to which results are consistent over time and an accurate representation of the population under study:


    • The degree to which a measurement, given repeatedly, remains the same,

    • The stability of the measurement over time, and

    • The similarity of measurements within a given time period.


  • Validity: Validity (as used here) refers to whether the data actually represent what one believes is being measured. A number of measures are commonly used to assess the validity of any measure.


Reproducibility and validation of the data were calculated in the earlier study using Cohen's Kappa (κ) because it is an overall measure of agreement between the test and reference databases. As noted in the 2016 report, a number of difficulties in the interpretation of Cohen's Kappa have been pointed out, and several statistical fixes have been proposed. Kappa not only measures agreement; it is also affected in complex ways by the distribution of data across the categories used ("prevalence") and by bias that may be inherent in the measures used. These are the problems associated with Kappa (Feinstein and Cicchetti, 1990):


  1. If the expected agreement (pe) is large, the correction process can convert a relatively high value of the observed agreement (po) into a low value of Kappa (κ).

  2. Unbalanced marginal totals produce higher values of κ than balanced totals.
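These behaviors follow directly from the standard definition of Kappa, which combines the observed agreement (po) and the chance-expected agreement (pe):

    κ = (po − pe) / (1 − pe)

When pe is large, even a high po leaves a small numerator relative to the denominator, which is exactly problem 1 above.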


Kappa is also affected both by any bias between the two measures and by the overall prevalence (the relative probability of the "Yes" and "No" responses).
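The prevalence effect can be illustrated with a short example. This is a hypothetical computation using scikit-learn's cohen_kappa_score, not actual validation data: two pairs of ratings with identical 90% observed agreement yield very different Kappa values once the marginal distribution is skewed.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical CROWNWeb values vs. medical-record ("gold standard") values.
    # Both pairs agree on 90 of 100 records.
    balanced = (["Y"] * 45 + ["N"] * 45 + ["Y"] * 5 + ["N"] * 5,
                ["Y"] * 45 + ["N"] * 45 + ["N"] * 5 + ["Y"] * 5)
    skewed = (["Y"] * 88 + ["N"] * 2 + ["Y"] * 5 + ["N"] * 5,
              ["Y"] * 88 + ["N"] * 2 + ["N"] * 5 + ["Y"] * 5)

    print(cohen_kappa_score(*balanced))  # 0.80 with balanced marginals
    print(cohen_kappa_score(*skewed))    # ~0.23 despite the same 90% agreement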

The previous approach centered on the quantification of agreement between reviewers: for example, the proportion of times two ratings of the same case agree, the proportion of times raters agree on specific categories, and the proportions of times different raters use the various rating levels.


We propose, in addition to the quantification of agreement, the development of a model of how ratings are made and why raters agree or disagree. This model will be explicit, as with latent structure models.


Analysis of variance (ANOVA) is a general statistical technique for analyzing variation in a measurement system and identifying its sources. Using ANOVA, we will estimate the Common Cause Variation, the variance of the actual measurement, which is the sum of four components:


  • The true record variation;

  • Variation within the same reviewer (repeatability);

  • Variation between different reviewers (reproducibility); and

  • Variation due to record-by-reviewer interaction (between reviewer and the "gold standard").


The calculation of variance components and standard deviations using ANOVA is equivalent to calculating the variance and standard deviation of a single variable, but it enables the multiple sources of variation that simultaneously influence a single data set to be individually quantified.
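A minimal sketch of such a variance-components calculation, using simulated data and the statsmodels ANOVA utilities; the fully crossed design (every reviewer abstracts every record twice), the column names, and the method-of-moments formulas below are illustrative assumptions, not the study's actual design:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Simulated crossed design: every reviewer scores every record twice.
    rng = np.random.default_rng(0)
    records, reviewers, repeats = 20, 3, 2
    df = pd.DataFrame(
        [(rec, rev, rng.normal(10 + 0.1 * rec + 0.2 * rev, 0.5))
         for rec in range(records)
         for rev in range(reviewers)
         for _ in range(repeats)],
        columns=["record", "reviewer", "score"])

    # Two-way ANOVA with a record-by-reviewer interaction term.
    fit = ols("score ~ C(record) * C(reviewer)", data=df).fit()
    table = sm.stats.anova_lm(fit, typ=2)
    ms = table["sum_sq"] / table["df"]  # mean square per source of variation

    # Method-of-moments variance components, clipped at zero.
    var_repeat = ms["Residual"]  # same-reviewer variation (repeatability)
    var_interact = max((ms["C(record):C(reviewer)"] - var_repeat) / repeats, 0)
    var_reviewer = max((ms["C(reviewer)"] - ms["C(record):C(reviewer)"])
                       / (records * repeats), 0)  # reproducibility
    var_record = max((ms["C(record)"] - ms["C(record):C(reviewer)"])
                     / (reviewers * repeats), 0)  # true record variation
    print(var_record, var_reviewer, var_interact, var_repeat)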


Two basic principles are evident:


  1. It is better to have a model that is explicitly understood than one which is only implicit and potentially not understood.

  2. The model should be testable.


In our interpretation of these measures, we will identify the key sources of overall disagreement between the CROWNWeb and NHSN data and the “gold standard.”


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


As noted above, the sample pool will consist of Medicare-certified dialysis facilities that are required to submit administrative and clinical data into CROWNWeb in order to meet Section 494.108(h) of the 2008 updated Conditions for Coverage for ESRD Dialysis Facilities. Previous experience on past CMS CROWNWeb validation efforts has shown near-universal compliance with medical record requests. No additional tests of procedures or methods are expected.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Khalil Abdul-Rahman, RELI Group, (410) 533-2384

Siva Bala, RELI Group, (440) 382-7415

Gladys Happi, RELI Group, (410) 504-4394

