Crosswalk (30-Day)


External Quality Review of Medicaid MCOs and Supporting Regulations in 42 CFR 438.360, 438.362, and 438.364 (CMS-R-305)


OMB: 0938-0786


Responses to Comments Received on

Federal Register Notice of Proposed Agency Information Collection CMS-R-305



CMS received four comment letters on the February 14, 2019 notice on the proposed revisions to the eight existing External Quality Review Protocols, which were last revised in 2012. The commenters were Anthem, Inc., the National Committee for Quality Assurance (NCQA), Health Services Advisory Group (HSAG), and Island Peer Review Organization (IPRO). The comments vary and CMS will respond to each.


Technical comments on the revised EQR Protocols


Protocol 1

Comment

One commenter supported CMS’s additional language describing non-duplication for the mandatory EQR-related activities, and the guidance presented in the validation of performance measurement (EQR Protocol 2) and the Information Systems Capability Assessment (ISCA).


Response

We appreciate the commenter’s support.

Comment

One commenter recommended that CMS replace or delete "study" in favor of "PIP" in Protocol 1. For example, "Activity 1: Assess the Study Methodology" would become "Activity 1: Assess the PIP Methodology"; "Activity 3: Verify Study Findings" would become "Activity 3: Verify PIP Findings"; and "Step 1: Review the Selected Study Topic" would become "Step 1: Review the Selected PIP Topic."

Response

We accept the suggested edit and made revisions throughout the protocol as appropriate.

Comment

One commenter suggested renaming "Step 2: Review the Study Question" to "Review the PIP Specific, Measurable, Achievable, Realistic, and Timely (SMART) AIM Statement" in Protocol 1, Step 2. The commenter noted that if CMS elects to adopt the suggested language, all references to "study question" should change to "SMART AIM statement."

Response

We accept changing “Step 2: Review the Study Question” to “Step 2: Review the PIP Aim statement.” While SMART is one approach, it is not the only approach. Including the term SMART could be construed as mandatory. The revised Step 2 language will clarify that, “The PIP question should…be clear, concise, measurable, and answerable.” That phrase accomplishes the same purpose as including the term “SMART aim.”

Comment

One commenter recommended moving the call-out box in Protocol 1, “How do we know if a PIP study question is clear, concise, measurable, and answerable?” under requirements for “Step 1: Review the Selected Study Topic,” as it relates to that step, but is on the next page.

Response

We will maintain the current placement and formatting as the call-out box refers to “Step 2: Review the Study Question.”

Comment

One commenter noted that in “Table 1.1, Critique of illustrative PIP study questions” the examples imply that interventions must be specified (developed) before the problem is identified. The commenter stated this may be a challenge for plans prior to conducting an analysis of the data (Step 7) and causal/barrier analysis in Step 8. Identifying an intervention/action in the question limits the MCPs to just testing that one action/activity to determine if it leads to the desired improvement.

Response

We will rephrase "illustrative" to "example." The example questions are not intended to be exhaustive, nor a template for a PIP aim statement. Instead, they are intended to provide examples of both well-stated and poorly-stated aim statements. While an intervention is unlikely to be known on day 1 of planning, an intervention should be known and stated in a PIP.

Comment

One commenter suggested adding an additional PIP question example to Table 1.1: "Do targeted interventions increase the percentage of patients diagnosed with depression and obesity who have a PHQ-9 Depression Assessment Screening score of 1-4 (minimal depression) during the measurement year?" and the following critique bullets:

  • Specifies the PIP intervention. The use of “targeted interventions” allows MCOs to have flexibility in the interventions needed to address barriers identified through conducting a causal/barrier analysis.

  • Defines the population (patients diagnosed with depression and obesity) and the time period. The use of "during the measurement period" allows for the study question to cover multiple measurement periods (i.e., Baseline, Remeasurement 1, Remeasurement 2).

  • Specifies the measure impact. The use of "increasing the percentage" and a nationally recognized depression screening assessment with a score and levels of depression defines how the impact will be measured.

Response

We disagree with including the proposed example. The additional example provided by the commenter introduces more variability into the proposed/example PIP, which may make the PIPs harder to implement and measure. This example implies that MCPs do not have to be precise in identifying an intervention or specify the measurement targets.

Comment

One commenter recommended revising a sentence in Protocol 1, Step 3 to reflect that the MCP identified the population rather than the PIP identified the population.





Response

We accept the suggested edit: “In this step, the EQRO assesses whether the MCP clearly identified the population for the PIP in relation to the study question (such as age, length of enrollment, diagnoses, procedures, and other characteristics).”

Comment

One commenter suggested editing the following step title from, “Step 5: Review the Selected Study Variables and Performance Measures,” to “Step 5: Review the Selected PIP Measures.”

The phrase "study variable" may be a confusing term to MCPs, since the improvement project is not a research study. Using the language "PIP measure" signifies objective, measurable outcomes that will be used to measure improvement. Use of the term "performance measures" may be confused with measures evaluated in the mandatory performance measure validation EQR activity.

Response

We have revised Protocol 1 to change "study variable" to "PIP variable." The term "study variable" (now "PIP variable") is defined in the call-out box. The section explains that variables for PIPs—when appropriately chosen—have implications for the design of the PIP. The PIP performance measures are then used to measure the outcomes of the PIP intervention(s).

We do not agree that using the term “performance measures” will be confused with measures evaluated in the mandatory performance measure validation EQR activity. They are separate activities with separate protocol guidance.

Comment

One commenter recommended moving the “What is a Study Variable” call-out box to “Step 6. Review the Data Collection Procedures,” as Step 5 focuses on defining the measure(s).

Response

We will maintain the current placement and formatting, but change "study variable" to "PIP variable." The term "study variable" is first used in Step 5.

Comment

One commenter recommended adding clarifying language to the following sentence, “Data availability should also be considered when selecting study variables, as more frequent access to data, such as on a monthly, quarterly, or semi-annual basis, supports continuous quality improvement (QI) and Plan Do Study Act (PDSA) efforts and can allow an MCP or state to correct or revise course more quickly, if needed.” The commenter recommended adding: “If plans collect monthly, quarterly, or semi-annual data, a methodology to ensure comparability should be used. For example, a rolling 12-month methodology” in Protocol 1, Step 5.

Response

We accept this edit, and will clarify as follows: “Data availability should also be considered when selecting PIP variables, as more frequent access to data, such as on a monthly, quarterly, or semi-annual basis, supports continuous quality improvement (QI) and Plan Do Study Act (PDSA) efforts and can allow an MCP or state to correct or revise course more quickly, if needed. If plans collect monthly, quarterly, or semi-annual data, the plan should use a methodology to ensure comparability, such as a rolling 12-month methodology.”

Comment

One commenter suggested that Protocol 1, Step 5 needs more clarification about study variables. Study variables appear to be the same as the data elements collected to calculate the performance measure rate; if this is the case, they should be a part of Step 6.

Response

If the commenter is suggesting that study variables are the same as data elements, or performance measure elements, we disagree. Study variables include data elements for measures, and may also be descriptive (that is, not specifically used for performance measurement).

Comment

One commenter suggested using consistent language throughout the Protocol 1, Step 5 section: the term “variable” was used at times and the term “indicators” was used on pp. 30-33.

Response

Please see the response to each of these uses of "indicator":

  • We changed the first use of “indicators” on page 30 in the Question Box to “variables.”

  • We maintained the second use of “indicators” on page 31. The use of “indicators of performance” is to demonstrate: “Variables in PIPs can take a variety of forms as long as the selected variables identify the MCP’s performance on the PIP questions objectively and reliably and use clearly defined indicators of performance.”

  • We revised the following sentence on page 32: “For example, measures of avoidable hospitalizations or emergency department visits can serve as indicators of the adequacy of access to preventive and primary care and effectiveness of care for acute and chronic conditions,” to “For example, measures of avoidable hospitalizations or emergency department visits can demonstrate the adequacy of access to preventive and primary care and effectiveness of care for acute and chronic conditions,”

  • On page 33, the use of “prevention quality indicators” is part of AHRQ’s terminology, and is used as an example of sources of existing measures to review when selecting performance measure(s) for a PIP.

Comment

One commenter recommended providing an example of "programmatically meaningful improvement" as used in the following sentence: "The EQRO should review the PIP methods and findings to assess whether there is evidence of statistically significant improvement that is clinically or programmatically meaningful and that may be associated with the intervention implemented as part of the PIP."

Response

We revised the sentence to remove the phrase in question to clarify the sentence: "The EQRO should review the PIP methods and findings to assess whether there is evidence of statistically significant improvement that may be associated with the intervention implemented as part of the PIP."

Comment

One commenter recommended editing the following sentence in Protocol 1, Worksheet 1.2 from, “Assess the appropriateness of the selected study topic by answering the following questions. Insert comments to explain “No” and “Not Applicable (NA)” responses,” to “Assess the appropriateness of the selected PIP question by answering the following questions. Insert comments to explain “No” and “Not Applicable (NA)” responses” for consistency.

Response

We accept this edit.

Comment

One commenter suggested moving Element 3.3 in Worksheet 1.3 to Worksheet 1.6 in Protocol 1, “Review the Data Collection Procedures,” as it references data collection approaches.

Response

We decline to make this revision in Element 3.3, but added the following revision to guide the user to Worksheet 1.6: "If data can be collected and analyzed through an administrative data system, it may be possible to study the whole population. For more guidance on administrative data collection, see Worksheet 1.6."

Comment

One commenter recommended moving Element 5.1, Study Variables to Worksheet 1.6 in Protocol 1, stating that Step 5 should be for PIP measures only.

Response

We do not understand the comment. We ask the commenter to clarify or explain why the EQRO would not assess variables for PIP measures.

Comment

One commenter suggested that the requirements for Protocol 1 Step 6, Elements 6.9-6.14 in the worksheet appear to be related to performance measure validation.

Response

We do not understand the comment. We ask the commenter to clarify or explain why Worksheet 1.6, Elements 6.9-6.14 appear to be related to performance measure validation. The purpose of the Worksheet 1.6 is to help EQROs assess whether the data collection procedures for PIPs were valid and reliable. Section 1 of Worksheet 1.6 helps assess the overall data collection procedures for the PIP; Section 2 of Worksheet 1.6 helps assess data collection procedures for administrative data sources; and Section 3 of Worksheet 1.6 helps assess data collection procedures for medical record review.

Comment

One commenter recommended adding a bullet to Protocol 1, Worksheet 1.9, Element 8.2 to specify that “interventions should be measurable on an ongoing basis using Intervention Tracking Measure (e.g., quarterly, monthly, weekly).”

Response

We accept this suggestion and added a third bullet to Worksheet 1.8, Element 8.2: “It is expected that interventions should be measurable on an ongoing basis (e.g., quarterly, monthly) to monitor intervention progress.”

Comment

One commenter suggested changing “statistical” to “quantitative” in Protocol 1, Worksheet 1.9, Element 9.4 as the term “statistical” is ambiguous and MCPs have misused statistical significance testing by applying hypothesis testing to samples that are not independent, e.g., by comparing baseline measurement year to subsequent measurement years.

Response

We disagree with the recommended edit. The initial series of questions is asking about improvement – quantitative, secular, sustained, etc. Element 9.4 asks specifically about statistical improvement. This question adds separate and unique information to the series of questions.

Comment

Protocol 1, Worksheet 1.11 uses "PIP Aim Statement." The commenter recommended that the protocol use and reference "PIP Aim Statement" rather than "Study Question" for Step 2.

Response

We accept this recommendation. The revision aligns with how we abstract and report on detailed PIPs. We initially revised “study question” to “PIP question,” and then to “PIP Aim Statement” in Step 2, and throughout where applicable. A “PIP Aim statement” is a statement of what the PIP hopes to achieve over a specified amount of time, including the amount or magnitude of change.

Comment

One commenter recommended adding, “Robustness of interventions would be enhanced by requiring that interventions are measurable using Intervention Tracking Measures to monitor progress of interventions” in Protocol 1.

Response

We disagree with this edit. The spirit of the sentence is represented throughout the protocols with repeated references to using strong measurement strategies.

Protocol 2

Comment

In Protocol 2, Activity 1, Step 3 (blue box), p. 66, one commenter noted that the language indicates that the EQRO should validate PIPs and PMV (HEDIS®) measures that were calculated by a Medicare Advantage or private accreditation entity. The commenter stated that EQRO validation of PIPs and PMV processes and results is almost as intensive as conducting the activity, and that this process seems contrary to the non-duplication concept outlined in 42 CFR §438.360.

Response

We ask the commenter to clarify their question about the text box in Protocol 2, Activity 1, Step 3. Non-duplication allows a state to use information from a comparable review process. For the validation of performance measures, the EQRO still needs to verify that measures used the correct specifications (in cases where the state requires HEDIS measures and NCQA-certified software calculates the measures). This verification is part of the ISCA for mandatory EQR-related activities in that an MCP's information systems must ensure that data received from providers are accurate and complete by (1) verifying the accuracy and timeliness of reported data, (2) screening the data for completeness, logic, and consistency, and (3) collecting data from providers in standardized formats to the extent feasible and appropriate.

Comment

In Protocol 2, Activity 1, Step 4, one commenter recommended changing "will" to "may" in the following sentence, to reduce the MCP burden when a convenience sample would not benefit the MCP: "The EQRO may review a convenience sample of records across measures to identify potential problems for MCP correction. NCQA's HEDIS Compliance Audit™ recommends selecting up to 10 difficult-to-review measures and obtain copies of at least 2 complete medical record review tools and charts per measure."

Response

We accept this edit.

Comment

One commenter requested that for the re-abstraction and validation of medical record review, CMS provide rationale for the sample of 30 medical records and at least 2 measures (Protocol 2, Activity 1, Step 4). The commenter compared this to the NCQA HEDIS medical review process, stating it involves reviewing 16 records per measure across several measure domains.

Response

We disagree with this recommendation. This guidance is consistent with the previous version of this protocol. We believe requesting a sample of 30 medical records is consistent with statistical standards.

Comment

One commenter recommended editing the following sentence by replacing "two separate operators" with "an operator": "The EQRO will directly observe the Extract, Transform, and Load (ETL) process and its replication by an operator through the process using an observation guide to confirm the activities, as well as the process where data are incomplete (e.g., a claim without a provider identification number)."

Response

We accept this edit.


Protocol 3

Comment

One commenter noted that “Numbers 1, 2, and 3 seem to omit the rest of Subpart D” in Protocol 3, Activity 5, Step 3.

Response

We do not understand this comment. We ask that the commenter be more specific.







Comments on the EQR process and regulations


Comment

One commenter recommended that CMS require External Quality Review Organizations (EQROs) to validate Value Based Program (VBP) measures and goals, and ensure that programs are actuarially sound and assess the reasonableness of withholds.


Response

It is unclear if this recommendation is requesting CMS or states to require the EQROs to validate performance measures used in VBP programs. States have discretion to choose the performance measures that they deem most appropriate for their various Medicaid and CHIP programs, including measures for VBP initiatives. Measures used in a state’s value based purchasing program would be validated as part of the mandatory EQR validation activity (corresponding to Protocol 2) if the state either requires managed care plans to include them in their Quality Assessment and Performance Improvement Programs (see 42 C.F.R. §438.330(b)(2)), or if the state calculates the VBP measures.

In addition, states may choose to contract with EQROs to calculate performance measures in addition to those specified by the state for inclusion in managed care plans' QAPI programs (see the optional EQR-related activity of performance measure calculation at 42 C.F.R. § 438.358(c)(3)). If a state has a VBP program and does not include the measures in QAPI, the state could choose to contract with the EQRO to calculate these performance measures under this optional activity, which corresponds to EQR Protocol #7. In that case, EQR Protocol #7 provides guidance to states on the calculation of additional (non-QAPI) performance measures to monitor the care provided by managed care plans to enrollees covered by Medicaid and CHIP.


Comment

One commenter made several general recommendations for improving the EQR process, unrelated to the proposed revised EQR protocols. The recommendations were:

  • Promote greater transparency in developing VBP programs to ensure that health plans can model the calculations and track their progress over time as we strive for quality improvement.

  • Identify common measure sets for VBP Programs.

  • Require VBP measures and targets to be available before the start of the measurement year.

  • Ensure VBP measures align with state quality priorities.

  • Use a standard Performance Improvement Project (PIP) template that strikes an appropriate balance of clarity and technical information.


Response

Each of these is a valuable recommendation for us to consider in developing new quality improvement technical assistance and guidance documents for states through Medicaid.gov, but they are out of the scope of these existing EQR protocols.

For the fifth recommendation, we note that Protocol 1, Validation of Performance Improvement Projects, provides CMS's expectations and standards for PIPs and provides worksheets for EQROs to use in evaluating and validating PIPs to make recommendations to states and managed care plans. We have tools for states and managed care plans related to developing oral health PIPs, available on Medicaid.gov at https://www.medicaid.gov/medicaid/benefits/dental/index.html. These tools and guidance include "how-to" guides for states and plans on developing oral health PIPs as well as a sample oral health PIP template.

Comment

One commenter requested that CMS amend the regulatory requirements for qualifications and independence of EQROs located at 42 C.F.R. §438.354 to allow accreditors to serve as EQR subcontractors for optional EQR-related activities.


Response

The regulatory requirements for qualifications and independence of EQROs are outside of the scope of this PRA package. It would require new notice and comment rulemaking in the Federal Register to propose changes to this section of the Code of Federal Regulations. We note that the requirements for EQRO independence at 42 C.F.R. §438.354(c)(2)(iv) state that an EQRO, or its subcontractors, may not review any MCO, PIHP, PAHP, or PCCM entity (described in §438.310(c)(2)) for which it is conducting, or has conducted, an accreditation review within the previous 3 years.


Comment

One commenter requested that CMS change the regulations at 42 C.F.R. §438.370 to allow states to claim enhanced federal financial participation (FFP) when non-EQROs perform EQR-related activities. The commenter cited other entities’ comments to the June 1, 2015 Medicaid and CHIP Managed Care Proposed Rule (80 FR 31098).


Response

The regulatory requirements providing for FFP for EQR reviews and EQR-related activities are outside of the scope of this PRA package. It would require new notice and comment rulemaking in the Federal Register in order to propose changes to this section of the Code of Federal Regulations. The January 24, 2003 EQR Final Rule (68 FR 3586) codified the provision at 42 C.F.R. § 438.370 providing FFP (1) at the 75 percent rate for EQR, the conduct of EQR activities, and the production of EQR results, by EQROs and their subcontractors, and (2) at the 50 percent rate for EQR-related activities performed by entities not qualifying as EQROs.



