CMS-10695 Supporting Statement Part B Research


Quality Payment Program/Merit-Based Incentive Payment System (MIPS) Surveys and Feedback Collections (CMS-10695)

OMB: 0938-1399


Supporting Statement Part B


General Instructions:



This part of the Supporting Statement must be completed for any information collection that includes surveys, research studies, or program evaluations in which the purpose is to make statistical generalizations beyond the particular respondents and to use the information for the description, estimation, or analysis of the characteristics of groups, segments, activities, or geographic areas. Surveys include both censuses (i.e., all members of the group of interest are asked to submit information) and sample surveys (i.e., only a subset of all members are asked to submit information). Research studies and program evaluations may involve the comparison of groups or testing of hypotheses about the impact or effectiveness of a program or the relation among variables, but the intention is to describe, estimate, or analyze the characteristics of groups or subgroups and draw inferences that can be generalized or applied beyond the particular observed sample. For more information, see OMB’s Standards and Guidelines for Statistical Surveys and Questions and Answers when Designing Surveys for Information Collections.1

Consistent with the Paperwork Reduction Act of 1995 and OMB’s government-wide Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility and Integrity of Information,2 the design of the information collection must be consistent with the intended use of the data. For instance, in a program evaluation context, the design should be sufficiently rigorous to guide spending, policy, or implementation decisions; in an emergency response context, it may be that the utility of the collection is driven more by the ability to get the information to decision makers quickly than by precision.

If this is an ongoing collection, clearly identify changes in the study design and methods and explain the justifications for each in each section.

Specific Instructions:


B1. Objectives

  1. List the specific project/study objectives.

The potential respondent universe for the surveys and feedback collections approved under this generic clearance may include MIPS eligible clinicians, voluntary reporters, third-party intermediaries, payers, Alternative Payment Model (APM) entities or participants within APM entities, Medicare beneficiaries and caregivers, and any other audiences we anticipate will submit data to the Quality Payment Program.

For the CY 2024 performance period/2026 MIPS payment year, we estimate that approximately 13,413 clinicians will submit data as individuals using the Medicare Part B claims collection type; approximately 10,682 clinicians will submit data as individuals using the MIPS CQM and QCDR collection types; and approximately 22,897 clinicians will submit data as individuals using the eCQM collection type.


For the HCD user testing volunteer sign-up, background information will be collected and used to target participants for the user satisfaction, product usage, and benchmarking surveys based on need. For example, when testing the cost performance category feedback, we may desire to target small to mid-size practices. Respondents for both surveys may be clinicians, practice staff, third-party intermediaries, and APM entities or participants within APM entities who have submitted data to the Quality Payment Program. For the compare tool user testing, respondents will include Medicare beneficiaries, caregivers, and members of the general public.


For the HCD user testing volunteer sign-up, we will invite participants to complete the survey and use the background information collected to select potential survey participants for the HCD user satisfaction, product usage, and benchmarking surveys. For the compare tool user testing, CMS intends to utilize a respondent pool of Medicare beneficiaries, caregivers, and members of the general public that reflects the broad population of compare tool users.


Results will not be used to make statements representative of the universe of study, to produce formal statistical descriptions, or to generalize the data beyond the scope of the sample. When applicable, the method for soliciting participation will be described fully in each collection request. Participation for all surveys and feedback collections will be voluntary.



  2. Describe the intended generalizability of results.

Information collected under this generic clearance is not designed to yield generalizable quantitative findings, but procedures to maximize response will be employed so that an appropriately sized set of participants is involved in any survey or feedback collection. Survey questions will be designed so that they are easy to answer and will be structured to be as short as possible; this includes maximizing the use of nominal and ordinal scales as response modalities.

CMS continues to make every effort to reasonably reduce complexity and burden with each performance year. As described in Supporting Statement A, to encourage participation and timely responses, the compare tool team will offer monetary incentives to respondents. We anticipate these efforts will encourage ongoing participation in these activities as participants recognize that the data and feedback they provide are utilized in future years in furtherance of this goal.

  3. Describe the appropriateness of the study design and methods given the objectives of the data collection, the target population, and the quality of the data needed for planned uses of the resulting information.

Each study conducted under this generic clearance will be crafted to meet the specific needs of the challenge presented. For example, in the case of testing key functionality of a component of the digital experience, a short, simple, minimally invasive survey tool would be developed and presented only to users who may be able to provide feedback on its effectiveness and their satisfaction with the feature. For broader studies that require input from a variety of stakeholders, the intent of the survey will be clearly defined in the communications vehicle determined necessary for outreach so each potential participant can determine for themselves its relevance and their interest in providing feedback. The expected time to complete the survey will be included in the introductory language of all materials to provide clarity and set expectations about burden. All surveys are optional, allowing users the flexibility to choose whether or not they want to be part of the study.


B2. Methods and Design. Describe the following:

  1. The target population and its estimated size.

The potential respondent universe for the surveys and feedback collections approved under this generic clearance may include MIPS eligible clinicians, voluntary reporters, third-party intermediaries, payers, Alternative Payment Model (APM) entities or participants within APM entities, Medicare beneficiaries and caregivers, and any other audiences we anticipate will submit data to the Quality Payment Program.

For the CY 2024 performance period/2026 MIPS payment year, we estimate that approximately 13,413 clinicians will submit data as individuals using the Medicare Part B claims collection type; approximately 10,682 clinicians will submit data as individuals using the MIPS CQM and QCDR collection types; and approximately 22,897 clinicians will submit data as individuals using the eCQM collection type.

For the HCD user testing volunteer sign-up, background information will be collected and used to target participants for the user satisfaction, product usage, and benchmarking surveys based on need. For example, when testing the cost performance category feedback, we may desire to target small to mid-size practices. Respondents for both surveys may be clinicians, practice staff, third-party intermediaries, and APM entities or participants within APM entities who have submitted data to the Quality Payment Program. For the compare tool user testing, respondents will include Medicare beneficiaries, caregivers, and members of the general public.


For the HCD user testing volunteer sign-up, we will invite participants to complete the survey and use the background information collected to select potential survey participants for the HCD user satisfaction, product usage, and benchmarking surveys. For the compare tool user testing, CMS intends to utilize a respondent pool of Medicare beneficiaries, caregivers, and members of the general public that reflects the broad population of compare tool users.


  2. For a census, explain why a sample survey cannot be used to reduce the cost and burden of the information collection.

The primary distribution methods for surveys under this generic clearance will be the QPP Listserv and the HCD Research Panel, neither of which offers enough information about our users to allow the team to select a sample of the audience to receive a sample survey. The QPP Listserv includes only email addresses, and the HCD Research Panel is populated mostly through users who have opted into the list via the QPP Service Center, which only provides names and email addresses. The intent of the data collection and the topic of the survey will be clearly described in any communication promoting the survey tools, giving users the opportunity to consider its relevance to their experience before deciding whether or not to provide feedback. When applicable, surveys will be built with internal logic to present users only the questions that are relevant to their circumstance based on their self-reported experience and situation. For example, MIPS-related questions will only be presented to users who indicate they participate in MIPS and would not be shown to users who indicate they are MIPS exempt. More targeted, task-oriented customer satisfaction data collection will only be presented to those completing the task in context and will not be displayed for other users for whom the survey would not be relevant.


  3. For multisite studies – site selection characteristics.

Online survey methods will be used for the HCD user testing volunteer sign-up survey and the HCD user satisfaction, product usage, and benchmarking survey. For the compare tool user testing, we anticipate using a combination of one-on-one in-depth interviews conducted on an online platform and, as feasible, in-person sessions at a designated testing facility in order to employ eye-tracking technology to identify gaze trails and the locations where users look most.


  4. For surveys employing statistical samples:

Surveys under this generic clearance will not use statistical sampling.


    a. the sampling frame used, why the frame is appropriate for the target population including any coverage issues, how the frame is updated, and how current the frame is (e.g., includes housing units as of December 31, 2004).


    b. the sampling methodology and the estimated sample size for each stratum.


    c. the precision needed for the key survey estimates (i.e., to address the primary objectives/key research questions) and sample size/power calculations (if comparing outcomes among groups, provide the estimated sample size for each group and the minimum detectable effects for the key outcomes).



  5. If nonprobability sampling methods (e.g., cut-off or model-based samples) will be used, provide a statistical justification for the methodology chosen.

Nonprobability sampling methods will not be used under this generic clearance.


  6. If planning to use pre-existing panels, provide the demographic and geographic characteristics of the population, what is known about its representativeness and the potential for non-response bias, the procedures used to recruit the sample, and the underlying response rates.

The QPP HCD Research Panel has approximately 5,000 contacts at the time of this generic clearance request. This panel has been built primarily through QPP users opting in to the contact list through the Service Center, which provides only names and email addresses. Upon being added to the panel, users are contacted with a request to provide demographic information, but this is voluntary, and few have completed it. To manage this lack of information about individual users, the team clearly defines the topic and desired demographic for each research activity during recruitment to allow panelists the chance to decide whether the activity fits their experience. Demographic information is collected as part of nearly all quantitative research so responses can be cross-referenced against self-reported demographic information.


  7. For alternative sampling approaches (e.g. snowball) or convenience samples, provide details.

These types of sampling approaches will not be used under this generic clearance.


B3. Design of Data Collection Instruments. Describe the following:



  1. How the survey or data collection instrument was developed including sources for items drawn from other surveys and testing of items created specifically for this study.

Online survey methods will be used for the HCD user testing volunteer sign-up survey and the HCD user satisfaction, product usage, and benchmarking survey. For the compare tool user testing, we anticipate using a combination of one-on-one in-depth interviews conducted on an online platform and, as feasible, in-person sessions at a designated testing facility in order to employ eye-tracking technology to identify gaze trails and the locations where users look most.


For each data collection under the Quality Payment Program, respondents are required to submit data no more than once annually. Because the surveys and feedback collections under this generic clearance are primarily intended to collect data on user experience, it is necessary to collect these data with the same frequency. In most cases, more frequent collection would provide no additional benefit. However, some respondents may provide data more frequently than annually; in this case, it is possible these respondents would respond to certain surveys or feedback collections more than once per year. We do not anticipate using sampling or statistical estimation for any survey or feedback collection.


  2. This discussion should highlight the agency’s efforts to streamline the data collection instruments to ask only questions necessary to achieve the objectives of the data collection. The use of questions should be linked to data analysis plans.

Surveys under this generic clearance, like those developed and utilized under its predecessor, will utilize survey-building technology that only presents users with information that is relevant to their experience. For example, survey respondents may be asked if they participate in MIPS or if they are MIPS exempt. Using workflow logic tools within the survey-building platform, users will only be presented questions that are relevant to their self-reported experience, as illustrated in the sketch below. All quantitative research that would take place under this generic clearance will be defined through a Research Plan, prepared in advance of the submission of a survey tool for OMB approval, that indicates how information from the survey will be used to deepen the program’s knowledge of its users for the purposes of improving the digital experience.
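The following minimal sketch illustrates the kind of skip logic described above. It is a hypothetical illustration only, not the actual survey platform’s configuration; the question IDs, wording, and answer values are invented.

```python
# Hypothetical sketch of survey skip logic: respondents only see questions
# relevant to their self-reported status. Question IDs, wording, and answer
# values are invented for illustration; the real logic lives in the survey
# platform's workflow configuration.

QUESTIONS = [
    {"id": "Q1", "text": "Do you participate in MIPS, or are you MIPS exempt?",
     "show_if": None},  # screening question: always shown
    {"id": "Q2", "text": "Which reporting option(s) did you submit for?",
     "show_if": lambda answers: answers.get("Q1") == "participate"},
    {"id": "Q3", "text": "How difficult or easy was it to submit data to QPP?",
     "show_if": lambda answers: answers.get("Q1") == "participate"},
]

def visible_questions(answers: dict) -> list:
    """Return the IDs of the questions a respondent should see, given answers so far."""
    return [q["id"] for q in QUESTIONS
            if q["show_if"] is None or q["show_if"](answers)]

print(visible_questions({"Q1": "exempt"}))       # ['Q1'] - MIPS questions hidden
print(visible_questions({"Q1": "participate"}))  # ['Q1', 'Q2', 'Q3']
```

In practice this branching is configured directly in the survey-building platform; the sketch simply makes the gating rule explicit.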


  3. For studies with multiple instruments and/or long questionnaires, provide a table that identifies which instruments and questions are used for each of the objectives described in section B1.

The team conducts two annual long questionnaires around key milestones in the program’s annual cycle. Questions marked as Milestone Surveys would appear on both instruments. The questions included below are standard year over year. The team also evaluates the evolving nature of the program and may include additional questions related to changes or developing inquiries.


Questionnaire | Question | Response Type
Milestone Surveys | For which performance years have you submitted data? | Multi-select
Milestone Surveys | Which reporting option(s) did you submit for during the performance year? | Multi-select
Submissions Experience | Did you personally submit or attest data using the QPP Portal? | Yes/No
Submissions Experience | How confident are you that CMS accurately received all of your submitted data? | Likert
Submissions Experience | How difficult or easy was it to submit or attest [category] data to QPP? | Likert
Submissions Experience | Rate how unclear or clear the content, messaging, and other information in the QPP Portal was when you were completing your submission. | Likert
Submissions Experience | How much do you agree or disagree with the following statement: It took me a reasonable amount of time to submit data. | Likert
Submissions Experience | During your submissions experience for [performance year], which, if any, of the following issues did you encounter? | Multi-select
Submissions Experience | Why did your organization choose to report an MVP? | Multi-select, Open text
Submissions Experience | How do you rate your satisfaction with the MVP reporting experience? | Likert
Submissions Experience | How difficult or easy was it to find important information on MVPs? | Likert
Submissions Experience | How prepared did you feel to report an MVP? | Likert
Submissions Experience | Which of the following is most closely aligned with your thoughts on the number of measures offered in your MVP? | Select one
Submissions Experience | Now that you have reported an MVP, which reporting option do you prefer? | Select one
Milestone Surveys (demographics) | How many clinicians participate in QPP through your practice? How many TINs participate in QPP through your practice? (If applicable) What APM model do you participate in? Do you work with a Qualified Registry or QCDR? What type of organization do you work for? | Select one
Feedback Experience | How difficult or easy was it to access your QPP Performance Feedback? | Likert
Feedback Experience | How difficult or easy was it to understand how your scores were determined? | Likert
Feedback Experience | How difficult or easy was it to find your payment adjustment? | Likert
Feedback Experience | How difficult or easy was it to understand how your payment adjustment was calculated? | Likert
Feedback Experience | Did you download your data from the QPP Portal? | Yes/No
Feedback Experience | If yes, how difficult or easy was it to find your downloadable data? | Likert
Feedback Experience | How clear or unclear was it that the score displayed was your Final Score without a payment adjustment? | Likert
Feedback Experience | How helpful or unhelpful was it to view your Final Scores before your payment adjustment was ready? | Likert
Feedback Experience | If there was something about your score that you thought was incorrect, what would you have done? | Select one or Open text



  4. Any pre-tests or developmental/methodological work (including cognitive, usability, and field tests) conducted or planned on any component of the survey and provide results, if available. (Note that pre-tests that involve collecting information from ten or more persons must be approved by OMB under the PRA.)

Pretesting may be done with internal staff, a limited number of external colleagues, and/or customers who are familiar with the programs and products. If the number of pretest respondents exceeds nine members of the public, the Agency will submit the pretest instruments for review under this generic clearance. The questions to be asked will be similar to those used by other Federal agencies in their customer surveys. If respondents are unable to supply the data, questions may be reworded.


To ensure quality while the data are being collected, special attention will be paid to: proper wording of questions to reflect intent, survey completion rates, response rates of individual survey items, and comments CMS receives regarding the survey.
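As a concrete illustration of the item-level monitoring described above, the short sketch below computes per-item response rates from an invented set of responses; the data and column names are hypothetical, not drawn from an actual instrument.

```python
# Hypothetical sketch of item-level monitoring: the share of respondents who
# answered each question among all who started the survey. Data and column
# names are invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "Q1": ["Yes", "No", "Yes", None],   # answered by 3 of 4 respondents
    "Q2": ["Agree", None, None, None],  # answered by 1 of 4 respondents
})

item_response_rates = responses.notna().sum() / len(responses)
print(item_response_rates)  # Q1: 0.75, Q2: 0.25
```

A question with an unusually low item response rate would be a candidate for rewording, consistent with the pretesting approach described above.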


  5. How the survey or study design identifies and minimizes measurement error.


B4. Collection of Data and Quality Control. Describe the following:



  1. Who will be collecting the data (e.g., agency, contractor, local health departments)?

A contractor will be responsible for data collection based on the needs defined by the agency.


  2. What is the recruitment protocol?

      1. Determine if a survey is the best method of data collection to learn information needed to address the problem statement.

      2. Determine our target audience. For example, is this a general audience topic or is it specific to a subset of users, such as Shared Savings Program participants?

      3. Based on the answer to #2, choose the best method for reaching this group. If the QPP HCD Team does not own this communication method, connect with channel owners and collaborate on outreach.

      4. Craft a clear, direct message that communicates the purpose of the data collection, the type of users we seek to hear from, and the amount of time expected to complete the survey. With this information, the target audience can determine if they would like to participate.

      5. Provide “Opt-out” option in all communication so users can remove themselves from lists for future data collection.


  3. What is the mode of data collection?

Surveys will be hosted in a FedRAMP-approved application such as Qualtrics.


  4. How are the data collection activities monitored for quality and consistency (e.g., interviewer training).

The team has developed standards and practices for all external data collection that are introduced to all researchers as part of the onboarding process to the team. This information is available on the team’s Confluence space. The Research Lead is responsible for ensuring quality and consistency across recruitment efforts.


  5. What data evaluation activities are planned as part of monitoring for quality and consistency in this collection, such as re-interviews?

Quantitative data collected for customer feedback and satisfaction is one prong of a mixed-methods approach that includes qualitative interviews and Service Center ticket analysis, among other research activities, which are used to validate one another. Data collected under this generic clearance are a single component of understanding the needs, attitudes, and behaviors of our users, and by incorporating additional methods we can assure quality and consistency across all activities.





B5. Response Rates and Potential Nonresponse Bias. Describe the following:



    1. Formulas to be used for calculating unit and item response rates. Response rates should include all stages of response. For example, pre-existing panel studies should include initial recruitment non-response in addition to estimated completion rate.

Surveys may be distributed via several channels, including the QPP User Panel, the QPP Listserv, the Small Practice Newsletter, the SSP Newsletter, and the QPP homepage, with other potential outlets still available. Channels will be selected based on applicability to the subject of the survey. The team does not calculate response rates and accepts nonresponse bias as a potential risk given the lack of information we have about each contact in our pool. All feedback received through quantitative surveys is one component of understanding users, leading us to apply a mixed-methods approach to user satisfaction that reduces the risk of nonresponse bias.


    2. For previously approved surveys, provide unit response rates since the last OMB approval.

Long questionnaires approved under the last OMB approval receive an approximately 10% response rate when distributed to the QPP HCD User Panel.


    3. Estimates of expected response rates and the basis for the expectation (e.g., estimates from similar studies or pretests).

We estimate a 10%-20% response rate, depending on the survey distribution type and desired target audience.


    4. Methods that will be used to attain the expected response rates (do not repeat information about incentives provided in Part A).

Response rates will not be calculated.

    5. Procedures for handling nonresponse (e.g., imputation, weight adjustment) when creating survey estimates.

Incomplete surveys and responses will still be collected and included in the analysis wherever any information was provided by the user. Insights are broken down by demographics to show important trends within groups.

    6. Nonresponse bias analyses plans, including the methods that will be used and the specific information that is available to assess potential bias.

Before conducting the analysis, we can identify potential sources of nonresponse bias by looking at previous studies. These could include demographic factors, geographic location, or behavioral characteristics that may influence the likelihood of survey participation. We compare the respondent population to nonrespondents to check for major discrepancies.
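The sketch below illustrates one simple form this respondent-versus-nonrespondent comparison could take, assuming pandas and invented data; the categories and the 10-percentage-point flagging threshold are illustrative choices, not an established CMS rule.

```python
# Hypothetical sketch of comparing respondent demographics against the full
# panel; data, categories, and the flagging threshold are invented.
import pandas as pd

panel = pd.DataFrame({"practice_size": ["small"] * 60 + ["large"] * 40})
respondents = pd.DataFrame({"practice_size": ["small"] * 5 + ["large"] * 7})

panel_dist = panel["practice_size"].value_counts(normalize=True)
resp_dist = respondents["practice_size"].value_counts(normalize=True)

# Categories whose respondent share deviates notably from the panel share are
# flagged as possible signals of nonresponse bias worth investigating.
comparison = pd.DataFrame({"panel": panel_dist, "respondents": resp_dist})
comparison["gap"] = (comparison["respondents"] - comparison["panel"]).abs()
print(comparison[comparison["gap"] > 0.10])
```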


B6. Production of Estimates and Projections. Describe the following:



  1. Whether the estimates produced are for official external release by the agency or for internal use only, e.g., for program management and improvement.

  2. Estimation methods that will be applied to the survey data to develop target population estimates for use internally and/or for dissemination. Also, the use of any biased estimators must be identified and justified.

  3. As applicable, the methodology to measure sampling error and estimation error.

  4. The weights applied to the survey data to calculate target population estimates; or if an alternative method of sample design is utilized (e.g., ratio estimation), any evaluations to ensure the method results in population estimates of high quality.

  5. Other activities (e.g., use of auxiliary data) that will be utilized in conjunction with the survey data to improve the quality of the estimates.

  6. If the survey data are collected with an intention of developing model-based estimates of target population characteristics or projections of future values, explain the methods/models that will be applied to the survey data.


B7. Data Handling and Analysis.



  1. Procedures for editing to mitigate or correct detectable errors, including checks built into computerized instruments.

Data are collected and analyzed using FedRAMP-approved tools that include features to evaluate responses and support error detection.


  2. Procedures to minimize errors due to data entry, coding, and data processing.

The research team collaboratively manages quantitative activities covered under this generic clearance to monitor and review data at each step of the process to ensure accuracy and minimize errors.


  3. Documentation that will be released to the public to improve understanding of how to properly interpret, analyze, and evaluate information from the collection.

Data collected by the QPP HCD team to understand user needs, attitudes, and behaviors are not released to the public; they are used for internal system feature development and road mapping.


  4. Describe the plans for analyzing the data including:

    a. Methods to be used for statistical tests to address the needs and uses of the information as described in Part A.

We will utilize a combination of descriptive statistics and inferential statistics to address the needs and uses of the information gathered from email surveys and intercept surveys. Descriptive statistics include counts, percentages, and averages where appropriate.

    b. Complex analytical techniques that will be used.

We will utilize cross-tabulation analysis to examine responses along self-identified demographics such as practice size, performance pathway, and other relevant factors; a brief illustrative sketch follows this list. This will allow for a comprehensive understanding of how satisfaction levels vary among different user segments.

    c. Qualitative analysis plans.

We will conduct thematic analysis on open-ended survey responses to identify recurring themes and sentiments among customers. This qualitative approach will provide deeper insights into the reasons behind customer satisfaction or dissatisfaction.

    d. How the information collected will be used or interpreted in conjunction with other sources of information.

We will integrate survey data with website analytics and user behavior data to gain a holistic understanding of customer satisfaction. By correlating survey responses with metrics such as website traffic, bounce rates, and downloads, we can identify areas for improvement on the website that directly impact users. Data from surveys can also be used in conjunction with user interviews to support claims.
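To make these analysis steps concrete, the following simplified sketch walks through descriptive statistics, a cross-tabulation by self-reported practice size, and a naive keyword-based pass over open-ended comments as a stand-in for fuller thematic coding. All data, column names, and keywords are hypothetical.

```python
# Simplified sketch of the analysis plan above. All data, column names, and
# keywords are invented; the real analyses run on collected survey data.
import pandas as pd

df = pd.DataFrame({
    "satisfaction": [5, 4, 2, 5, 3, 1],  # 1-5 Likert scale
    "practice_size": ["small", "small", "large", "mid", "large", "small"],
    "comment": ["easy to submit", "", "portal was confusing",
                "clear feedback", "slow portal", "confusing scores"],
})

# Descriptive statistics: counts, percentages, and averages.
print(df["satisfaction"].describe())
print(df["practice_size"].value_counts(normalize=True))

# Cross-tabulation of satisfaction by self-reported practice size.
print(pd.crosstab(df["practice_size"], df["satisfaction"]))

# Naive keyword-based theme tagging of open-ended responses, standing in for
# the fuller thematic coding a researcher would perform.
themes = {"confusing": "clarity", "slow": "performance", "easy": "usability"}
df["theme"] = df["comment"].apply(
    lambda c: next((t for k, t in themes.items() if k in c), "other"))
print(df["theme"].value_counts())
```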




B8. Contact Person(s).


Provide the name and e-mail address of the lead individual(s) who can answer questions about the statistical aspects of the survey and the name(s) of the agency unit(s), contractor(s), grantee(s), or other person(s) who will actually collect, process, and/or analyze the information for the agency.


CMS will obtain information from statisticians or other experts in the development, design, conduct, and analysis of surveys and feedback collections, when appropriate. This expertise will be available from CMS staff or contractors. The names and contact information of persons consulted will be provided to OMB at the time the surveys or feedback collections are submitted.


Please contact the following CMS staff regarding the statistical and methodological aspects of the design or for agency information: Robert Dimas ([email protected]) for HCD and Julie Johnson ([email protected]) for the Compare Tool.


Attachments

  • Agencies are welcome to submit as attachments any documents that clearly outline some of the above, such as IRB approvals, evaluation design reports, analysis plans, etc. In the text, the agency should provide a summary and direct the reviewer to the attached document.



1 https://www.whitehouse.gov/sites/whitehouse.gov/files/omb/assets/OMB/inforeg/pmc_survey_guidance_2006.pdf

2 67 FR 8452 (Feb. 22, 2002), available at https://www.federalregister.gov/documents/2002/02/22/R2-59/guidelines-for-ensuring-and-maximizing-the-quality-objectivity-utility-and-integrity-of-information
