Attachment T- CHIPRA Quality Demonstration Evaluation Research Questions

Evaluation of the Children's Health Insurance Program Reauthorization Act of 2009 (CHIPRA) Quality Demonstration Grant Program

OMB: 0935-0190


Mathematica Policy Research






Appendix T

Specific Evaluation Questions
for All Categories



Category A: Evaluation Questions

Note: Potential evaluation questions from the HHS solicitation are included in regular font; additional evaluation questions developed by the national evaluation team are italicized.


Evaluation Question


  1. Was the grantee able to collect and report on the full set of core measures?

  1. To what extent were pediatric quality measures already being collected through an existing state health information exchange or other means?


  1. Which, if any, measures was the grantee unable to collect and report on? Why? What would need to happen to enable collecting and reporting on these measures?


  1. How complete and faithful to CMS specifications were the core measures that were collected and reported on?


  1. How often were the core measures collected?


  1. What was the cost of collecting and reporting on core measures (including provider as well as State costs)?


  1. To what extent is the collection of core measures sustainable after the grant’s end and replicable by others?


  1. What does the program look like in a steady state?


  1. What is required to maintain the program?


  1. What are the prerequisites for successful replication?


  1. Which elements of the HIT are essential to achieving the same results?


  1. How did the grantee collect data for and generate the core measures?

  1. How did the state implement Category A?


  1. Which resources (e.g., cross-state workgroups) played the most critical role?


  1. To what extent did stakeholders have input on how Category A was implemented?


  1. Were data use and end user data agreements established for all necessary providers/sites?


  1. Which, if any, measures were easy to collect, and why?


  1. What problems were encountered in collecting and reporting on the core measures? How were they addressed? How could other States avoid such problems?


  1. What data infrastructure limitations/barriers were encountered in testing the measures and how were they overcome?


  1. What changes to data infrastructure or reporting systems were needed?


  1. What kind of technical assistance was obtained? From whom? Who received the technical assistance (e.g., State personnel, providers)? What technical assistance was critical to the ability to collect and report on the core measures?


  1. Did the grantee integrate data collection for core measures with other data collection activities, and if so, how?


  1. Did the collection and reporting of core measures displace any pre-existing quality measurement or improvement activities?


  1. How did stakeholders use core measures?

  1. How are HEDIS quality measure and/or CAHPS patient satisfaction data currently used, and by whom?


  1. Who used the core measures and how were they used?


  1. Did all stakeholders endorse measures and associated reports?


  1. Which stakeholders used which reports for what purpose? With what measurable impact?


  1. Who prepared the reports?


  1. What did they report on?


  1. What audiences were the reports tailored to?


  1. To whom were the reports distributed?


  1. How did the target audience respond to the reports?


  1. Who analyzed the core measures and decided what quality improvement activities to undertake?


  1. What quality improvement activities were undertaken?


  1. Who implemented the quality improvement activities?


  1. How were the core measures used to construct the incentive?


  1. How was it decided which of the core measures to use in constructing the incentive and how to weight them?


  1. What were providers’ reactions to this use of the core measures? Was consensus achieved, and if so, how?


  1. What is the impact of the core measures on improving the child health care delivery system?

  1. Did the core measures inform the state’s quality strategies related to children’s health care, and if so, how?


  1. Did the collection and reporting of the core measures have an impact on any other quality measurement activities? If so, what was the impact?


  1. How useful were the core measures in assessing program quality and managing the Medicaid and/or CHIP program?


  1. Were they useful to measure improvement over time?


  1. Were they useful to compare provider performance?


  1. Were they useful to compare with other payers or States?


  1. What would make them more useful?


  1. Did the core measures increase evidence-based decision making by consumers, payers, providers, the State, or other stakeholders?


  1. Did the core measures meet stakeholders’ needs for reporting on child health access and quality in Medicaid and CHIP? If not, how could these needs be met?


  1. What has been the impact on child health access and quality of any reporting, payment, quality improvement, or other activities based on the core measures?


  1. What was the impact of the core measures on health care for children not enrolled in Medicaid or CHIP?


  1. What were the unanticipated impacts, if any, of collecting and using the core measures (e.g., decreased provider participation in Medicaid and/or CHIP)?




Category B: Evaluation Questions

Note: Potential evaluation questions from the HHS solicitation are included in regular font; additional evaluation questions developed by the national evaluation team are italicized.


Evaluation Question


  1. What kind of HIT or HIT enhancements were designed to improve the quality of children’s health care and/or reduce costs?

  1. What hardware and software was used?


  1. What was the functionality of the HIT?


  1. What systems were connected by the HIT?


  1. What providers had access to the HIT?


  1. What kind of information was communicated?


  1. Did the design specifically take into consideration children with special health care needs?


  1. To what extent is the HIT sustainable after the grant’s end and replicable by others?


  1. What does the program look like in a steady state?


  1. What is required to maintain the program?


  1. What are the prerequisites for successful replication?


  1. Which elements of the HIT are essential to achieving the same results?


  1. How did the grantee, its partners, and its sub-contractors implement the HIT?

  1. Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?


  1. To what extent did previous knowledge, skills, or experience related to HIT provide support for the current project?


  1. To what extent did the Governor, Medicaid director, HHS director, and key stakeholders provide support for the current project?


  1. What were the critical baseline features of the state’s delivery system for children in Medicaid/CHIP?


  1. What were the critical baseline features of the state’s HIT infrastructure?


  1. How did interventions under other grant categories contribute to the success of the Category B HIT intervention?


  1. How many and what types of practices participated in the HIT effort under this category?


  1. How many and what types of children had clinical data made available to participating providers?


  1. What were the start-up and on-going costs associated with HIT implementation (including provider as well as State costs)?


  1. What incentives, if any, were used to promote adoption and use of the HIT? Were they effective?


  1. How else was adoption and use of the HIT promoted? How did actual HIT adoption and use compare with the demonstration project’s goals?


  1. What kind of technical assistance or training was obtained? From whom? What quantity? Who received the technical assistance or training? What technical assistance or training was critical to implementation of the HIT?


  1. How was implementation of the HIT monitored? What measures were used and what was learned from them?


  1. What systems of quality assurance were used? Were they effective?


  1. Did the grantee integrate the HIT with other state or provider HIT activities, and if so, how?


  1. Was the HIT implemented as planned? If not, why not? What kind of adjustments had to be made?


  1. What implementation problems were encountered (e.g., delays, incompatible systems or other technical problems, privacy issues, cost overruns)?


  1. Were any of these problems unique to the pediatric setting?


  1. How were they addressed?


  1. How could other States avoid such problems?


  1. Was the implementation plan adequate? If not, what elements were missing?


  1. How was the HIT used?

  1. Who conducted data entry?


  1. Who aggregated/analyzed data?


  1. With whom were data or analyses shared?


  1. Who used the data or analyses?


  1. What was the goal of using the data or analyses (e.g., reducing errors or duplication, increasing access, continuity, coordination)?


  1. How were they in fact used?


  1. Were the data accurate and complete enough to use them for their intended purpose?


  1. What quality improvement/cost containment activities were undertaken as a result of the HIT?


  1. Who implemented the quality improvement/cost containment activities?


  1. How were the results of quality improvement/cost containment activities monitored? What measures were used and what was learned from them?


  1. What was the impact of the HIT on the health care quality of children enrolled in Medicaid or CHIP?

  1. Did partnering providers gain the knowledge and skills to use the new HIT tools and system linkages?


  1. Did partnering providers actually use the new HIT tools and system linkages in the development and sharing of care plans?


  1. Did patients and families become more satisfied with the care received?


  1. Did the project improve the comprehensiveness of patient records (e.g., increase the number of patients who had ER data in their provider’s EMR)?


  1. Did the project improve children’s access to health care?


  1. Did the project reduce the chances of children experiencing a medical error?


  1. Did the project improve the timeliness of children’s health care?


  1. Did the project increase the delivery of effective children’s health care?


  1. Did the project increase rates of behavioral health screening and visits to mental health specialists (if applicable)? Did it decrease the time elapsed between the referral and the visit?


  1. Did the project reduce hospital admissions, ED use, and/or hospitalizations for ambulatory care-sensitive conditions?


  1. Did the project reduce redundant tests?


  1. Did the project improve the patient-/family-centeredness of children’s health care?


  1. Did the project improve the coordination of care (e.g., increase the number of providers who were informed of care a child received from another provider)?


  1. Did the project have an impact on efficiency (e.g., decrease inappropriate health services, decrease duplication of services)?


  1. Was the cost of care per participating child reduced?


  1. Did the HIT result in cost savings, and if so, who received the benefit?


  1. Was it sufficient to offset the cost of implementing the HIT?


  1. What elements of the model were responsible for the cost savings?


  1. Did the project reduce health care disparities?


  1. Did the project increase evidence-based decision-making by consumers, payers, providers, the State, or other stakeholders?


  1. What was the impact of the HIT on health care for children not enrolled in Medicaid or CHIP?


  1. What were the unanticipated impacts, if any, of the HIT?


  1. Which aspects of the HIT were largely responsible for its impact? Which aspects are essential to achieving the same results?


  1. How long must the HIT be in effect to begin demonstrating results?


  1. Did the model HIT increase transparency and consumer (youth/family) choice? (For consumer-facing HIT only.)

  1. Did consumers use the HIT?


  1. What proportion of consumers who had the HIT available to them used it?


  1. What were the characteristics of consumers who used the HIT? Of those who did not use the HIT?


  1. For what purpose did consumers use the HIT?


  1. Did consumers find the HIT useful? Was the EHR/PHR easy to use?


  1. Did consumers make better informed decisions based on information from the HIT?




Category C: Evaluation Questions

Note: Potential evaluation questions from the HHS solicitation are included in regular font; additional evaluation questions developed by the national evaluation team are italicized.


Evaluation Question


  1. What was the provider-based model of care that was implemented?

  1. Who was involved in planning the provider-based model of care? Over what period of time?


  1. What was the level of cooperation among stakeholders, and how was it maintained?


  1. Was a stakeholder collaborative framework used to design, implement, and sustain the provider-based model of care?


  1. What practices work best to encourage provider participation as well as collaboration among participating providers, payers, and stakeholders?


  1. What were the most common benefits resulting from such collaboration and did these benefits extend beyond the particular provider-based model under review?


  1. What specific strategies were planned to improve quality?


  1. What provider-based model was implemented? (e.g., detailed description, including PCMH definition/standards used)


  1. What types and amounts of payments were offered to participating providers? How did this differ from the prior payment approach?


  1. What types of resources and technical assistance were available to practices participating in the Learning Collaborative (if applicable)?


  1. How was the provider-based model to improve health care quality implemented?

  1. Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?


  1. To what extent did previous knowledge, skills, or experience related to provider-based models (e.g., PCMH) and/or other quality improvement approaches provide support for the current project?


  1. To what extent did the Governor, Medicaid director, HHS director, and key stakeholders provide support for the current project?


  1. What were the critical baseline features of the state’s delivery system for children in Medicaid/CHIP?


  1. How did interventions under other grant categories contribute to the success of the Category C intervention?


  1. How many and what types of practices implemented the provider-based model? (e.g., implemented a PCMH, participated in a learning collaborative, received technical assistance, etc.)


  1. How many and what types of children received services through the new provider-based model?


  1. What incentives, if any, were used to promote implementation? How else was implementation promoted?


  1. What kind of technical assistance or training was obtained? From whom? What quantity? Who received the technical assistance or training? What technical assistance or training was critical to implementation of the provider-based model?


  1. How many and what types of practices participated in the Learning Collaborative? What types of content were delivered? How many sessions were held? (if applicable)


  1. How was implementation of the provider-based model monitored? What measures were used and what was learned from them?


  1. What were the start-up and on-going costs associated with implementation of the provider-based model (including provider as well as State costs)?


  1. Did the grantee integrate the provider-based model with other state or provider quality improvement activities, and if so, how?


  1. Was the provider-based model implemented as planned? If not, why not? What kind of adjustments had to be made?


  1. What implementation problems were encountered? Were any of these problems unique to the pediatric setting? How were they addressed? How could other States avoid such problems?


  1. Was the implementation plan adequate? If not, what elements were missing?


  1. To what extent is the provider-based model sustainable after the grant’s end and replicable by others?


  1. What does the program look like in a steady state?


  1. To what extent is the State prepared to expand implementation of this model?


  1. What is required to maintain the program?


  1. What are the prerequisites for successful replication?


  1. Which elements of the model are essential to achieving the same results?


  1. What was the impact of the provider-based model on children’s health care quality?

  1. Did partnering providers gain knowledge of the provider-based interventions being implemented (e.g., PCMH concept, T.A. availability, etc.)?


  1. Did partnering providers believe that PCMHs could improve quality of care for children?


  1. Did partnering providers gain the skills to implement the model? (e.g., gain competencies in population management tools, care coordination, evidence-based care, systems-based quality and safety, leadership, family & community engagement, advocacy, and increasing access to care)


  1. Did partnering providers have the motivation to implement the model?


  1. Did partnering providers believe that sufficient incentives were offered to cover the cost of making practice transformations?


  1. Did partnering providers increase their “medical homeness”?


  1. Did partnering providers report high satisfaction with the learning collaborative and/or technical assistance received?


  1. Did partnering providers make changes as a result of participation in the learning collaborative (if applicable)?


  1. Did partnering providers use practice-level data for quality improvement?


  1. Did partnering providers have the infrastructure to track the provider-based model’s impact on quality and health outcomes?


  1. Did patients and families believe that the provider-based model could improve the quality of care?


  1. Did patients and families perceive changes related to the new provider-based model? (e.g., changes in values, systems, principles, operating characteristics in line with medical home concepts, and increased shared decision-making, clinician compassion, coordination of care, cultural sensitivity, access)


  1. Did patients and families develop a better understanding of their conditions and how to manage them?


  1. Did patients and families receive help arranging care or other services?


  1. Did patients and families become more involved in care decisions?


  1. Did patients and families become more satisfied with the care received?


  1. Did the project improve children’s access to health care?


  1. Did the project reduce the chances of children experiencing a medical error?


  1. Did the project improve the timeliness of children’s health care?


  1. Did the project increase the delivery of effective children’s health care?


  1. Did the project increase EPSDT rates (if applicable)?


  1. Did the project increase immunization rates (if applicable)?


  1. Did the project increase rates of behavioral health screening and visits to mental health specialists (if applicable)? Did it decrease the time elapsed between the referral and the visit?


  1. Did the project reduce hospital admissions, ED use, and/or hospitalizations for ambulatory care-sensitive conditions?


  1. Did the project reduce redundant tests?


  1. Did the project increase the use of community-based services and social services (if applicable)?


  1. Did the project decrease the rate of prescriptions for psychotropic drugs? (for states targeting children with severe emotional disturbances)


  1. Did the project improve the patient-/family-centeredness of children’s health care?


  1. Did the project improve the coordination of care (e.g., increase the number of providers who were informed of care a child received from another provider)?


  1. Did the project have an impact on efficiency (e.g., decrease inappropriate health services and, if applicable, psychotropic drug use; decrease duplication of services)?


  1. Was the cost of care per participating child reduced?


  1. Did the provider-based model result in cost savings, and if so, who received the benefit?


  1. Was it sufficient to offset the cost of implementing the provider-based model?


  1. What elements of the model were responsible for the cost savings?


  1. Did the project reduce health care disparities?


  1. Did the project increase evidence-based decision-making by consumers, payers, providers, the State, or other stakeholders?


  1. What was the impact of the provider-based model on health care for children not enrolled in Medicaid or CHIP?


  1. What were the unanticipated impacts, if any, of the provider-based model?


  1. Which aspects of the provider-based model were largely responsible for its impact? Which aspects are essential to achieving the same results?


  1. What practice barriers and facilitators affect the process of transformation into a medical home?


  1. How long must the provider-based model be in effect to begin demonstrating results?




Category D: Evaluation Questions

Evaluation Question


  1. Was the grantee able to get the model pediatric EHR adopted and used?

  1. How did the pediatric EHR intersect with CHIPRA demonstration activities in other categories?


  1. Who was involved in promoting adoption and use of the model pediatric EHR? What roles did they play? Were all the important stakeholders included?


  1. What incentives, if any, were used to promote adoption and use of the model pediatric EHR? Were they effective?


  1. How else was adoption and use of the model pediatric EHR promoted?


  1. How was adoption and use of the model pediatric EHR monitored? What measures were used and what was learned from them?


  1. How did actual model pediatric EHR adoption and use compare with the demonstration project’s goals?


  1. Who adopted the model pediatric EHR?


  1. What were the characteristics of providers who adopted and who chose not to adopt the model pediatric EHR?


  1. Did any providers that decided to adopt the model pediatric EHR fail to implement it, and if so, why?


  1. What were the start-up and on-going costs associated with promoting the adoption and use of the model pediatric EHR?


  1. To what extent is the model pediatric EHR sustainable after the grant’s end and replicable by others?


  1. What is required to maintain the program?


  1. What are the prerequisites for successful replication?


  1. How was the model pediatric EHR implemented by providers?


  1. What hardware and software was used?


  1. What aspects of the model pediatric EHR, if any, did not get implemented?


  1. What systems were connected to the model pediatric EHR?


  1. Was the implementation plan adequate? If not, what elements were missing?


  1. Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?


  1. How was implementation of the model pediatric EHR monitored? What measures were used and what was learned from them?


  1. Was the model pediatric EHR implemented as planned? If not, why not? What kind of adjustments had to be made?


  1. What implementation problems were encountered (e.g., delays, incompatible systems or other technical problems, privacy issues, cost overruns)?


  1. Were any of these problems unique to the pediatric setting?


  1. How were they addressed?


  1. How could other States avoid such problems?


  1. What kind of technical assistance was obtained? From whom? Who received the technical assistance? What technical assistance was critical to implementation of the model pediatric EHR?


  1. What systems of quality assurance were used? Were they effective?


  1. What were the start-up and on-going costs associated with the model pediatric EHR implementation?


  1. Did the Grantee integrate the model pediatric EHR with other State or provider Health IT systems or activities, and if so, how?


  1. How were data from the model pediatric EHR used?

  1. What quality improvement/cost containment/consumer empowerment activities were undertaken as a result of the model pediatric EHR?


  1. Were data from the EHR used to report on the core quality measure set for children’s health care or to demonstrate meaningful use for the Recovery Act Medicaid Health IT incentive payments?


  1. Who implemented the quality improvement/cost containment/consumer empowerment activities?


  1. How were the results of quality improvement/cost containment/consumer empowerment activities monitored? What measures were used and what was learned from them?


  1. What was the impact of the model pediatric EHR on children’s health care quality?

  1. Did the model pediatric EHR improve children’s access to health care?


  1. Did the model pediatric EHR reduce the chances of children experiencing a medical error?


  1. Did the model pediatric EHR improve the timeliness of children’s health care?


  1. Did the model pediatric EHR increase the delivery of effective children’s health care?


  1. Did the model pediatric EHR improve the patient-/family-centeredness of children’s health care?


  1. Did the model pediatric EHR have an impact on efficiency (e.g., decrease inappropriate health services, decrease duplication of services)?


  1. Did the model pediatric EHR result in cost savings, and if so, who received the benefit?


  1. Was it sufficient to offset the cost of implementing the model pediatric EHR?


  1. What elements of the model were responsible for the cost savings?


  1. Did the model pediatric EHR increase evidence-based decision making by consumers, payers, providers, the State, or other stakeholders?


  1. What were the unanticipated impacts, if any, of the model pediatric EHR?


  1. Which aspects of the model pediatric EHR were largely responsible for its impact? Which aspects are essential to achieving the same results?


  1. How long must the model pediatric EHR be in effect to begin demonstrating results?


  1. Did the model pediatric EHR increase transparency and consumer (youth/family) choice?

  1. If the model pediatric EHR contained a personal health record portal, what were the characteristics of those who used it, how was it used, and how was it perceived?


  1. Did consumers make decisions based on information from their EHR?



Category E: Evaluation Questions

Evaluation Question


  1. What was the model that was implemented?

  1. Who was involved in planning the model? Over what period of time?


  1. What was the level of cooperation among stakeholders, and how was it maintained?


  1. Was a stakeholder collaborative framework used to design, implement, and sustain the model?


  1. What practices work best to encourage provider participation as well as collaboration among participating providers, payers, and stakeholders?


  1. What were the most common benefits resulting from such collaboration and did these benefits extend beyond the particular model under review?


  1. What specific strategies were planned to improve quality?


  1. How was the model to improve health care quality implemented?

  1. Was the implementation plan adequate? If not, what elements were missing?


  1. Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?


  1. What incentives, if any, were used to promote implementation? How else was implementation promoted?


  1. Was the model implemented as planned? If not, why not? What kind of adjustments had to be made?


  1. What implementation problems were encountered? Were any of these problems unique to the pediatric setting? How were they addressed? How could other States avoid such problems?


  1. What kind of technical assistance was obtained? From whom? Who received the technical assistance? What technical assistance was critical to implementation of the model?


  1. How was implementation monitored? What measures were used and what was learned from them?


  1. What were the start-up and on-going costs associated with implementation (including stakeholder as well as State costs)?


  1. Did the grantee integrate the program with other State or provider quality improvement activities, and if so, how?


  1. To what extent is the model sustainable after the grant’s end and replicable by others?


  1. What does the program look like in a steady state?


  1. To what extent is the State prepared to expand implementation of this model?


  1. What is required to maintain the model?


  1. What are the prerequisites for successful replication?


  1. Which elements of the model are essential to achieving the same results?


  1. What was the impact of the model on children’s health care quality?

  1. Did the model increase knowledge of providers or stakeholders?


  1. Did the model help providers or stakeholders acquire new skills?


  1. How did Category E initiatives contribute to the overall impacts achieved by a state?


  1. Did the model improve children’s access to health care?


  1. Did the model reduce the chances of children experiencing a medical error?


  1. Did the model improve the timeliness of children’s health care?


  1. Did the model increase the delivery of effective children’s health care?


  1. Did the model improve the patient-/family-centeredness of children’s health care?


  1. Did the model have an impact on efficiency (e.g., decrease inappropriate health services, decrease duplication of services)?


  1. Did the program model result in cost savings, and if so, who received the benefit?


  1. Was it sufficient to offset the cost of implementing the model?


  1. What elements of the model were responsible for the cost savings?


  1. Did the model reduce health care disparities?


  1. Did the model increase evidence-based decision making by consumers, payers, providers, the State, or other stakeholders?


  1. What was the impact of the model on health care for children not enrolled in Medicaid or CHIP?


  1. What were the unanticipated impacts, if any, of the program?


  1. Which aspects of the model were largely responsible for its impact? Which aspects are essential to achieving the same results?


  1. How long must the program be in effect to begin demonstrating results?



