SUPPORTING STATEMENT



Part A









Evaluation of the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA) Quality Demonstration Grant Program







Version: February 8, 2012



Agency for Healthcare Research and Quality (AHRQ)

CONTENTS

A. Justification

1. Circumstances that Make the Collection of Information Necessary

2. Purpose and Use of Information

3. Use of Improved Information Technology

4. Efforts to Identify Duplication

5. Involvement of Small Entities

6. Consequences If Information Is Collected Less Frequently

7. Special Circumstances

8. Register Notice and Outside Consultations

9. Payments/Gifts to Respondents

10. Assurance of Confidentiality

11. Questions of a Sensitive Nature

12. Estimates of Annualized Burden Hours and Cost

13. Estimates of Annualized Respondent Capital and Maintenance Costs

14. Estimates of Annualized Cost to the Government

15. Changes in Hour Burden

16. Time Schedule, Publication and Analysis Plans

17. Exemption for Display of Expiration Date

List of Attachments


A. Justification

1. Circumstances that Make the Collection of Information Necessary

The mission of the Agency for Healthcare Research and Quality (AHRQ), set out in its authorizing legislation, the Healthcare Research and Quality Act of 1999 (see http://www.ahrq.gov/hrqa99.pdf), is to enhance the quality, appropriateness, and effectiveness of health services, and access to such services, through the establishment of a broad base of scientific research and through the promotion of improvements in clinical and health systems practices, including the prevention of diseases and other health conditions. AHRQ shall promote health care quality improvement by conducting and supporting:

  1. research that develops and presents scientific evidence regarding all aspects of health care;

  2. the synthesis and dissemination of available scientific evidence for use by patients, consumers, practitioners, providers, purchasers, policy makers, and educators; and

  3. initiatives to advance private and public efforts to improve health care quality.

Also, AHRQ shall conduct and support research and evaluations, and support demonstration projects, with respect to (A) the delivery of health care in inner-city areas, and in rural areas (including frontier areas); and (B) health care for priority populations, which shall include (1) low-income groups, (2) minority groups, (3) women, (4) children, (5) the elderly, and (6) individuals with special health care needs, including individuals with disabilities and individuals who need chronic care or end-of-life health care.

Section 401(a) of the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA), Pub. L. 111-3, amended the Social Security Act (the Act) to enact section 1139A (42 U.S.C. 1320b-9a). AHRQ is requesting approval from the Office of Management and Budget (OMB) for the collection of qualitative data through in-depth interviews to support a comprehensive, mixed-methods evaluation of the quality demonstration grants authorized under section 1139A(d) of the Act (Attachment A). Evaluating whether, and through what mechanism, projects funded by the CHIPRA demonstration grants improve the quality of care received by children in Medicaid and CHIP aligns with AHRQ’s mission of improving the quality and effectiveness of health care in the United States.

CHIPRA included funding for five-year grants so that states could experiment with and evaluate several promising ideas related to improving the quality of children’s health care in Medicaid and CHIP.1 In February 2010, the U.S. Department of Health and Human Services announced the award of 10 demonstration grants to states that convincingly articulated an achievable vision of what they could accomplish by the end of the five-year grant period, described the strategies they would use, and explained how those strategies would achieve the objectives. The Centers for Medicare & Medicaid Services (CMS) encouraged applicants to address multiple grant categories (described below) and to partner with other states in designing and implementing their projects.

Of the 10 grantee states selected, six are partnering with other states, for a total of 18 demonstration states. The demonstration states are: Colorado (partnering with New Mexico); Florida (with Illinois); Maine (with Vermont); Maryland (with Wyoming and Georgia); Massachusetts; North Carolina; Oregon (with Alaska and West Virginia); Pennsylvania; South Carolina; and Utah (with Idaho).

These demonstration states are implementing 48 distinct projects in at least one of five possible grant categories, A to E. Category A grantees are experimenting with and/or evaluating the use of new pediatric quality measures. Category B grantees are promoting health information technology (HIT) for improved care delivery and patient outcomes. Category C grantees are expanding person-centered medical homes or other provider-based models of service delivery. Category D grantees will evaluate the impact of a model pediatric electronic health record. Category E grantees are testing other state-designed approaches to quality improvement in Medicaid and CHIP.

AHRQ’s goal in supporting an evaluation of the CHIPRA Quality Demonstration Grant Program is to provide insight into how best to implement quality improvement programs and to provide information on how successful programs can be replicated to improve children’s health care quality in Medicaid and CHIP.2 To meet these goals, the evaluation has the following requirements:

  1. to identify CHIPRA state activities that measurably improve the nation’s health care, especially as it pertains to children.

  2. to develop a deep, systematic understanding of how CHIPRA demonstration states carried out their grant-funded projects.

  3. to understand why the CHIPRA demonstration states pursued certain strategies.

  4. to understand whether and how the CHIPRA demonstration states’ efforts affected outcomes related to knowledge and behavior change in targeted providers and/or consumers of health care.

To meet AHRQ’s goals and carry out the requirements, the agency’s evaluation contractor, Mathematica Policy Research, with its subcontractors The Urban Institute and AcademyHealth, has designed a comprehensive evaluation that will make the best possible use of quantitative and qualitative research methods, depending on the evaluation requirement.

Specifically, the quantitative component of the evaluation is designed to measure any improvements in children’s health care quality (requirement 1) and outcomes related to knowledge and behavior change in targeted providers (requirement 4) that grant-funded projects intend to affect. To do this, the evaluation will acquire data from states’ Medicaid and CHIP administrative and claims files and will conduct a survey of physicians in demonstration states. (OMB approval to conduct a physician survey will be included in a subsequent information collection request.) Administrative and survey data will be analyzed with descriptive and inferential techniques appropriate to answering questions about outcomes and impacts.

In contrast, the qualitative component of the evaluation is designed to develop a rich understanding of states’ implementation activities (requirement 2), document the rationale for the selection of particular strategies (requirement 3), and support judicious interpretations about project implementation and how projects may or may not contribute to observed outcomes (if any). The qualitative component of the evaluation has three main parts: (1) in-depth interviews with the people most closely involved in the design, management, or day-to-day implementation of CHIPRA-funded projects; (2) focus groups with the parents, guardians, or caregivers of the children meant to benefit from the projects; and (3) the review of participating grantees’ funding applications, operating plans, and progress reports to CMS. This information collection request seeks approval to conduct in-depth interviews with several respondent types in 2012 and to conduct follow-up interviews with one respondent type in 2013. (OMB approval to conduct focus groups and to conduct a second wave of in-depth interviews will be included in a subsequent information collection request.)

No research method other than in-depth, semi-structured interviews will meet AHRQ’s need for detailed information about project implementation and states’ selection of implementation strategies. In addition to meeting this need for rich descriptive information, the interview data gathered through this collection will be analyzed in conjunction with quantitative data about project outcomes to foster understanding of the strategies and resources that seemed to contribute most (or least) to those outcomes. In brief, the interview data collected under this request will directly support an analysis of (1) the implementation of specific strategies related to quality measurement and reporting, HIT, provider-based models, and pediatric electronic health records; (2) whether and how implementation seemed to affect children’s care quality in Medicaid and CHIP; (3) the likelihood that quality improvements will be sustainable when grant funding ends; and (4) the potential for other states to replicate the achievements of the demonstration states.

Interviews in the Demonstration States

Under the proposed information collection, researchers will visit each of the 18 demonstration states in 2012 to conduct in-person interviews with four types of respondents: (1) key project staff directly involved in the implementation of the demonstration projects, (2) other personnel involved in implementation of the demonstration projects, (3) external stakeholders with an interest in children’s care quality, and (4) health care providers who treat children enrolled in Medicaid or CHIP. About one year after the visits, researchers will conduct follow-up telephone interviews with key project staff to monitor the progress of project implementation. The respondent types, anticipated number of respondents per state, and purpose of the interviews are as follows:

  1. Key Project Staff (up to 4 per state) – semi-structured, in-person interviews in 2012, and semi-structured, telephone interviews in 2013, with staff directly involved in the design and oversight of grant-funded activities. Key staff members are the project director, project manager, and principal investigator and/or medical director. The purpose of these interviews is to gain insight into the implementation of demonstration projects, to understand contextual factors, and to identify lessons and implications for the broad application and sustainability of projects (see the interview protocol in Attachment B).

  2. Other Implementation Personnel (up to 16 per state) – semi-structured, in-person interviews with staff involved in the day-to-day implementation of grant-funded projects. These staff members include state agency employees, provider trainers or coaches, health IT vendors, and/or project consultants. The purpose of these interviews is to gain insight into the opportunities and challenges related to key technical aspects of project implementation (see the interview protocol in Attachment C).

  3. External Stakeholders (up to 8 per state) – semi-structured, in-person interviews with external stakeholders who have a direct interest in children’s care quality in Medicaid and CHIP. Stakeholders include representatives of managed care organizations, state chapters of the American Academy of Pediatrics, advocacy organizations for children and families, and social service agencies. These stakeholders will be familiar with the CHIPRA projects and may serve on advisory panels or workgroups related to one or more projects. The interviews will gather insight into the opportunities and challenges related to project implementation, stakeholders’ satisfaction with their project involvement, and contextual factors. Attachment D presents two protocols, one for stakeholders representing Medicaid managed care organizations (MCOs) and another for all other stakeholders. A separate protocol was developed for the former group to allow the interviewers to gather important information about the MCO’s market share and other distinguishing attributes.

  4. Health Care Providers (up to 12 per state) – semi-structured, in-person interviews with health care providers who are, or are not, participating in demonstration grant activities (participating and comparison providers, respectively). Depending on the projects a state is implementing, providers can include clinicians from private practices, public clinics, federally qualified health centers, care management entities, or school-based health centers. Interviews with participating providers will take place in all 18 demonstration states and will capture information about project-related activities, providers’ perceptions of the likelihood of achieving intended outcomes, and providers’ involvement in other quality-improvement initiatives (see the interview protocol in Attachment E). Interviews with comparison providers will take place only in states using a comparison-group design to evaluate their projects. Interview topics include the provider’s experiences providing care to children in Medicaid and CHIP, coordinating with other providers, use of HIT, and provision of patient-centered care (this interview protocol is also in Attachment E).

Number of Respondents per Type. The total number of interviews needed to yield a comprehensive, multi-faceted understanding of project implementation will vary considerably, from 20 to 40 per state, depending on the number, scope, complexity, and nature of the projects in a given state. States listed in their grant applications to CMS the specific individuals they planned to involve in project design and implementation. AHRQ used these lists to determine the types of respondents to include in the proposed information collection and to estimate the maximum number of respondents per type (indicated in parentheses in the paragraphs above). The lists make clear, for example, that the number of key project staff ranges only from two to four per state, and the number of external stakeholders serving as project advisors ranges from about six to eight per state. More variable are the number of “other implementation personnel” involved in projects and the number of health care providers states are recruiting to participate in their grant-funded projects. Judging again from the state applications, we believe it will suffice to allocate evaluation resources for up to 16 interviews with other implementation personnel and up to 12 interviews with health care providers per state.

Anticipated Nonresponse. Nonresponse to requests for interviews is expected to be minimal. First, CMS stipulated in its invitation to apply for grants that “grantees, their partners and subcontractors, are required to participate in the national implementation, systems, and impact-based evaluations.” Second, past experience suggests that the types of respondents included in this information collection are generally very willing to participate in interviews about their work and professional interests. To preserve this goodwill, AHRQ’s evaluation contractor will schedule appointments well in advance, at a time and place convenient to the respondent, and will use only the time allocated for the interview. Judging from past experience, individual health care providers are the only respondent type that may have less availability to participate in interviews, especially during regular business hours. To maximize their participation, the evaluation contractor will schedule as many in-person interviews with providers as possible, but will also accommodate requests for telephone interviews as a convenience to busy providers.

Efficiency Considerations. To gain a well-rounded, comprehensive, and fair understanding of project implementation, this information collection deliberately raises the same discussion topics with multiple respondents, both within and across respondent types. This technique, sometimes referred to as source triangulation, is a mainstay of case study methodology. The technique allows researchers to examine different stakeholders’ perspectives and the extent to which they agree or disagree on key issues (e.g., barriers and facilitators). For example, one might expect some differences in implementation challenges and potential solutions among state policymakers, health plans, and providers.3 Were the technique not used, the resulting data could well be biased (favoring one perspective over others) and incomplete (not capturing “all sides” of a story).

AHRQ is mindful, however, that the need for complete information reflecting many perspectives must be balanced against respondent burden. Toward this end, the agency pretested several of the protocols developed for this information collection in August 2011. Pretests with key project staff (1) affirmed the need to strategically match respondents with interview modules to make the best use of respondents’ time and ensure data are collected to answer AHRQ’s research questions, and (2) provided insights into how to do this efficiently. For example, the pretests revealed that some respondents will have an overarching perspective on all or most of the demonstration activities in a given state, while others will have more specialized knowledge of activities in a particular grant category. Based on the pretests and the opinions of pretest respondents, this observation applies especially to key project staff, other implementation personnel, and some external stakeholders.

In preparing for site visits, researchers will consult with key project staff to determine in advance whether specific respondents are “generalists” (i.e., have a good perspective on the state demonstration overall and most major categories) or “specialists” (i.e., primarily familiar with one aspect or category). During interviews, interviewers will then use cross-category modules with generalists and category-specific modules with specialists, plus introductory and wrap-up modules for all. The interview protocols have been modified to cue interviewers to this strategy.
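
To illustrate the strategy, the sketch below shows how cross-category and category-specific modules might be assembled for a given respondent. The sketch is written in Python purely for exposition; the module names, respondent-scope labels, and the assemble_protocol helper are hypothetical illustrations, not features of the actual protocols in Attachments B through F.

    # Minimal sketch of the generalist/specialist module-assignment strategy.
    # All names here are hypothetical; actual protocol content is in
    # Attachments B through F.

    GRANT_CATEGORIES = {"A", "B", "C", "D", "E"}

    def assemble_protocol(state_categories, respondent_scope):
        """Return the ordered interview modules for one respondent.

        state_categories: grant categories the state is pursuing, e.g. ["B", "C"]
        respondent_scope: "generalist" or "specialist:<category>"
        """
        assert all(c in GRANT_CATEGORIES for c in state_categories)
        modules = ["introduction"]            # every interview opens the same way
        if respondent_scope == "generalist":
            # Generalists discuss the demonstration overall plus each category
            # the state is actually pursuing.
            modules.append("cross_category")
            modules += [f"category_{c}" for c in state_categories]
        else:
            # Specialists discuss only their area of expertise.
            category = respondent_scope.split(":", 1)[1]
            assert category in state_categories
            modules.append(f"category_{category}")
        modules.append("wrap_up")             # every interview closes the same way
        return modules

    # A generalist and a specialist in a state with Category B and C projects:
    print(assemble_protocol(["B", "C"], "generalist"))
    # ['introduction', 'cross_category', 'category_B', 'category_C', 'wrap_up']
    print(assemble_protocol(["B", "C"], "specialist:C"))
    # ['introduction', 'category_C', 'wrap_up']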

Interviews in Non-Demonstration States

To supplement information collected from the demonstration states, AHRQ proposes to conduct a relatively small number of interviews in non-demonstration states. AHRQ will work with its evaluation contractor to identify eight or nine non-demonstration states that are pursuing quality-related initiatives in Medicaid and/or CHIP, especially as they relate to quality measurement, adoption and use of HIT, and provider-based delivery models. In each of the selected states, AHRQ and its contractor will identify up to five Medicaid or CHIP staff members who are most knowledgeable about the state’s quality-related goals and strategies. These individuals (up to a total of 45) will receive a request for a semi-structured telephone interview of 45 to 60 minutes. The purpose of these interviews is to enrich AHRQ’s understanding of how the CHIPRA quality grants contribute to improved care quality above and beyond other quality-related initiatives happening at the same time (see the interview protocol in Attachment F). Examples of other quality-related initiatives include those funded by the HITECH Act, the Pediatric Quality Measures Program, and various medical home initiatives.

The request-for-interview email that will be sent to all prospective interview respondents is included in Attachment G.

2. Purpose and Use of Information

The information collected through the semi-structured interviews described in Section 1 will be a key source of evidence for the cross-state evaluation of the demonstration. Collecting high-quality, timely interview data from a wide range of knowledgeable respondents directly serves AHRQ’s goal of understanding project implementation and the selection and execution of strategies, and of identifying the particular activities and resources that contributed most to any observed improvement in children’s care quality. The products that will result from this project include practice profiles, replication guides, case studies, and peer-reviewed journal articles.

As indicated in the preceding section, the overall goal in conducting a mixed-methods evaluation of the CHIPRA Quality Demonstration Grant Program is to describe and analyze the contribution of demonstration activities to improving the quality of children’s health care services. In some cases, the evaluation will achieve this goal by conducting rigorous impact analyses to determine whether particular interventions improved care quality and child health outcomes. Such impact analyses will be possible in some states implementing patient-centered medical home projects (Category C). In such cases the evaluation will use claims and administrative data and statistical techniques to compare the outcomes of children receiving care from participating and comparison-group practices. In other cases, it may be possible to measure changes in outcomes (for example, the number of core quality measures reported by a state or the level of care coordination achieved by a practice using electronic medical records for that purpose), but it will not be possible to definitively attribute those changes to grant-funded activities; even so, a thorough understanding of implementation and implementation fidelity will help rule out some erroneous conclusions. Limitations will be fully described in all publications and products resulting from the evaluation. The design plan guiding this evaluation, written by researchers with quantitative and qualitative expertise and vetted by AHRQ, CMS, and a technical expert panel, ensures that research questions will be answered with the most suitable data and methods while clearly acknowledging the limitations of those data and methods.4

3. Use of Improved Information Technology

Digital audio recording of all interviews (with respondents’ permission) will be the primary electronic method for ensuring the completeness and quality of interview data. Recording also enhances efficiency and reduces respondent burden by allowing researchers to review and edit their written or typed notes without calling respondents for clarification or to check quotes.

Obtaining high-quality data through semi-structured interviews requires a flexible exchange and a conversational rapport between interviewer and respondent. While information technology can greatly enhance the smooth administration of large-scale surveys with complex skip patterns, in qualitative interviewing it is often best to avoid complex skip patterns in the first place. For this data collection, we will minimize the skip patterns an interviewer must navigate during interviews by customizing the protocols in advance. The protocols accompanying this package all consist of multiple modules, including modules for each of the five grant categories for which states may receive funding. Because we will know before we visit a state which grant categories that state is pursuing, we will pare down protocols so they include only the relevant category-specific modules. In addition, the site visit teams will be led by trained, experienced interviewers. The interviewers will be thoroughly familiar with protocol content so they can readily move back and forth within the protocol without disrupting the conversational flow or asking questions the respondent has already answered.

After information collection, researchers will use qualitative data analysis software, NVivo, which enables systematic coding and retrieval of textual data according to a specified coding scheme.
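
The code-and-retrieve workflow that such software supports can be pictured with a brief sketch. Python is used here purely for exposition; the codes, excerpts, and data structure are hypothetical illustrations and do not represent NVivo’s actual interface.

    # Illustration of systematic coding and retrieval of interview text.
    # All codes and excerpts are hypothetical.
    from collections import defaultdict

    # Each coded segment pairs an excerpt with its source and an analytic code.
    coded_segments = [
        ("State 1", "key_staff", "barrier/HIT_vendor_delay",
         "The vendor pushed the interface work back six months."),
        ("State 2", "provider", "facilitator/practice_coaching",
         "The coach helped us map our patient-flow process."),
        ("State 1", "stakeholder", "barrier/HIT_vendor_delay",
         "Our plans were ready before the data exchange was."),
    ]

    # Build a retrieval index from code to (state, respondent type, excerpt).
    index = defaultdict(list)
    for state, respondent_type, code, excerpt in coded_segments:
        index[code].append((state, respondent_type, excerpt))

    # Retrieving everything filed under one code, across states and respondent
    # types, supports the source triangulation described in Section 1.
    for state, respondent_type, excerpt in index["barrier/HIT_vendor_delay"]:
        print(f"{state} ({respondent_type}): {excerpt}")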

4. Efforts to Identify Duplication

The evaluation of the CHIPRA quality demonstration grants will not duplicate any prior evaluation efforts. No other data collection effort currently exists to collect and analyze data across all of the demonstration grant states and across all grant categories.

CMS, however, does allow grantees to engage contractors to conduct independent evaluations of the grant-funded projects in their states. Eight grantees (Colorado, Florida, Maine, Maryland, Massachusetts, Oregon, South Carolina, and Utah) have allocated funds for independent, state-level evaluations. AHRQ’s contractor is working closely with these state-based evaluators to coordinate data collection activities, avoid duplication, and ensure that the combined cross-state and state evaluations are more comprehensive than either would be alone. For example, agreements being reached between AHRQ’s contractor and each state-based evaluator will establish that AHRQ’s contractor will conduct interviews with all key project staff, other implementation personnel, and providers. There may be shared responsibility for interviews with external stakeholders (for example, such interviews would be conducted by either AHRQ’s contractor or the state-based team, not by both).

Semi-structured interviews will be used only to collect evaluation information that cannot be obtained from other sources. Where possible, AHRQ will use existing administrative and secondary data sources, such as states’ written progress reports to CMS; Medicaid and CHIP enrollment, claims, and encounter data; and all-payer databases, to address its research questions. For states with independent evaluation teams, the sharing of data by state-based evaluators with AHRQ’s evaluation contractor will reduce duplication of efforts to access and prepare data sets.

5. Involvement of Small Entities

The evaluation may collect data from physicians in small private practices. Every effort will be made to schedule interviews at the convenience of these respondents. In addition, the interviews with these respondents will be short relative to interviews with other respondent types, in part to accommodate small entities. Interview staff will ensure that each interview lasts no more than 45 minutes. Furthermore, to gain a broad picture of participating physicians’ perspectives, the respondents will be distributed across multiple practices. Thus, the burden on whole entities will be small. The information being requested will be held to the minimum required for the intended use.

6. Consequences If Information Is Collected Less Frequently

If the interview data described in this document are not collected, AHRQ will not be able to evaluate the implementation and appropriateness of the CHIPRA Quality Demonstration Grant Program across all project categories in all 18 states receiving funding. Without these data, AHRQ will be unable to monitor implementation effectively, provide feedback to states, or make informed decisions about the funding of future initiatives in children’s care quality.

The in-person, in-depth interviews in the 18 demonstration states will begin in early 2012, once states are approximately 10 months into implementation. This time frame is most appropriate for learning about the early implementation experience. It gives AHRQ an opportunity to provide feedback to the states about their implementation processes and gives states a chance to make changes based on that feedback. Interviews in non-demonstration states also will be conducted in early 2012 to allow a valid cross-state comparison of progress toward quality-improvement goals.

Key project staff in each demonstration state will be interviewed a second time, by telephone, in 2013. Interviewing key staff yearly is necessary for keeping abreast of project implementation. Without this data collection, AHRQ would have an incomplete picture of implementation progress. Additionally, other states will not be able to build on the lessons learned from this demonstration program if implementation steps are not documented. No other respondent types will be re-interviewed in 2013.

7. Special Circumstances

This request fully complies with the general information collection guidelines of 5 CFR 1320.5(d)(2). No special circumstances apply.

8. Register Notice and Outside Consultations

a. Federal Register Notice

As required by 5 CFR 1320.8(d), a notice was published in the Federal Register on (date and page number of 60-day notice) for 60 days (see Attachment H). No comments have been received to date. Public comments received will be summarized here after the 60-day review period.

b. Outside Consultations

AHRQ’s contractors continually consult individuals outside the agency about the research and data collection activities for this evaluation. These individuals include the CMS personnel who oversee and monitor grant planning and implementation in the demonstration states: Barbara Dailey and Karen Llanos (CMS/CMCS), and the late David Greenberg (CMS/CMSO). AHRQ’s evaluation contractor also consults a 14-member technical expert panel on design, measurement, and analytical challenges. (The panel strongly recommended, for example, that the evaluation of the CHIPRA quality demonstration include data collection activities in non-demonstration states.) The panel meets annually, but members have agreed to also be available for individual consultation. Attachment I lists the members of the technical expert panel and their professional affiliations. There are no unresolved issues stemming from these consultations.

9. Payments/Gifts to Respondents

No payments or gifts will be provided to respondents.

10. Assurance of Confidentiality

Individuals and organizations will be assured of the confidentiality of their replies under Section 934(c) of the Public Health Service Act, 42 USC 299c-3(c). They will be told the purposes for which the information is collected and that, in accordance with this statute, any identifiable information about them will not be used or disclosed for any other purpose.

Respondents will be given this assurance during recruitment (in an advance e-mail, presented, as noted, in Attachment G) and again immediately before their interview. They will further receive assurance that the information being gathered is for research purposes only. Respondents will also be asked if they give permission to have the conversation audio-recorded solely for the purpose of filling in any gaps in the research notes.

Respondents’ names, professional affiliations, and titles will be collected. Social Security numbers, home contact information, and similar information that can directly identify the respondent will not be collected.

Safeguarding Data. The contractor has established data security plans for the handling of all interview notes, coded interview data, and data processing for the interviews that it conducts. Its plans meet the requirements of U.S. federal government agencies and are continually reviewed for compliance with new government requirements and data collection needs. Such security is based on (1) exacting company policy promulgated by the highest corporate officers in consultation with systems staff and outside consultants, (2) a secure systems infrastructure that is continually monitored and evaluated with respect to security risks, and (3) secure work practices of an informed staff that take all necessary precautions when dealing with confidential data.

During site visits, evaluation researchers will at all times keep notebooks and laptop computers on their persons or in secure, locked locations.

All contractor staff members sign a pledge of confidentiality. A copy of this text is in Attachment J. Confidential data are kept in study-specific folders that only a minimum number of staff members may access. All typed or electronically coded qualitative data are periodically backed up and preserved on secure media.

11. Questions of a Sensitive Nature

AHRQ is not collecting information of a sensitive nature from any respondent. Questions will elicit information and perspectives about how the CHIPRA demonstration grants are being implemented in the respondent’s state.

12. Estimates of Annualized Burden Hours and Cost

Exhibit 1 shows the estimated annualized burden hours for the respondents’ time to participate in this evaluation. Key Staff Interviews will be conducted twice with 4 persons from each of the 18 CHIPRA demonstration states and will last about 1.5 hours each. Implementation Staff Interviews will include 16 persons from each of the 18 demonstration states and take an hour to complete. Stakeholder Interviews will include 8 persons from each of the 18 demonstration states and also take an hour to complete. Health Care Provider Interviews will be conducted with 12 persons from each of the 18 demonstration states and will last 45 minutes. Non-demonstration States Interviews will be conducted with 5 persons in each of 9 non-demonstration states and will take about 1 hour to complete. The total burden for this evaluation is estimated to be 855 hours.

Exhibit 2 shows the estimated annualized cost burden associated with the respondent’s time to participate in this evaluation. The total cost burden is estimated to be $32,914.
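
The totals in Exhibits 1 and 2 below follow from simple multiplication: each row’s burden equals respondents per state × number of states × responses per respondent × hours per response, and each row’s cost equals that burden × the average hourly wage. The sketch below reproduces the figures; the values are copied from the exhibits, and Python is used purely for exposition.

    # Reproduces the totals in Exhibits 1 and 2.
    # Columns: respondents per state, states, responses per respondent,
    # hours per response, average hourly wage.
    rows = {
        "Key Staff Interviews":                (4, 18, 2, 1.5, 36.35),
        "Implementation Staff Interviews":     (16, 18, 1, 1.0, 34.67),
        "Stakeholder Interviews":              (8, 18, 1, 1.0, 18.68),
        "Health Care Provider Interviews":     (12, 18, 1, 45 / 60, 62.50),
        "Non-demonstration States Interviews": (5, 9, 1, 1.0, 50.26),
    }

    total_hours, total_cost = 0, 0
    for name, (n, states, responses, hours, wage) in rows.items():
        burden = n * states * responses * hours
        cost = round(burden * wage)     # each row rounded to whole dollars
        total_hours += burden
        total_cost += cost
        print(f"{name:38s} {burden:6.0f} hours   ${cost:,}")
    print(f"{'Total':38s} {total_hours:6.0f} hours   ${total_cost:,}")
    # Total: 855 hours, $32,914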

Throughout the information collection process, we will monitor the length of the interviews, comments received from participants and field interviewers, and the number of individuals who refuse to be interviewed. If this information indicates that the burden on participants is so great as to undermine the collection of high quality data, we will revise our procedures accordingly. For example, we may reduce the length of the semi-structured interviews. If we need to revise our procedures, we will work with OMB to implement specific changes.



Exhibit 1. Estimated Annualized Burden Hours

Data Collection                        Number of      Number     Responses per   Hours per   Total burden
                                       respondents*   of States  respondent      response    hours*
Key Staff Interviews                         4            18           2            1.5           216
Implementation Staff Interviews             16            18           1            1             288
Stakeholder Interviews                       8            18           1            1             144
Health Care Provider Interviews             12            18           1            45/60         162
Non-demonstration States Interviews          5             9           1            1              45
Total                                       45            na          na           na             855

* The number of respondents that will be interviewed in each state will vary depending on the number, scope, complexity, and nature of the projects implemented. This table reflects upper-bound estimates of total burden hours and the number of respondents per type per state.



Exhibit 2. Estimated Annualized Cost Burden

Data Collection                        Number of      Number     Total Burden   Average hourly   Total cost
                                       respondents    of States  Hours          wage*            burden
Key Staff Interviews                         4            18         216           $36.35          $7,852
Implementation Staff Interviews             16            18         288           $34.67          $9,985
Stakeholder Interviews                       8            18         144           $18.68          $2,690
Health Care Provider Interviews             12            18         162           $62.50         $10,125
Non-demonstration States Interviews          5             9          45           $50.26          $2,262
Total                                       45            na         855            na            $32,914

* Based on the mean of average wages reported in the National Compensation Survey: Occupational Wages in the United States, May 2009, U.S. Department of Labor, Bureau of Labor Statistics. Key project staff are state government workers who are general managers. Other implementation personnel are state workers who are managers of social and community services. External stakeholders are civilian workers in community and social services occupations. Participating providers are civilian pediatric physicians. Medicaid/CHIP personnel are federal employees in a medical and health services management role.

13. Estimates of Annualized Respondent Capital and Maintenance Costs

Capital and maintenance costs include the purchase of equipment, computers or computer software or services, or storage facilities for records, as a result of complying with this data collection. There are no additional costs to the respondents.

14. Estimates of Annualized Cost to the Government

Exhibit 3 shows the total and annualized cost for this evaluation. The total cost to the government of the entire evaluation contract is $8,258,311 (including a base period and four option periods); the annualized cost is $1,651,662 per year. These costs will be incurred from 2010 to 2015.

Exhibit 3. Estimated Total and Annual Cost

Cost Component                          Total Cost     Annual Cost
Administration                            $571,422        $114,284
Coordination                                38,003           7,601
Stakeholder Feedback                       201,637          40,327
Technical Expert Panel                     359,276          71,855
Evaluation Design & Implementation       3,981,390         796,278
Technical Assistance Plan                  934,440         186,888
Data Collection Instruments                138,997          27,799
OMB Clearance                               35,617          17,808
Section 508 Compliance                      13,883           2,777
Data and Analysis Reports                  735,426         147,085
Interim Evaluation Reports                 408,803          81,761
Dissemination                              736,149         184,037
Final Report                               103,269         103,269
Total                                   $8,258,311      $1,651,662

15. Changes in Hour Burden

This is a new data collection.

16. Time Schedule, Publication and Analysis Plans

AHRQ expects data collection to begin in March 2012, pending OMB clearance. AHRQ’s contractor will synthesize the interview data in the form of site visit briefings for AHRQ’s review by fall 2012. The contractor will then prepare communication materials for a range of audiences (state policymakers, state agency staff, Medicaid and CHIP providers, and academics), beginning in August 2013 and continuing until the contract ends in 2015. The effort to publish will include preparing and submitting manuscripts to one or more peer-reviewed publications; the first such manuscript will be prepared in 2014. Exhibit 4 presents the anticipated data collection and reporting schedule.

Exhibit 4. Schedule of Proposed Data Collection, Reports and Publication

Respondent interview types                   Start of data   Completion of     Report to AHRQ    Reports and Communications
                                             collection      data collection                     to Other Audiences

Key project staff                            March 2012      June 2012         August 31, 2012   August 2013 and later
                                             March 2013      May 2013          June 30, 2013

Other implementation personnel               March 2012      June 2012         August 31, 2012   August 2013 and later

External stakeholders                        March 2012      June 2012         August 31, 2012   August 2013 and later

Providers                                    March 2012      June 2012         August 31, 2012   August 2013 and later

Medicaid/CHIP personnel in                   April 2012      July 2012         August 31, 2012   August 2013 and later
non-demonstration states

Interview data described in this clearance package will be analyzed to address the research goals described in Section 1.

  • In 2012, the interview data from all respondent types will be used to assess the use, availability, and perceived importance (that is, the importance as perceived by the respondents) of resources in the preceding year; the selection and implementation of early strategies; resulting outputs; and perceptions of short-term outcomes.

  • In 2013, the telephone interviews with key project staff will be used to assess the progress in, and impediments to, project implementation; contextual developments in the states; and the use, availability, and perceived importance of resources in the preceding year.

  • In subsequent years, the interview data will be assessed alongside other data sources to inform the interpretation of how and why quality-related outcomes were or were not achieved as intended.

The analysis of the use, availability, and perceived importance of resources will rely on interview data collected from all respondent types. Key staff, other implementation staff, and external stakeholders will be probed to characterize the perceived contribution of state-level resources, including state government support, stakeholder involvement, and budget outlook, to the project. Providers, in turn, will be asked to characterize the contribution of provider-level resources, including HIT infrastructure, prior experience working on quality demonstrations, and compensation for participation in the CHIPRA demonstration, to their success in the demonstration.

To assess the strategies implemented and the resulting outputs, all respondents in CHIPRA demonstration states will be asked to characterize their major activities in the past year, barriers and facilitators to implementing those strategies, and the progress made toward project goals and milestones.

The analysis of interview data on resources, strategies, and outputs will be supplemented by a review of the semi-annual progress reports submitted by each grantee and by analysis of any quantitative data (e.g., number of providers recruited to participate in the demonstration, amount of compensation provided to participating providers, number of core measures collected) that grantees wish to provide to help describe their activities.

As noted above, notes from all interviews and document reviews will be typed, uploaded to NVivo, and coded according to a specified scheme. Analysis of the interviews will emphasize fidelity to implementation plans and progress in implementation. The analysis will include identification of themes within and across states by grant category. Throughout the process of gathering, reviewing, and analyzing qualitative data, quotations will be noted that capture a point of view or an experience particularly well. For each project in each of five grant categories, findings from the implementation analysis will be used to interpret findings about outcomes and to help establish a basis for causal inference. In brief, the interview data collected under this clearance package, when combined with evaluation data from other sources, will directly support an analysis of (1) the implementation of specific strategies related to quality measurement and reporting, HIT, provider-based models, and pediatric electronic health records; (2) whether and how implementation seemed to affect children’s care quality in Medicaid and CHIP; (3) the likelihood that quality improvements will be sustainable when grant funding ends; and (4) the potential for other states to replicate the achievements of the demonstration states.
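
As a simple illustration of how coded interview data might be tabulated to identify themes within and across states by grant category, consider the sketch below. The states, categories, and theme labels shown are hypothetical, and Python is used purely for exposition.

    # Sketch of cross-state theme identification by grant category.
    # States, categories, and theme labels are hypothetical.
    from collections import defaultdict

    # (state, grant category, theme) triples produced by the coding pass
    coded_themes = [
        ("State 1", "C", "care_coordination_staffing"),
        ("State 2", "C", "care_coordination_staffing"),
        ("State 3", "B", "EHR_vendor_readiness"),
        ("State 4", "B", "EHR_vendor_readiness"),
        ("State 1", "B", "provider_training_demand"),
    ]

    # For each (category, theme) pair, collect the set of states raising it;
    # themes raised in several states are candidates for cross-state findings.
    states_by_theme = defaultdict(set)
    for state, category, theme in coded_themes:
        states_by_theme[(category, theme)].add(state)

    for (category, theme), states in sorted(states_by_theme.items()):
        print(f"Category {category}: {theme} (raised in {len(states)} state(s))")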

17. Exemption for Display of Expiration Date

AHRQ does not seek this exemption.



List of Attachments:

Attachment A – Children’s Health Insurance Program Reauthorization Act of 2009
Attachment B – Key Staff Interview Guide
Attachment C – Implementation Staff Interview Guide
Attachment D – Stakeholder Interview Guides
Attachment E – Health Care Provider Interview Guides
Attachment F – Non-demonstration States Interview Guide
Attachment G – Advance & Confirmation Letters
Attachment H – Federal Register Notice
Attachment I – TEP Members
Attachment J – Confidentiality Pledge

1 Department of Health and Human Services, Centers for Medicare & Medicaid Services. Medicaid and Children’s Health Insurance Programs: Children’s Health Insurance Program Reauthorization Act of 2009: Section 401(d). Invitation to Apply for FY 2010 CHIPRA Quality Demonstration Grants. September 30, 2009, CFDA 93.767.

2 Ibid.

3 Patton, Michael Q. Qualitative Evaluation and Research Methods (2nd Edition). Thousand Oaks, CA: Sage Publications, 2001.



4 Ireys, Henry, et al. Design Plan for the National Evaluation. Washington, DC: Mathematica Policy Research, 2011.
