ATTACHMENT A
CHIPRA 10-STATE EVALUATION: EVALUATION DESIGN REPORT


CHIPRA 10-State Evaluation: Evaluation Design Report
Final Report
April 21, 2011

Mathematica Policy Research
Mary Harrington
Christopher Trenholm
Kimberly Smith
Margo Rosenbach
Julie Ingels
Danna Basson
Eric Grau
Sheila Hoag
Patricia Higgins
Victoria Peebles

Urban Institute
Genevieve Kenney
Ian Hill
Sarah Benatar
Lisa Clemans-Cope
Jennifer Haley
Adela Luque
Timothy Waidmann

Mathematica Reference Number:
06873.501
Submitted to:
U.S. Department of Health and Human Services,
Office of the Assistant Secretary for Planning
and Evaluation
200 Independence Ave., SW
Washington, DC 20201
Project Officer: Elizabeth Pham
Submitted by:
Mathematica Policy Research
555 S. Forest Ave.
Suite 3
Ann Arbor, MI 48104-2583
Telephone: (734) 794-1120
Facsimile: (734) 794-0241
Project Director: Mary Harrington


CONTENTS

I. BACKGROUND AND PURPOSE OF THE REPORT

II. OVERVIEW OF THE DESIGN
    A. Conceptual Framework
    B. Analytic Approach: Key Questions and Methods
    C. Addressing Key Challenges

III. STATE SELECTION AND PARTICIPATION
    A. State Selection
       1. Ten States Selected to Participate
       2. Replacement Criteria for Any States Unable to Participate
    B. Securing State Participation
    C. Data Acquisition

IV. OMB CLEARANCE PROCESS

V. CASE STUDIES
    A. Document Review
    B. Site Visits
    C. Focus Groups
    D. Analysis
    E. Challenges and Limitations

VI. SURVEY OF STATE PROGRAM ADMINISTRATORS
    A. Instrument Content and Development
    B. Data Collection Approach
    C. Analysis
    D. Challenges and Limitations

VII. SURVEY OF ENROLLEES AND DISENROLLEES
    A. Sample Design
    B. Instrument Content and Design
       1. Substantive Content of the Instrument
       2. Instrument Development Process
       3. Instrument Design Challenges
       4. Translation and Interpretation
       5. Pretesting
       6. CATI Programming
    C. Data Collection Approach
       1. Sample Release Schedule
       2. Optimizing Contact Information, Locating Sample Members, and Scheduling Interviews
       3. Survey Respondents
       4. Conducting the Interviews (Unclustered and Clustered Samples)
       5. Minimizing Nonresponse
       6. Staffing, Training, and Monitoring for QA
       7. Tracking the Data Collection Effort
       8. Challenges
    D. Descriptive Analyses of CHIP Enrollees and Disenrollees
       1. Research Questions
       2. Data and Measures
       3. Analytic Approach
    E. Analysis of CHIP’s Impact on Children’s Access, Use, and Other Outcomes
       1. Research Questions
       2. Data and Measures
       3. Analytic Approach
    F. Analysis of Relationship Between CHIP, Medicaid, and Private Coverage
       1. Research Questions
       2. Data, Measures, Analytic Approach
    G. Analysis of Retention and Reenrollment
       1. Research Questions
       2. Data and Construction of Analytic File
       3. Analytic Approach

VIII. STATE PROGRAM DATA
    A. Analysis of CHIP Annual Reports and Other Secondary Data
       1. CARTS
       2. SEDS
       3. Accessing the Data
       4. Analyzing the Data
    B. Analysis of CHIP and Medicaid Enrollment and Eligibility Data
       1. Research Questions
       2. Data
       3. Focal Measures
       4. Analytic Approach

IX. NSCH/SLAITS
    A. Design/Content of the NSCH and Uninsured Component
    B. Analysis
    C. Challenges and Potential Limitations

X. REPORTING
    A. Reports to Congress
    B. Case Study Reports
    C. Other Reports
    D. Standalone Executive Summary

REFERENCES

APPENDIX A: ENABLING LEGISLATION FOR THE ORIGINAL AND CURRENT EVALUATION

APPENDIX B: STATE SELECTION MEMO

TABLES

II.1    Key Evaluation Questions and Data Sources
III.1   Criteria for Selecting States for the CHIPRA 10-State Evaluation
III.2   Primary Selection Characteristics of the Ten Selected States
III.3   State CHIP Income Limits and Percentage of Uninsured Children at Selected Income Ranges
III.4   Memorandum of Understanding Summary of Data Required
IV.1    Schedules for Two OMB Submissions
V.1     Outline of Core Site Visit Protocol, CHIPRA 10-State Evaluation
V.2     Outline of Core Focus Group Moderator’s Guide for Parents of CHIP Enrollees, CHIPRA 10-State Evaluation
VII.1   Confidence Interval (CI) Half Widths for Illustrative Outcomes Given Equal Allocation of the CHIP Sample Across States
VII.2   Confidence Interval (CI) Half Widths for Illustrative Outcomes Given a Compromise Allocation of the CHIP Sample Across States
VII.3   Minimum Detectable Differences (MDDs) for Illustrative Outcomes Given Equal Allocation of the CHIP Sample Across States
VII.4   Minimum Detectable Differences (MDDs) for Illustrative Outcomes Given a Compromise Allocation of the CHIP Sample Across States
VII.5   Schedule of First Pretest
VII.6   Illustrative Outcome Variables
VII.7   Illustrative Explanatory Variables
VII.8   Patterns of Insurance Coverage in 6 Months Prior to CHIP Enrollment
VII.9   Broad Categories for Why Coverage Ended Among Recent Enrollees with Prior ESI
VII.10  Potential Substitution for Established CHIP Enrollees
VII.11  Life Table of CHIP Enrollment Spells
VII.12  Duration of Enrollment Spells by Subgroup
VIII.1  Overview of Content of the CHIP Annual Report Template System (CARTS)
VIII.2  SEDS Enrollment Measures and Definitions
IX.1    Topics Covered in Uninsured Section of the 2011 National Survey of Children’s Health
X.1     Data Sources for the 2011 and 2013 CHIP Reports to Congress
X.2     Other Source-Specific Reports

FIGURES

II.1    Conceptual Framework for the Evaluation of CHIP
VII.1   Illustrative Recall Periods by Program Status at Interview for New Enrollees Sample
VII.2   Illustrative Recall Periods by Program Status at Interview for Established Enrollees Sample
VII.3   Illustrative Recall Periods by Program Status at Interview for Sample of Disenrollees

EXHIBITS

III.1   Illustrative Memorandum of Understanding Template

I. BACKGROUND AND PURPOSE OF THE REPORT

The Children’s Health Insurance Program (CHIP) was enacted in 1997 to help close coverage
gaps for low-income children whose families could not afford private coverage for them but had
incomes too high to qualify for Medicaid. Since that time, CHIP has grown to cover more than 5
million children—the largest expansion of public health insurance coverage for children since
Medicaid. CHIP is funded as a block grant to states, with federal matching rates higher than those
typically received under Medicaid. States have some control over the design of their CHIP programs,
including program type (Medicaid expansion, separate program, or a combination of the two);
eligibility thresholds; outreach strategies; and enrollment and retention policies. States also have
flexibility, within parameters set by the statute, to design CHIP benefit packages and cost-sharing
rules. Because of this flexibility, the characteristics of CHIP programs vary across states (Rosenbach
et al. 2007). By 2005, 29 states had adopted eligibility thresholds under CHIP of 200 percent of the
federal poverty level (FPL); 13 set thresholds below 200 percent of the FPL; and 8 expanded
eligibility to children in families with incomes above 200 percent of the FPL (First Focus 2009).
Furthermore, about two-thirds of states chose to implement their CHIP expansions through a
separate program, either alone or in combination with a Medicaid expansion, which introduced
variations in benefits and cost-sharing. The coverage offered by separate CHIP programs more
closely resembles the very broad coverage available under Medicaid than that typically available
under private health insurance, and most are operated under capitated managed care arrangements
(Kenney and Dorn 2009; Wooldridge et al. 2005).
Research evidence from CHIP’s early years indicates the program has made great progress in
several areas. With expansions in the program, new investments in outreach, and enrollment
simplifications introduced beginning in the late 1990s, uninsured rates declined among children,
both for those made newly eligible for public coverage under CHIP and those already eligible for
Medicaid (Hudson and Selden 2007; Davidoff et al. 2005; Kenney and Yee 2007; Kenney and Chang
2004; Dubay et al. 2007; Kenney et al. 2005; Rosenbach et al. 2007). The early research also indicates
improvements in access to care and increases in receipt of preventive care among the children who
gained public coverage (Rosenbach et al. 2007; Kenney and Chang 2004). At the same time,
however, millions of children remained uninsured despite being eligible for Medicaid or CHIP, and
many enrolled in public coverage did not receive recommended levels of care (DeNavas et al. 2009).
Moreover, uninsured rates among low-income children vary widely from state to state and across
subgroups (Lynch et al. 2010).
The Children’s Health Insurance Program Reauthorization Act. Uncertainty surrounding
ongoing funding ended in February 2009 when CHIP was reauthorized for an additional four and a
half years through the Children’s Health Insurance Program Reauthorization Act (CHIPRA)
(Georgetown Center for Children and Families 2009). CHIPRA provided states with new tools to
address shortfalls both in enrollment and in access to and quality of care. A number of provisions
were designed to expand eligibility for public coverage among children and increase takeup of public
coverage among uninsured children already eligible for Medicaid and CHIP (Georgetown Center for
Children and Families 2009).1 CHIPRA authorized new outreach and enrollment grants, as well as
bonus payments to states that both adopted five of eight enrollment/retention strategies and
exceeded target enrollment numbers. States also received new options to use Express Lane
Eligibility strategies to facilitate eligibility determination, enrollment, and retention, and to meet
citizenship documentation requirements. CHIPRA allowed states to use federal dollars to cover
legal immigrant children who had been in the United States less than five years (previously,
coverage for such children had to be financed exclusively with state funds), provided higher
federal matching rates for translation and interpreter services, and gave states additional federal
allotments to cover the costs of expanding eligibility and enrolling more eligible children. Other
provisions were designed to improve access to and quality of care for the children served by
Medicaid and CHIP (HHS 2010).

1 These provisions include (1) adopting 12-month continuous eligibility for all children, (2) eliminating the
asset test for children, (3) eliminating in-person interview requirements at application and renewal, (4) using joint
applications and supplemental forms and the same application and renewal verification process for the two
programs, (5) allowing for administrative or paperless verification at renewal through the use of prepopulated
forms or ex parte determinations, (6) exercising the option to use presumptive eligibility when evaluating
children’s eligibility for coverage, (7) exercising the new option in the law to use Express Lane Eligibility
procedures, and (8) exercising the new options in the law regarding premium assistance.
The CHIP Program Today. Since the enactment of CHIPRA in early 2009, a number of
states have introduced policy changes to their Medicaid and CHIP programs: 15 have expanded
eligibility to higher-income children; 17 have sought approval to introduce improvements in their
enrollment and retention processes; 4 states have received approval to take advantage of the new
Express Lane option for Medicaid (Alabama, Iowa, Louisiana, and New Jersey), and one state
(Alabama) has also received approval to do this for CHIP; and 19 states have begun using federal
funds to cover legal immigrant children and/or pregnant women who had been in the country less
than five years (HHS 2010; Families USA 2010). An initial $40 million in outreach grants was
awarded to 42 states and the District of Columbia, and an additional $10 million was awarded for
targeting Native American children.
In addition, as required under CHIPRA, a core set of quality measures has been developed by
the Agency for Healthcare Research and Quality (AHRQ), and Quality Demonstration Grants have
been awarded that include both single-state projects and multistate collaborations involving 18 states
overall; an evaluation of the Quality Demonstration Grants has been funded; and a contract to
develop a model electronic health record format for children in Medicaid and CHIP has been
awarded. Also, the Government Accountability Office (GAO) has initiated three mandated studies
on, respectively, Medicaid/CHIP dental services for children, parent and caretaker coverage, and
Medicaid/CHIP primary and specialty services for children; and the Institute of Medicine has
formed a committee to study Pediatric Health and Pediatric Health Care Quality.
CHIP covered 5 million children in June 2009 and 7.7 million children over the course of
federal fiscal year 2009 (Cohen Ross 2009). As of December 2009, all but 4 states (Alaska, Idaho,
North Dakota, and Oklahoma) had eligibility thresholds for children at or above 200 percent of the
FPL, with almost half (24 states) having thresholds at or above 250 percent of the FPL and 18 with
thresholds of 300 percent or higher (HHS 2010; Cohen Ross et al. 2009). Most states have chosen to
expand coverage of children through separate programs, either alone or in combination with smaller
Medicaid expansions, while 12 states have relied exclusively on Medicaid expansions for child
populations covered by CHIP. Still, most states have not taken full advantage of the flexibility in
CHIP to streamline eligibility policies and thus maximize their potential for enrollment. For
example, while most have dropped the asset test as part of the eligibility determination process and
no longer require an in-person interview at enrollment or renewal, fewer than half have 12-month
continuous eligibility for children, and just 14 states have presumptive eligibility (Cohen Ross 2009).
Most states charge premiums and/or copayments in their CHIP programs, but the amount charged
varies across income groups and states (Cohen Ross 2009).
CHIP in the Future. CHIP’s evolution is occurring within a rapidly changing health care
environment. The 2010 Patient Protection and Affordable Care Act (PPACA) introduces
comprehensive health reforms, including an expansion of Medicaid to adults and children up to 133
percent of the FPL; a maintenance of effort (MOE) requirement through 2019 on state Medicaid
and CHIP coverage for children; new subsidies for coverage for families with incomes up to 400
percent of the FPL; the creation of state health insurance exchanges and reforms to health insurance
markets; the development of streamlined enrollment systems; and the introduction of coverage
mandates for both individuals (including children) and employers. PPACA also provides two
additional years of federal funding for CHIP (extending it through 2015) and increases federal CHIP
matching rates by as much as 23 percentage points in 2015 and beyond. Starting in January 2014,
more parents below 133 percent of the FPL will become eligible for Medicaid, and children in that
income group who are enrolled in CHIP will be transitioned to Medicaid. The MOE requirements
under PPACA limit the ability of states to change eligibility and enrollment procedures for Medicaid
and CHIP but may lead to cuts in provider payment rates for the next few years. Also, despite the
MOE requirement on CHIP and Medicaid coverage for children through 2019, it is not clear how
long states will be able to continue their CHIP programs beyond 2015 unless additional federal
allotments are provided. With no additional federal funding for CHIP after 2015, many children
enrolled in separate CHIP programs will likely be shifted into health insurance exchanges or
employer-sponsored insurance (ESI) plans.
Mandate for the Evaluation. The 1997 CHIP legislation called for a congressionally mandated
evaluation, and Mathematica Policy Research (Mathematica) and its subcontractors, the Urban
Institute and Mayatech, Inc., conducted that evaluation on behalf of the Assistant Secretary for
Planning and Evaluation (ASPE), Department of Health and Human Services (HHS). The CHIPRA
legislation includes a mandate for an updated evaluation of CHIP patterned after the previous
evaluation. Congress stipulated that the evaluation include surveys of enrollees and disenrollees in 10
states and specified several criteria to be used in selecting these states.2 A report on the evaluation is
to be submitted to Congress by December 31, 2011. In September 2010, Mathematica and its
subcontractor, the Urban Institute, were awarded the contract to conduct this second congressionally
mandated evaluation of CHIP, which will be conducted over a three-year period.

2 Legislation calling for the 1997 and 2009 congressionally mandated evaluations is reproduced in Appendix A.
Goals of the Evaluation. Coming 5 years after completion of the first evaluation, the current
evaluation will provide new and detailed insights into how the program has evolved since its early
years, what impacts on children’s coverage and access to care have occurred, and what new issues
have arisen as a result of policy changes related to CHIPRA and PPACA. Building on prior
evaluations focused on the early years of CHIP, it will explore how states have grappled with
important implementation challenges as the program matured and their experiences in enrolling,
retaining, and delivering care to children in low-income families. It will place particular emphasis on
understanding enrollee experiences in getting care and the types of services received, as well as how
CHIP compares with other public and private coverage. Using a mixture of quantitative and
qualitative research methods, the evaluation will document how CHIP programs have developed,
where they stand today, and where they may be headed in the future. It will draw on new primary
data collection efforts modeled after the previous evaluation, including surveys of enrollees and
disenrollees in CHIP (10 states) and Medicaid (3 states), site visits and focus groups in the 10 survey
states, and a survey of program administrators in every state. To analyze states’ progress in enrolling
and retaining children and to document effective policies and practices, the evaluation will also make
use of various secondary data sources, including annual reports, other program data states submit to
the Centers for Medicare & Medicaid Services (CMS), and administrative data files from state
eligibility and enrollment systems. It also will tap data from other national surveys to understand
how CHIP and Medicaid are perceived by low-income families with uninsured children who may be
eligible and to gauge the extent to which CHIP is reducing the share of low-income children who
are uninsured.
Structure of the Report. In the remainder of this report we describe the approach we will use
to address a broad range of questions. We describe how we plan to collect the primary and
secondary data needed for the evaluation and our approach to analyzing these data. Our goal is to
provide enough detail so that our plans are clear while also recognizing that some of the particulars,
especially regarding the analysis, will take shape after we collect the data and assess the type of
analyses they can support.
In Chapter II, we introduce a conceptual framework, summarize the main research questions
and data sources, describe the core analytic components, and highlight how we plan to address some
of the challenges we expect to face in the evaluation. In Chapter III, we describe the state selection
process and steps we will take to secure their participation and acquire the data needed for the
evaluation.
The next six chapters are the core of the report, presenting our plans for collecting and
analyzing various types of primary and secondary data. We describe in Chapter IV how we will
support ASPE in securing clearance from the Office of Management and Budget (OMB) for the
three primary data collection efforts: case studies, the survey of enrollees and disenrollees, and the
survey of state program administrators. We then describe our approach for conducting the main
qualitative data components: case studies involving site visits and focus groups in 10 states (Chapter
V) and a telephone survey of CHIP administrators in every state (Chapter VI). In Chapter VII, we
cover all aspects of the survey of enrollees and disenrollees: the sample design and sampling process,
developing the survey instrument, fielding the survey, and conducting the core components of the
analysis. In the next two chapters, we focus on key secondary data components. We describe in
Chapter VIII how we will use annual reports and other program data from states, as well as detailed
administrative data from state eligibility and enrollment systems, to analyze enrollment and retention
trends and dynamics and state efforts to influence these outcomes. In Chapter IX, we present the
plan for analyzing data from the 2011 National Survey of Children’s Health to gain insight into why
some low-income families with uninsured children chose not to enroll or remain enrolled in
Medicaid or CHIP.
The report concludes with a discussion of the reports that will be produced under the
evaluation (Chapter X).


II. OVERVIEW OF THE DESIGN
While the evaluation consists of more than a dozen tasks, it is perhaps more easily thought of as
a set of five coordinated components with findings that will be integrated to address a large number
of overlapping research questions:
1. The most ambitious component involves the design, administration, and analysis of data
from a major survey of CHIP enrollees and disenrollees to be conducted in 10
carefully chosen states. Administered to the parents or guardians of children with current
or recent CHIP coverage, the study will address questions that cannot be examined
satisfactorily from existing data. The survey will provide a critical source of information
on the demographic and socioeconomic characteristics of CHIP children and their
families; perceptions of and experiences with application and renewal processes; the
health status and health care needs of CHIP enrollees; enrollee experiences with
accessing health care; and satisfaction with the program. A complementary survey of
Medicaid enrollees, administered in 3 of the 10 CHIP survey states, will extend
findings on these and other questions to the children and families enrolled in Medicaid.
2. A second major component involves the design, execution, and analysis of qualitative
data from CHIP case studies in the same 10 states selected for the survey. Featuring
site visits to various state and local stakeholders (such as program administrators,
providers, and child advocates) and focus groups with families of CHIP-enrolled
children, these studies likewise will address many questions that cannot be explored well
through existing data. Examples include understanding perceptions of CHIP in the
selected states, the barriers eligible families may experience when enrolling in the
program or accessing health care, the extent to which CHIPRA has changed the
programs’ design or administration, and the likely ramifications of health care reform.
3. The last component to feature primary data is a survey of CHIP program
administrators conducted in all 50 states and the District of Columbia; this component
also involves the design, execution, and analysis of data. Reprising a similar survey
conducted as part of the original CHIP evaluation, the survey of program administrators
will focus on providing context for many of the questions examined through the case
studies, helping us to interpret findings from a national perspective.
4. The fourth component will make use of state program data—CHIP annual reports and
related data submitted by states, as well as administrative data from state eligibility and
enrollment systems—to analyze enrollment and retention trends and dynamics and
identify program features and other factors influencing these outcomes. We will explore
enrollment and retention trends, including transitions between CHIP and other coverage
and trends in churning out of and into the program. Using information from the case
studies and other program documents, we will investigate how state-specific factors,
such as innovative outreach practices and enrollment and retention policies, affect the
rates and patterns observed in these data.
5. Drawing on data from several national surveys (the NSCH module of the State and
Local Area Integrated Telephone Survey [SLAITS], the Current Population Survey [CPS], and the American Community Survey [ACS]), we will estimate
program participation rates, explore how low-income families with uninsured children


perceive CHIP and Medicaid, and determine the implications of health reform
provisions for the larger population of families with uninsured children.
Each of these components will yield findings that will be captured in source-specific reports
released over the course of the evaluation. Despite their seeming independence, however, the design
and execution of the different components will be closely coordinated. For example, we will
coordinate instrument development for the stakeholder interviews conducted as part of the case
studies with the discussion guide for the CHIP survey of program administrators to ensure that we
address common research questions as completely and consistently as possible. Likewise, we will
coordinate the instrument development for the CHIP survey with the moderator guides for the
focus groups. Moreover, the findings from the source-specific reports will be synthesized into two
major reports. The first will be a 2011 evaluation report that will include findings from the analysis
of state program reports and other secondary data. The more comprehensive 2013 evaluation
report will integrate findings and lessons from all of the study components to address the full range
of research questions effectively. Details regarding the contents of these reports (and any alternatives
to the source-specific reports we may want to consider) will be discussed and refined during the first
year of the evaluation.
In the remainder of this chapter, we introduce the conceptual framework guiding our evaluation
approach, the major research questions and analytic methods we will use to address them, and our
plans for addressing several overarching challenges we will face during the evaluation.
A. Conceptual Framework
The conceptual framework guiding decisions on the design and execution of the major
evaluation components is shown in Figure II.1. The framework illustrates the process by which
CHIP contributes to the health and well-being of eligible low-income children. Several important
factors may mediate the effects of CHIP (Box B), including the state and federal program contexts
(left side) and the design of the program in a given state (right side)—all of which must be
considered carefully in the evaluation. Examples of contextual factors at the state level include the
demographic characteristics of the target population, the baseline rate of uninsured children,
Medicaid eligibility policies, and the structures of private insurance markets and health care delivery
systems. At the federal level, contextual factors include the implementation of significant health care
reform provisions, such as the individual mandate and the movement of many low-income citizens
into Medicaid. Changes such as the individual mandate could result in a large influx of first-time
applicants to CHIP as parents pursue coverage for their children.
Program design features (right side of Box B) long have been recognized as a major potential
influence on CHIP enrollment and service delivery. Examples of these features include program
model, outreach approaches, eligibility determination and redetermination processes, benefit design,
delivery system, and premiums and other cost-sharing. The flexibility afforded by CHIPRA only
adds to this list of potential design features (such as coverage of recent immigrants) and the variation
in features across states.
Child and family characteristics are also important mediating factors. Prior experiences with
Medicaid and CHIP and with health care more broadly, health status, age, race/ethnicity and cultural
background, and socioeconomic status are among the characteristics that may influence enrollment
and service use experiences and outcomes.

Figure II.1. Conceptual Framework for the Evaluation of CHIP

To understand the role that these mediators may have in CHIP’s success, the evaluation will
focus considerable attention on their linkages to important intermediate outcomes (Box C),
including participation and uninsurance rates, patterns of program enrollment and retention, access
to health care, and quality of and satisfaction with care. For example, various outcomes pertaining to
health care access—such as the likelihood of having a usual source of care, the use of health care
services, and levels of unmet need—all may be affected by the backgrounds and experiences of
CHIP enrollees and/or the features of their state programs. Understanding these and other linkages
in turn forms the basis for assessing not merely whether CHIP is effective but also how and for
whom it is most effective, thereby greatly advancing our understanding of how well the program is
achieving its ultimate goal: the improved health of low-income children (Box D).
B. Analytic Approach: Key Questions and Methods
Our approach will combine data from many sources to address a broad range of research
questions (see Table II.1). As shown in the table, these research questions cluster into seven often
interrelated topic areas: (1) program context and design features; (2) outreach and enrollment; (3)
retention and disenrollment; (4) access, utilization, content of care, and satisfaction; (5) the
relationship between CHIP and other coverage; (6) effects on the uninsured; and (7) implications for
health reform. As further shown in the table, our investigation of these questions often will feature a
mix of qualitative and quantitative data sources, yielding a “mixed-methods” approach to addressing
many questions that will improve the depth, rigor, and generalizability of our findings. For some
questions, we will rely primarily on qualitative information and analysis, while for others the primary
approach will incorporate quantitative data and methods. Most often, the two types of data and
analyses will complement one another so that the final results will benefit from the specificity and
rigor associated with quantitative methods and the explanatory richness and contextual value of the
qualitative work.
Below, we briefly summarize our plans for analyzing research questions within each topic area.
Details on how we will conduct the various analyses to inform these different topics are contained in
Chapters V through IX.
Program Context and Design Features. A thorough understanding of the design features of
state CHIP programs, and the context within which they operate, is vital for assessing their influence
as mediating factors in several analyses, encompassing the experiences of both CHIP enrollees and
children eligible for CHIP but not enrolled. The primary data sources we will tap for information on
design features of state programs will include CHIP annual reports submitted to CMS and other
national data, site visits, and the survey of state program administrators. Questions explored in this
area will include the following: What are the key design features of state programs (program model;
eligibility policies; waiting periods and other policies to deter crowd-out of other coverage;
enrollment and renewal policies and practices; benefit packages and cost sharing; delivery systems,
managed care arrangements, provider networks, and payment policies)? How and why have these
features changed over time? How do program design features influence key program outcomes
(enrollment, retention, access, service use, and satisfaction)? What is the current budget picture for
states, and how has the passage of CHIPRA changed the funding debates in each state?
Outreach and Enrollment. CHIPRA provides new funding for state and local agencies to
engage in outreach activities for difficult-to-reach populations, such as minorities and immigrants. In
addition, the law encourages adoption of new processes to streamline enrollment. To understand


which strategies are most effective at promoting enrollment, we will combine findings from analyses
of several data sources, including (1) the focus groups and key informant interviews conducted
through the case studies, (2) the survey of program administrators, (3) data on
application/enrollment experiences from the CHIP survey, (4) enrollment and other administrative
data that may highlight promising activities, and (5) data from SLAITS on the eligible-but-uninsured
population. The analysis will address such questions as: What are effective and ineffective outreach
strategies for CHIP and Medicaid? How have combined CHIP/Medicaid enrollment practices
affected enrollment in both programs? What are the trends in program churning and transitions
between Medicaid and CHIP?
Retention and Disenrollment. ASPE is interested in understanding enrollment and retention
trends and dynamics and why these trends may have changed over time. Of particular concern is
whether there are barriers that prevent low-income children from remaining enrolled in the program
and to what extent CHIP acts as a long-term source of insurance coverage. We will address these
issues by using CHIP (and when available, Medicaid) enrollment/administrative data to measure the
flow of low-income children into and out of the program, combining these measures with qualitative
data from the case studies and the survey of program administrators to understand patterns. Data
from SLAITS will provide insights into why some children disenroll and become uninsured despite still being
eligible. The analyses will address such questions as: Why do children disenroll from CHIP? How
effective are streamlining practices, such as paperless verification or the elimination of in-person
interviews, at improving the retention rate in CHIP? How long do children typically remain enrolled
in CHIP?
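
To give a concrete sense of the enrollment-dynamics measures involved, the sketch below shows one way enrollment spells and their durations might be derived from monthly enrollment records. It is a minimal illustration assuming a simple file layout (one row per child per covered month) and a one-month-gap rule for ending a spell; the field names are hypothetical, not the actual state file formats.

import pandas as pd

# Hypothetical monthly enrollment extract: one row per child per covered month.
# Column names and layout are illustrative; actual state files will differ.
records = pd.DataFrame({
    "child_id": [1, 1, 1, 1, 2, 2, 2],
    "month": pd.to_datetime(["2009-01", "2009-02", "2009-03", "2009-07",
                             "2009-01", "2009-02", "2009-03"]),
}).sort_values(["child_id", "month"])

# Consecutive calendar months are at most 31 days apart, so a larger gap
# means at least one uncovered month and the start of a new spell.
gap = records.groupby("child_id")["month"].diff()
records["spell"] = (gap.isna() | (gap > pd.Timedelta(days=31))).cumsum()

# Spell duration = number of covered months in each continuous spell.
durations = records.groupby(["child_id", "spell"]).size().rename("months")
print(durations)
# child 1: a 3-month spell followed by a 1-month reenrollment;
# child 2: a single 3-month spell (right-censored if still enrolled).

Spell durations computed this way would feed directly into life-table summaries of the kind planned for Table VII.11, with ongoing spells treated as censored observations.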
Access, Utilization, Content of Care, and Satisfaction. CHIP aims to reduce barriers to care
and unmet needs and improve access to needed services and receipt of appropriate preventive and
acute care services. Achieving good quality of care for children requires coordination across multiple
providers and systems, especially for children with special health care needs. Another key issue is
understanding how cost-sharing affects the use of services. To address questions in these areas, we
plan to focus primarily on data from the CHIP surveys, supplemented with findings from the case
studies and the survey of program administrators to help explain the basis for any positive effects
and how any variation in outcomes across states may be linked to program design. Questions
addressed in this analysis include: What experiences do CHIP enrollees have in seeking or obtaining
health care, and how does this compare with their experiences prior to enrollment? How satisfied
are enrollees with CHIP and the health services they receive? What impact does CHIP have on the
type of health care received, the content of care, and family well-being (i.e., financial concerns and
confidence in the ability to obtain needed care)?
Relationship Between CHIP and Other Coverage. The CHIP program is positioned as an
important bridge between Medicaid and private health insurance. To explore the dynamic between
these types of coverage, we will rely on three main data sources/analyses—the CHIP survey, the
Medicaid survey, and the SLAITS data. In addition, data from our case studies will supplement the
findings by providing insights into how CHIP affects family coverage decisions and the basis for any
notable variation in findings across states. The analysis will address such questions as: How has
CHIP altered or factored into the movement of low-income children between public coverage,
private coverage, and uninsurance? Do families view the CHIP program as a long-term or short-term coverage option?


Table II.1. Key Evaluation Questions and Data Sources

Program Context and Design Features

- How do key design features vary across states? What design changes have states made, and why?
  Qualitative data/analyses: CARTS, SEDS, other program data; site visits; survey of program administrators

- How do CHIP benefit packages and delivery system features compare with Medicaid and private coverage?
  Qualitative data/analyses: CARTS, SEDS, other program data; national data sources on Medicaid and private insurance; site visits; survey of program administrators

- What effect do program design features have on key program outcomes (enrollment, retention, access, use, and satisfaction)? Do states with specific program features experience increased enrollment and/or lower rates of uninsurance?
  Qualitative data/analyses: CARTS, SEDS, other program data; site visits; survey of program administrators
  Quantitative data/analyses: CHIP survey; SLAITS; CPS/ACS

- How has the economic downturn affected states? What is the current state budget picture? How has the passage of CHIPRA changed the funding debates in each state? In what ways are states preparing for implementation of national health care reform? How has the enactment of PPACA affected state CHIP programs?
  Qualitative data/analyses: Site visits; survey of program administrators; national data sources on state economic indicators

- How do findings in this area compare with findings from the previous evaluation?
  Qualitative data/analyses: All of the above
  Quantitative data/analyses: All of the above

Outreach and Enrollment

- How do families learn about CHIP and Medicaid? What information is most helpful in their decisions about applying/enrolling? What aspects of the program are most appealing, and what factors influence enrollment decisions?
  Qualitative data/analyses: Site visits; focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey; SLAITS

- What are effective and ineffective outreach strategies for Medicaid and CHIP? How do different outreach strategies affect families’ knowledge of public programs and motivation to enroll?
  Qualitative data/analyses: Site visits; focus groups; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data; SLAITS

- What are the principal barriers to enrollment for Medicaid and CHIP? What role do waiting lists and waiting periods play?
  Qualitative data/analyses: Site visits; focus groups; CARTS, SEDS, other program data; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data; SLAITS

- What policies and practices are states employing to improve enrollment outcomes? What strategies are used for specific populations, such as children with special needs, racial/ethnic minorities, and children in immigrant families?
  Qualitative data/analyses: Site visits; CARTS, SEDS, other program data; survey of program administrators

- What are the trends in CHIP enrollment, Medicaid enrollment, and enrollment in public coverage overall for the study states? How do trends differ across states? To what extent are trends driven by changes in new enrollment versus changes in disenrollment/retention?
  Qualitative data/analyses: CARTS, SEDS, other program data
  Quantitative data/analyses: Enrollment/admin data

- What are the trends in program churning and transitions between Medicaid and CHIP? How do these vary across states? What effect do these have on enrollment in public coverage?
  Qualitative data/analyses: CARTS, SEDS, other program data
  Quantitative data/analyses: Enrollment/admin data

- In states that are more successful in enrolling eligible children in Medicaid and CHIP, what practices make them more successful? If other states adopt these practices, are they likely to get the same results?
  Qualitative data/analyses: Site visits; focus groups; CARTS, SEDS, other program data; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How do premiums, cost-sharing, and other program design features influence enrollment outcomes?
  Qualitative data/analyses: Site visits; focus groups; CARTS, SEDS, other program data
  Quantitative data/analyses: Enrollment/admin data

- How does coordination (or lack of coordination) between Medicaid and CHIP affect the enrollment of children in both programs?
  Qualitative data/analyses: Site visits; CARTS, SEDS, other program data; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data; SLAITS

- What are the impacts of state budget constraints and maintenance-of-effort requirements on the level of state outreach and enrollment efforts?
  Qualitative data/analyses: Site visits; survey of program administrators
  Quantitative data/analyses: Enrollment/admin data

- How do outreach and enrollment findings compare with findings from the previous evaluation?
  Qualitative data/analyses: All of the above
  Quantitative data/analyses: All of the above

Retention and Disenrollment

- How do families learn about program renewal requirements and procedures? What are their experiences with the renewal process?
  Qualitative data/analyses: Site visits; focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey; SLAITS

- Why do children exit the program? To what extent are exits intended/voluntary versus unintended?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How long do children remain enrolled? How does this vary across states? What policies and practices seem to influence enrollment duration?
  Qualitative data/analyses: Site visits; focus groups; CARTS, SEDS, other program data
  Quantitative data/analyses: CHIP survey; enrollment/admin data; SLAITS

- What portion of children exiting to uninsured status may still be eligible for CHIP or Medicaid? What portion returns to the program after a spell of disenrollment?
  Qualitative data/analyses: Site visits; CARTS, SEDS, other program data
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How do premiums, cost-sharing, and other program design features influence retention outcomes?
  Qualitative data/analyses: Site visits; focus groups; CARTS, SEDS, other program data
  Quantitative data/analyses: Enrollment/admin data

- What are more and less effective retention practices for Medicaid and CHIP?
  Qualitative data/analyses: Site visits; focus groups; CARTS, SEDS, other program data; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How do retention and disenrollment findings compare with findings from the previous evaluation?
  Qualitative data/analyses: All of the above
  Quantitative data/analyses: All of the above

Access, Utilization, Content of Care, and Satisfaction

- What experiences do enrollees have in seeking and obtaining health care? Have they had difficulties in finding a doctor or dentist? Have they been able to get timely appointments? How do these experiences compare with their experiences before enrollment?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- Where do enrollees usually access care? Do they have a usual source of care?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How adequate are provider networks in meeting the needs of enrollees?
  Qualitative data/analyses: Site visits; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey

- What types of services do enrollees receive? To what extent does the care received include recommended preventive care screenings, guidance, immunizations, and other services?
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How well does the process of care align with the core principles of a patient-centered medical home?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How well are providers communicating with families?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How do cost-sharing and other benefit design features affect access and use?
  Qualitative data/analyses: Site visits; focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How do the costs incurred by families compare with other coverage the child may have had before, or to which they currently have access?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- What unmet health care needs do children have while enrolled? Are costs a factor?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How has the program affected family well-being (financial burden and confidence that their child’s health care needs will be met)?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How satisfied are families with the health services received and with the program overall?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- What impact does CHIP have on access, use, content of care, and satisfaction?
  Quantitative data/analyses: CHIP survey

- How do findings in this area compare with findings from the previous evaluation?
  Qualitative data/analyses: All of the above
  Quantitative data/analyses: All of the above

Relationship Between CHIP and Other Coverage

- What type of coverage do children have prior to enrollment and after disenrolling? How long do they have that coverage and why do they lose it?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- For those uninsured prior to enrolling, how long were they uninsured? Was this influenced by CHIP waiting period policies?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How does the coverage children have before enrolling and after they exit compare with coverage under CHIP? What are the major differences in covered services and costs?
  Qualitative data/analyses: Site visits; focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- To what extent is CHIP substituting for (crowding out) private coverage? What share of new enrollees was uninsured prior to enrolling?
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How has CHIP affected the Medicaid program (e.g., structure, scope, enrollee perceptions, relationship with other coverage)?
  Qualitative data/analyses: Site visits; focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

- What share of CHIP enrollees has private coverage prior to enrolling? What share has access to private coverage while enrolled? How does that vary with program design/crowd-out policies?
  Qualitative data/analyses: Site visits; focus groups; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey

- How has CHIP altered or factored into the movement of low-income children between public coverage, private coverage, and uninsurance?
  Qualitative data/analyses: Site visits; survey of program administrators
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- Does CHIP serve as a short- or long-term coverage approach for low-income children?
  Qualitative data/analyses: Site visits; CARTS, SEDS, other program data
  Quantitative data/analyses: CHIP survey; enrollment/admin data

- Are children making seamless transitions from CHIP to Medicaid and vice versa? What policies are in place to promote these transitions? What improvements could be made?
  Qualitative data/analyses: Site visits
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How does the role of public coverage for low-income children vary from state to state? How has CHIP affected this dynamic?
  Qualitative data/analyses: Site visits
  Quantitative data/analyses: CHIP survey; Medicaid survey; enrollment/admin data

- How do findings in this area compare with findings from the previous evaluation?
  Qualitative data/analyses: All of the above
  Quantitative data/analyses: All of the above

Effects on the Uninsured

- What effect has CHIP had on the rate of health insurance among low-income children?
  Quantitative data/analyses: Enrollment/admin data; CPS, ACS

- How well are states covering children in specified target groups?
  Qualitative data/analyses: Site visits
  Quantitative data/analyses: Enrollment/admin data; CPS, ACS

Implications for Health Reform

- What lessons from CHIP are most applicable to health reform?
  Qualitative data/analyses: Site visits; survey of program administrators

- How has PPACA affected state programs, and what future changes are expected?
  Qualitative data/analyses: Site visits; survey of program administrators

- How are families of CHIP enrollees likely to respond to coverage options introduced through health reform? How important are different plan/coverage features in their health insurance decisions?
  Qualitative data/analyses: Focus groups
  Quantitative data/analyses: CHIP survey; Medicaid survey

Notes: CARTS = CHIP Annual Report Template System; SEDS = Statistical Enrollment Data System; CPS = Current Population Survey; ACS = American Community Survey; SLAITS = State and Local Area Integrated Telephone Survey.

Impact on Uninsured Children. A central objective of CHIP is to provide insurance coverage
to low-income children who are not eligible for Medicaid and do not have other insurance. ASPE is
particularly interested in assessing what impact the CHIP program is having on the uninsured rate
for low-income children, how this varies from state to state, and how well states are reaching their
targeted populations. To inform ASPE about this issue, we will draw on analyses of the CPS and
ACS, along with national sources of program enrollment data, to estimate participation rates among
eligible low-income children. We will supplement these data with enrollment data from states and
CHIP survey data to examine this issue more closely in the 10 targeted states. As with several other
analyses, qualitative data from the case studies and CHIP program administrator survey will help us
interpret findings and provide a qualitative assessment of this vital matter. The analysis will inform
such questions as: What are the implications of setting eligibility at higher levels to target (uninsured)
children?
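
To make the participation-rate analysis concrete, a minimal sketch follows, assuming one common definition of the rate (enrolled children divided by enrolled plus eligible-but-uninsured children). The state labels, counts, and the participation_rate helper are hypothetical placeholders for illustration, not estimates drawn from the CPS, ACS, or state enrollment data:

def participation_rate(enrolled: int, eligible_uninsured: int) -> float:
    """Share of eligible children enrolled: enrolled / (enrolled + eligible uninsured)."""
    return enrolled / (enrolled + eligible_uninsured)

# Hypothetical state-level counts of CHIP-eligible children (placeholders only,
# not estimates from any of the data sources named above).
hypothetical_counts = {
    "State A": (520_000, 95_000),   # (enrolled, eligible but uninsured)
    "State B": (130_000, 60_000),
}

for name, (enrolled, uninsured) in hypothetical_counts.items():
    print(f"{name}: participation rate = {participation_rate(enrolled, uninsured):.1%}")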


Implications for Health Reform. The passage of health reform legislation in early 2010
substantially changed the context for this evaluation. ASPE now must gather information to help
inform the role CHIP will play in an environment with broader Medicaid enrollment and a mandate
for coverage supported by state-based exchanges for purchasing private insurance and facilitating
enrollment in public coverage. Learning all we can about family coverage preferences and about parents' ability to navigate the insurance market is an important first step for predicting future CHIP enrollment and easing the transition from one program to another. We will rely mainly on a combination of CHIP survey and focus group data to address such questions as: Do parents prefer to have everyone in the family under the same coverage (CHIP or ESI)? What do parents know about purchasing coverage in the health insurance market and through such mechanisms as exchanges?
C. Addressing Key Challenges
Our ability to rigorously address many of the research questions summarized in Table II.1 will depend in large part on how well we meet several important analytic challenges. Our mixed-methods approach, which combines different data sources to tackle individual questions, is one of the central ways we address these challenges. Four are of particular concern:
1. Drawing Causal Connections: CHIP Impacts. To draw causal inferences about the
effects of CHIP (for example, on the rate of uninsured children and the health care
outcomes of those it serves), we must have a valid (counterfactual) measure of what
the outcomes would have been in the absence of the program. Lacking any
opportunity for random assignment, this evaluation must rely on quasi-experimental
design (QED) methods, which estimate program impacts using a comparison group
that proxies for an experimentally based control group. Fortunately, as we detail in
Chapter VII.E, the methods that we will draw on for these "CHIP impact" questions
have previously produced credible estimates of the effects of coverage programs. For
example, to measure CHIP's impact on children's health care access, use, and other
outcomes, we will follow the design successfully adopted for the prior CHIP
evaluation; in this case, the pre-coverage outcomes of new enrollees (on the CHIP
survey) can serve as a credible counterfactual for the outcomes of more established
enrollees. As in prior studies, this and other causal analyses will use a series of
sensitivity tests to determine the robustness of our estimates and will incorporate
qualitative data to further explore the validity of the findings. While important for all
impact studies, these added steps are essential when implementing QED designs.
2. Drawing Causal Connections: Impacts of Program Design Features. Given the
large number of program design features that may affect the outcomes of CHIP enrollees
and disenrollees, we most likely cannot isolate their individual impacts with a ten-state
study sample. A possible exception is design features that vary among
enrollees within states, such as premiums and co-payments that (if imposed) often vary
by household income. Such variation would allow us to introduce state fixed effects
into models that estimate these features' impacts, thereby accounting for state-specific
factors that could not otherwise be accounted for in a ten-state analysis (a minimal
illustration of such a model appears after this list). Assessing the impact
of even these features through formal causal models remains challenging and
uncertain, however, as there must also be sufficient variation in the program design
features across states to distinguish these design effects from the effects of family
income.

This limitation in assessing the causal impact of program design features applied to
the prior CHIP evaluation as well; as an alternative, we often explored their possible
influence through a descriptive, mixed-methods approach, drawing on the survey and site
visit data to look for linkages between enrollee outcomes and the adoption of different
program design features. In some instances, this analysis yielded findings that were quite
robust, though it is not possible to know in advance when such findings will emerge.
For example, findings from the prior evaluation offered substantial evidence that the
adoption of the S-CHIP model was associated with greater disruptions in coverage
among children disenrolling from CHIP, even though the evidence was largely descriptive.
In the current study, we will continue to explore these kinds of linkages, drawing on
the range of both qualitative and quantitative data to assess the extent to which program
design may be contributing to differences in the outcomes for CHIP children within
and across states.
3. Integrating Findings. The intent to combine data across many sources to inform most
questions can be both a strength and a challenge. To be meaningful, such integration
should begin in the design phase of the project; we have done so, and will continue to
coordinate design aspects as the project unfolds. Fortunately, the project team has
experience with such coordination, having teamed successfully in the prior CHIP
evaluation, which had many of the same features and complexities. We thus are
confident of achieving a high level of coordination on the proposed evaluation, thereby
enhancing research results and minimizing redundancies.
4. Generalizing Findings. As in the prior CHIP evaluation, a major focus is placed on ten
diverse but purposefully chosen states that together represent more than half of all CHIP
enrollees nationwide. Findings from data collected in these states (through site visits and
a major household survey) will provide a rich understanding of the policy context and
the experiences of the CHIP families that reside in them. Generalizing these findings to
all states must naturally be done with caution, as each state's program, target population,
and context are distinct. Nevertheless, based on the prior evaluation, we
anticipate that many important findings from the ten study states can be generalized with
credibility, in large part because we expect findings to be largely consistent across the
states. In addition, drawing on data from interviews with CHIP administrators in all 50
states and from SEDS/CARTS, we will be able to provide a profile of CHIP
nationwide, including its program characteristics, policy context, and patterns of
enrollment across all states. Through this profile, we will gain an understanding of how
the 10 focal study states compare with states nationally, providing a strong foundation
for assessing how our survey findings generalize to states outside the study.
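
To illustrate the state fixed-effects strategy described in point 2 above, the following minimal sketch (in Python) fits a linear probability model to simulated enrollee records. The variable names, simulated data, and effect sizes are assumptions made purely for illustration; they do not represent the evaluation's actual specification, its data, or any real state:

# Minimal sketch, assuming simulated data: a linear probability model in which
# state fixed effects, C(state), absorb state-specific factors so that the
# premium coefficient is identified from within-state variation across income bands.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
state = rng.choice(["A", "B", "C", "D"], size=n)
income_fpl = rng.uniform(50, 300, size=n)          # household income as a percent of FPL
premium = np.where(income_fpl > 150, 20.0, 0.0)    # premium imposed only above 150% FPL
premium += np.vectorize({"A": 0.0, "B": 5.0, "C": 10.0, "D": 0.0}.get)(state)
state_shift = np.vectorize({"A": 0.00, "B": 0.05, "C": -0.03, "D": 0.02}.get)(state)

# Hypothetical data-generating process: premiums modestly raise the probability
# of a coverage disruption, on top of a state-specific baseline.
p_disruption = np.clip(0.10 + 0.002 * premium + state_shift, 0.0, 1.0)
disruption = rng.binomial(1, p_disruption)

df = pd.DataFrame({"disruption": disruption, "premium": premium,
                   "income_fpl": income_fpl, "state": state})

# State fixed effects enter via C(state); family income enters directly to help
# separate the premium effect from the effect of income itself.
model = smf.ols("disruption ~ premium + income_fpl + C(state)", data=df).fit()
print(model.params)

In the actual analysis, any such model would be fit to the CHIP survey and administrative data and subjected to the sensitivity tests described in point 1.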


III. STATE SELECTION AND PARTICIPATION
A. State Selection
Our choice of states was guided first by the legislation authorizing the procurement for this
evaluation, which specifies that the 10 states chosen must (1) utilize diverse approaches to providing
child health assistance, (2) represent various geographic areas (including a mixture of rural and urban
areas), and (3) each contain a significant portion of uncovered children. In addition to these criteria,
with ASPE’s input we developed a robust list of criteria for selecting states and a set of decision
rules for applying them (Table III.1). We grouped these criteria into three stages:
1. Stage I includes criteria that are vital to informing policy and so must be met in
selecting the 10 states. Three of these criteria (noted above) come directly from the
federal legislation that authorized this evaluation.
2. Stage II includes additional policy-relevant criteria that should be considered in selecting
the 10 states, such as eligibility rules and other state program features.
3. Stage III includes two practical criteria that states must meet to be enrolled/selected for
the study: that a state’s data system(s) can support the evaluation’s needs, and that a state
is willing to participate.
1. Ten States Selected to Participate

We applied the Stage I and Stage II selection criteria sequentially, which produced a primary list of 10 states:3 (1) Texas, (2) California, (3) Florida, (4) Ohio, (5) Alabama, (6) Louisiana, (7) New York, (8) Wisconsin, (9) Utah, and (10) Virginia. These states meet all of the Stage I and Stage II selection criteria described in Table III.1. After ASPE sends letters to each state encouraging their participation, we will begin contacting states. If the letters from ASPE are sent out by the end of April, we would hope to have signed Memorandum of Understanding (MOU) documents (either with these states or substitutes agreed upon by Mathematica and ASPE, discussed below) by July 1, 2011. Table III.2 provides more details about how each state, and the states in combination, satisfy the Stage I selection criteria. Highlights include:
• Size of the Target Population. The 10 states together include 53 percent of the overall target population and represent 2.6 million children eligible for, but not enrolled in, CHIP, as measured by the share of uninsured children with family income below 200 percent of the federal poverty level (Lynch et al. 2010). Table III.3 provides more detail on the income distribution of uninsured children under 400 percent of the poverty level. As the table shows, in all of the selected states nearly two-thirds of uninsured children have incomes under 200 percent of the FPL (the upper income limit for CHIP in 4 of the 10 states). A smaller percentage of uninsured children have incomes over 250 percent of the FPL. During the site visits we will investigate states' outreach strategies for reaching higher-income children in states where they are eligible for CHIP (for example, Alabama, New York, and Wisconsin have income thresholds of 300 percent or higher).

3 A more detailed discussion of the process used to select the states is provided in Appendix B to this report.


Table III.1. Criteria for Selecting States for the CHIPRA 10-State Evaluation

Criteria and Rationale / Proposed Decision Rule(s) for Meeting Criteria

Stage I: Primary Selection Criteria (must be satisfied)

1. Program type: Legislation specifies the importance of selecting states with diverse approaches to providing coverage.
   Decision rule(s): Selected states should approximate the national distribution on program type:
   a) 2 or 3 states with Medicaid expansions only (either pure M-CHIP programs or Combination programs with more than 80 percent of enrollees in the M-CHIP program)
   b) 5 or 6 states with separate programs (either pure S-CHIP programs or Combination programs with more than 80 percent of enrollees in the S-CHIP program)
   c) 1 or 2 states with Combination programs with enrollment relatively evenly divided between M-CHIP and S-CHIP

2. Size of the uninsured population: Legislation specifies that selected states should contain a significant portion of children not covered.
   Decision rule(s): Selected states include:
   a) at least 50 percent of the nation's low-income uninsured children
   b) at least 2 states from among the 10 with the highest rates of uninsured children below 200 percent FPL

3. Program size: Larger programs will support generalizing findings at the national level; moderate-sized programs will help generalize findings to more states.
   Decision rule(s): Selected states include:
   a) at least 40 percent of CHIP enrollees nationally
   b) at least 5 states from the top 10 largest programs nationally

4. Program participation and retention: Programs with varied success in enrolling and retaining eligible children can improve generalization of findings and provide a basis to compare and contrast state experiences.
   Decision rule(s): Selected states include:
   a) at least 2 in the top and bottom quartiles in estimated participation rate
   b) those meeting at least one of the four subcriteria for "best practices" in enrollment/retention (see Stage II.1 below)
   c) at least 2 states that report their S-CHIP enrollment in MSIS

5. Geographic characteristics: Legislation specifies the need to represent various geographic areas (including a mix of more rural and more urban states, and variation in race/ethnicity).
   Decision rule(s): Selected states include:
   a) at least 2 states where at least 25 percent of the population lives in an urban area and at least 2 states where at least 25 percent lives in a rural area
   b) at least one state from every Census region
   c) at least 7 in the top half of states in percentage non-white; at least 3 in the top quartile in percentage Hispanic; at least 3 in the top quartile in percentage African American

Stage II: Secondary Selection Criteria (will be satisfied in proposed order of priority)

1. Best practices for enrollment and retention: Inclusion of states with different policies and procedures for enrolling and retaining eligible children can help link the impact of these various approaches on enrollment and reductions in the number of uninsured children.
   Decision rule(s): Selected states include:
   a) at least 2 with (separate) program components that have integrated their Medicaid and CHIP eligibility systems
   b) at least 2 that have received CHIPRA bonus payments
   c) at least 2 that have adopted ELE and 2 that have adopted SSA matching
   d) at least 2 that do not satisfy a through c above

2. Cost sharing: Inclusion of states with different cost-sharing approaches can help inform about the impact on access, use, and other key health care outcomes.
   Decision rule(s): Selected states include:
   a) at least 2 states that charge premiums and at least 2 that do not charge premiums
   b) at least 2 states that have co-payments and at least 2 that do not have co-payments

3. Delivery system: Including states with different approaches to care delivery can help inform their possible links to access, use, and other key health care outcomes.
   Decision rule(s): Selected states should approximate the national distribution on use of capitation-based managed care arrangements:
   a) at least 2 states enrolling 90 percent or more of the CHIP population in managed care
   b) one state with no managed care enrollment
   c) at least 4 states with a mix of managed care, PCCM, and FFS

4. Program eligibility: Including states with different income eligibility limits, those that use or do not use buy-in programs, and those that include or exclude parents in their CHIP programs can help inform about the effects on take-up of offers of health insurance.
   Decision rule(s): Selected states include:
   a) at least 2 with income eligibility limits above 300 percent FPL and at least 2 with income eligibility limits below 200 percent FPL
   b) both those that have and do not have buy-in programs
   c) at least 2 states with an adult/parent CHIP expansion

5. Participation in other key research: Opportunities to leverage findings from other studies.
   Decision rule(s): Selected states include:
   a) at least 4 that participated in the prior CHIP evaluation
   b) at least 2 that received CHIPRA quality grants (and are the focus of the evaluation of those grants)
   c) at least 2 that are participating in the Maximizing Enrollment for Kids program and evaluation

Stage III: Screening Criteria (must be satisfied for final selection)

1. Sufficient capability of state data systems: State data systems must be able to provide accurate, complete, and timely data for survey sampling.
   Decision rule(s): Qualitative assessment by the study team as to whether the criterion is met. (Note that ready access to Medicaid data will be part of the assessment and could affect whether the criterion is met.)

2. Willingness of state to participate: State cooperation is essential to ensuring accurate, complete, and timely data for survey sampling.
   Decision rule(s): Signed MOU with the state that specifies the roles and responsibilities of both state staff and evaluation team members.

Note: CHIP = Children's Health Insurance Program; CHIPRA = Children's Health Insurance Program Reauthorization Act; ELE = express lane eligibility; FFS = fee for service; FPL = federal poverty level; M-CHIP = Medicaid expansion CHIP program; MOU = memorandum of understanding; MSIS = Medicaid Statistical Information System; PCCM = primary care case management; S-CHIP = separate CHIP program; SSA = Social Security Administration.

• Size of the CHIP Program. The 10 states together include an estimated 2.8 million
children, or roughly 57 percent of children nationwide enrolled in CHIP as of June 2009
(Kaiser Family Foundation 2010). Half of the states selected are from the top 10 largest
CHIP programs in the nation.


• Program Model. The 10 selected states are distributed across program models as follows:
o 6 states operate Separate CHIP (S-CHIP) models (either pure S-CHIP programs or Combination model programs with more than 80 percent of enrollees in the S-CHIP portion of the program).
o 2 states operate Medicaid-expansion (M-CHIP) models (either pure M-CHIP programs or Combination model programs with more than 80 percent of enrollees in the M-CHIP portion of the program).
o 2 states operate true Combination (Combo) programs, with a relatively even split between M-CHIP and S-CHIP enrollment.
• Regional and Urban/Rural Representation. The regional distribution of the 10 states
is as follows: South, 5 states; West, 2 states; Midwest, 2 states; Northeast, 1 state. In 2 of
the states (Alabama and Wisconsin), at least 25 percent of child residents live in a rural
area; in all 10 states, at least 25 percent of child residents live in an urban area.
• Program Participation and Retention. Among the 10 states, 3 are in the top quartile
in terms of Medicaid/CHIP participation rates (in all 3 states, more than 85 percent of
their eligible population participates). Three are in the bottom quartile of participation
rates (with rates below 75 percent). Several have implemented enrollment and retention
best practices: 2 have received CHIPRA bonus payments; 2 have adopted express lane
eligibility; 6 have adopted SSA matching.
• Population Characteristics. Seven of the 10 states are in the top half of states ranked
by the percentage of non-white child residents (they range from 38 percent to 68 percent
non-white child residents). Four of the 10 states are in the top quartile of states for
percentage of Hispanic child residents, and 4 are in the top quartile of states for
percentage of African American child residents.
We will next apply the Stage III selection criteria to assess state data capabilities and willingness to participate. To assess whether state data systems can support the sampling and analysis needs of the evaluation, we will draw on existing expertise from other projects involving many of these states.4 For other states, we will conduct brief screening calls with state staff to learn more about those aspects of their data systems most essential to the evaluation. While we will include an assessment of whether we would be able to link Medicaid and CHIP data to identify transitions between the two programs, we recognize that some states will need to remain under consideration even if this capability is lacking.5

4 Mathematica has considerable knowledge of state data system capabilities for five of the states: Alabama, Louisiana, New York, Wisconsin, and Virginia. Texas, California, and Florida were included in the previous evaluation, and we have experience from other projects in working with California's data. Our existing knowledge of data system capabilities in Ohio and Utah is more limited.

5 Larger states, such as California, Texas, and Florida, cannot be ruled out on this basis without sacrificing the study's ability to produce findings that represent a majority of the CHIP population (both enrolled and eligible but unenrolled).

Table III.2. Primary Selection Characteristics of the Ten Selected States

Stage I: Primary Selection Criteria (Must be Satisfied)

State         Program Type      Census Region
Texas         S                 South
California    C (S: 82%)        West
Florida       C (S: 99.6%)      South
Ohio          M                 Midwest
Alabama       S                 South
Louisiana     C (M: 97%)        South
New York      S                 Northeast
Utah          S                 West
Virginia      C (mix)           South
Wisconsin     C (mix)           Midwest

The full table also records each state's share of uninsured children under 200% FPL and share of CHIP enrollees nationally, and indicates which states satisfy each of the remaining Stage I decision rules:
2.a. At least 50% share of uninsured children under 200% FPL
2.b. At least 2 of the top 10 states, highest rate of uninsured children
3.a. At least 40% share of CHIP enrollees nationally
3.b. At least 5 states outside top 10, CHIP program size
4.a. At least 2 states each, top and bottom quartile, Medicaid and CHIP participation rate
4.b. At least 2 states that received a CHIPRA bonus payment; at least 2 states with ELE; at least 2 states, SSA matching; at least 2 states that meet none of the other 4.b. criteria
4.c. At least 2 states reporting S-CHIP enrollment in MSIS
5.a. At least 3 states where at least 25% of the population lives in an urban area; at least 3 states where at least 25% of the population lives in a rural area; at least one state from each of the 4 Census Regions
5.b. At least seven states in top half, percent non-white children; at least 3 states in top quartile, percent Hispanic children; at least 3 states, top quartile, percent African American children

Source: Program type data: Centers for Medicare & Medicaid Services 2010.
Uninsured rate among low-income children: Lynch et al. 2010.
CHIP enrollment as of June 2009: Kaiser Family Foundation 2010.
Medicaid and CHIP participation rate: Kenney et al. 2010.
CHIPRA bonus payments: U.S. Department of Health and Human Services 2009.
Express Lane Eligibility information: Families USA 2010.
SSA Matching information: Cohen Ross 2010.
Reporting of S-CHIP data in MSIS: Matthew Hodges, Research Analyst, Mathematica Policy Research, personal communication, November 16, 2010.
Geographic data: U.S. Census Bureau 2010.
Racial and ethnic data: Urban Institute and Kaiser Commission on Medicaid and the Uninsured 2010.


Table III.3. State CHIP Income Limits and Percentage of Uninsured Children at Selected Income Ranges

              CHIP Income                   Percent of uninsured children with incomes
              Limit (% of    Number of      Under   100-    200-    250-    300-    400% FPL
State         the FPL)a      Uninsured      100%    199%    249%    299%    399%    and
                             Children       FPL     FPL     FPL     FPL     FPL     Higher
Texas         200            1,163,000      37.7    32.2    9.9     6.0     7.0     6.8
California    250            1,009,000      39.8    30.7    9.1     6.0     6.0     7.7
Florida       250            712,000        33.7    33.1    10.7    6.7     8.1     7.2
Ohio          200            194,000        36.1    30.9    12.4    5.7     7.2     6.7
Alabama       300            90,000         43.3    28.9    7.8     6.7     6.7     6.7
Louisiana     250            86,000         43.0    17.4    12.8    9.3     8.1     9.3
New York      400            253,000        35.2    25.7    9.9     6.7     7.9     12.6
Wisconsin     300            66,000         33.3    34.8    7.6     6.1     9.1     7.6
Utah          200            107,000        28.0    41.1    9.3     10.3    5.6     5.6
Virginia      200            139,000        39.6    25.9    12.9    6.5     7.2     7.9

Source: Urban Institute analysis of American Community Survey (ACS) 2008 data from the Integrated Public Use Microdata Series (IPUMS), as reported in Lynch et al. 2010; and Heberlein et al. 2011.
Note: Numbers of children are rounded to the nearest thousand; percentages may not total 100% due to rounding.
a In states with combination programs, the limit reported is the higher limit (for example, in Louisiana, the M-CHIP program's income limit is 200% of the FPL, but the S-CHIP program permits children up to 250% of the FPL to enroll, so we report 250%).

2. Replacement Criteria for Any States Unable to Participate

Concurrent with developing the list of states recommended for inclusion, we developed a list of
substitute states, using the Stage I and II criteria. The back-up states we recommend are Colorado,
Nevada, Pennsylvania, Michigan, Kentucky, Maryland, Oklahoma, North Carolina, Oregon, and
Illinois.
If we need to replace one or more states, either because they are unwilling to participate or
because they cannot provide the essential data needed for the evaluation in a timely manner, we will
draw from the list of potential substitute states identified during the state selection process. In
recommending a substitute for a given state, we will look for a replacement with characteristics
similar to the state being replaced, so that we preserve the balance represented in the initial mix of
states to the extent possible. Final decisions about any substitutions will be made in close
consultation with ASPE.
B. Securing State Participation
After government approval of the selected states and an assessment of core data capabilities, we
will contact the states and work to secure their participation. Prior to contacting them, we will work
with ASPE to send a letter to the governors in each state that describes the study and asks for their
support. We then will send a letter to the CHIP/Medicaid directors in the selected states that
describes the study and the state selection process and provides a general description of what will be
involved in participation. After sending the letters, we will contact CHIP/Medicaid directors to set
up a time to discuss the evaluation more fully and address their questions or concerns. Prior to the
calls, we will send a project summary and a table summarizing our data and access needs. Senior

team members will lead these calls. In addition to discussing the evaluation and data requirements,
we will explain how states would be compensated for providing the data and discuss how we plan to
develop a MOU that will confirm in principle the state’s willingness to participate.
Subsequent conversations with state technical staff will work through the details of the data request, the specific features of the state data system, and the procedures for producing and transmitting the required data. These discussions will determine the particular documents and data elements each state will provide, as well as the procedures for transmitting them. A description of the specific arrangements regarding the files to be acquired and the frequency with which they will be transmitted will be attached as an addendum to the signed MOU.
Exhibit III.1 shows a model MOU, which will serve as the starting point for developing a MOU
tailored to each state, and Table III.4 provides a summary of the data required for the study. In
addition to the MOUs, some states may require review by an institutional review board, and we
expect that most states will require a formal data use agreement.
C. Data Acquisition
A critical component of this evaluation is the ability of the state data systems to provide timely
eligibility and enrollment data for use in drawing samples for the surveys of enrollees and
disenrollees. Because these data are complex, we have developed a strategy that aims for the timely
acquisition of the necessary data without excessively burdening state staff. Our data-acquisition
strategy consists of two further steps once the MOU is signed:
1. Assessing the State Data Systems. Based on discussions with state technical staff (as
well as our existing knowledge of some state data systems), we will develop a profile
that summarizes broadly the CHIP and Medicaid data systems, their interrelationship,
and the availability of specific data elements needed for the evaluation. These profiles
will serve as the basis for the development of data use agreements (discussed below),
as well as the preparation of state-specific technical questions regarding the
characteristics of the systems and the availability of specific data elements.
2. Developing Data Use Agreements. Based on a review of the preliminary assessment
of the states’ data capabilities and initial contacts with key program staff in each state, we
will develop a detailed data use agreement for each state, which will serve as an
addendum to the state’s MOU. This agreement will describe in detail the types of data
and access to program staff that the state will provide to Mathematica and the
timeframes within which the data or access will be provided. These combined MOUs
and data use agreements subsequently will be formalized as a subcontract between
Mathematica and each of the 10 selected states.
We will work with the technical staff identified in each state to develop the best approach for
specifying, processing, and transmitting the relevant data files while adhering to the confidentiality
guidelines of each state. Our contacts with technical staff usually will conform to the timeframes that
appear in the data use agreements, although we might need to contact staff occasionally at other
points to follow up on specific issues regarding the structure and content of specific files or other
technical questions.


Exhibit III.1 Illustrative Memorandum of Understanding Template

Memorandum of Understanding, CHIPRA 10-State Evaluation
This memorandum of understanding (MOU) outlines an agreement between the state of [State Name] and Mathematica
Policy Research (Mathematica), regarding the state’s participation in the 2010–2013 Congressionally mandated evaluation
of CHIP being conducted by Mathematica and its subcontractor, The Urban Institute (Urban), for the Office of the
Assistant Secretary for Planning and Evaluation (ASPE) of the U.S. Department of Health and Human Services (HHS). A
description of this evaluation is provided in Attachment A. This MOU describes in general terms the types of data and
access to program staff that the state agrees to provide to Mathematica and/or Urban, the timeframes within which the
data or access will be provided, and the payments that will be made to the state by Mathematica to defray data collection
costs. Table 1 provides a summary of the type of data or access required. The particular documents and data elements
that will be provided by the state, as well as procedures for transmitting the documents and data to Mathematica or
Urban, will be specified by Mathematica in consultation with the state at a later date.
Access to State Staff. The state agrees to give Mathematica and Urban access to CHIP [and Medicaid if applicable]
program staff for (1) assistance between July and October 2011 in selecting local communities and identifying
informants to participate in a site visit, (2) in-person interviews during a one-week site visit between October 2011 and
May 2012, (3) assistance in constructing sample frames for focus groups with parents of CHIP enrollees and
disenrollees, (4) telephone follow-up with participants as necessary between November 2011 and June 2012, and (5)
participation in a one-hour telephone survey of program administrators in [fall 2012].
Program Documents. The state agrees to provide Mathematica and Urban with various existing documents describing
policies, design, implementation, and performance of the state’s CHIP, as requested, between February 2011 and
September 2013.
Data for Survey Sampling and Related Analysis. The state agrees to provide extracts from CHIP [and Medicaid, if
selected for that survey] enrollment files that include data from the application process, contact information, and
eligibility and disenrollment history. These data will be provided up to four times between July 2011 and March 2012:
test data in July 2011, up to three extracts for sampling purposes between September 2011 and March 2012, and a final
extract in summer 2012 for an analysis of enrollment patterns. The final extract will include both CHIP and Medicaid
enrollment history. Mathematica will also need the state’s assistance in acquiring data for locating sample members,
including contact information that may be contained in application or eligibility determination data systems that are
separate from the CHIP enrollment data system. The content and structure of these files will be specified further in
consultation with state technical staff. A detailed schedule of data requests will be developed by Mathematica in
consultation with state technical staff by June 2011 and will be attached to this MOU as an addendum. Table 2 provides
an example of such a schedule. The state agrees to provide Mathematica with access to the appropriate state technical
staff for assistance related to these data requests.
Data Confidentiality. Each party shall protect the confidentiality of information provided by the other party, or to
which the receiving party obtains access by virtue of its performance under this MOU, that either has been identified as
confidential by the disclosing party or by its nature warrants confidential treatment. The receiving party shall use such
information only for the purpose of this MOU and shall not disclose it to anyone except those of its employees who
need to know the information. These nondisclosure obligations shall not apply to information that is or becomes public
through no breach of this MOU; that is received from a third party; or that is required by law, regulation, or subpoena to
be disclosed. Confidential information shall be returned to the disclosing party upon request. Mathematica shall ensure
that all information, records, data, and data elements pertaining to applicants and recipients of public assistance, or to
providers, facilities, and associations, shall be protected by Mathematica, its employees, its subcontractor, and their
employees from unauthorized disclosure pursuant to [42 USC 654(26) and 42 CFR Part 431, Subpart F – confirm and
update as needed].
Payment Schedule. Mathematica agrees to compensate the state for the cost of extracting data from its CHIP
enrollment files as follows: [TBD].
Name and Date (State): _____________________________________________________________
Name and Date (Mathematica): _______________________________________________________


Table III.4. Memorandum of Understanding Summary of Data Required

Analysis of state program data reported to CMS
− Access to state staff as needed to clarify information reported to CMS through CARTS or SEDS. (Frequency: occasional. Timing: Winter/Spring 2011.)

Case studies (site visits; focus groups)
− Assistance in selecting local communities and identifying informants for site visit interviews; access to state staff for in-person interviews. (Frequency: once. Timing: Fall 2011 or Winter 2012.)
− Assistance in constructing sample frames for focus groups with parents of CHIP enrollees and disenrollees. (Frequency: once.)
− Access to state staff for telephone follow-up. (Frequency: as needed.)
− Existing documents describing relevant state policies, program design, implementation experiences, and outcomes or performance. (Frequency: as available. Timing: February 2011 through September 2013.)

Surveys of enrollees and disenrollees (CHIP: 10 states; Medicaid: 3 states)
− Eligibility, enrollment, and disenrollment files that contain application and contact information and historical enrollment and disenrollment data. (Frequency: once for test data. Timing: July 2011.)
− Documentation on file structures, variable definitions and coding, and access to state technical staff for guidance as needed. (Frequency: two times for survey sampling. Timing: Late Summer and Fall 2011.)
− Access to additional data files necessary to support the locating of sample members.

Analysis of enrollment and retention
− CHIP and Medicaid eligibility and enrollment history data for up to 24 months prior to the first extract for survey sampling and 12 months after the final extract for survey sampling; documentation on file structures, variable definitions and coding, and access to state technical staff for guidance as needed. (Frequency: once. Timing: Spring 2012.)


IV. OMB CLEARANCE PROCESS
Overview: Obtaining clearance from the Office of Management and Budget (OMB) ensures
the quality and utility of the data collected by a federal agency and minimizes the public burden
incurred by the collection process. Using HHS’s OMB guidelines, Mathematica will assist ASPE in
navigating the OMB process, preparing submissions, responding to public and OMB questions, and
obtaining clearance for the three data collection components of CHIP10: (1) a quantitative survey of
CHIP enrollees and disenrollees in the selected states (including a Medicaid sample in three states);
(2) qualitative case studies including focus groups in the 10 selected states; and (3) a qualitative
survey to collect contextual data from a census of state CHIP program administrators in 50 states
and the District of Columbia. For efficiency, because the first two data collections will start in close proximity to one another (early fall 2011), we will combine their requests for OMB clearance into a single package (OMB Package #1). OMB Package #2 will be prepared in time for a May 2012 data collection start. Both processes will be monitored by the TOO, an HHS internal reviewer, and a Mathematica quality assurance officer. Below, we outline the schedule for the two OMB submissions and then discuss the content of the packages.
Table IV.1. Schedules for Two OMB Submissions

OMB Process                                        OMB Package #1: Survey of    OMB Package #2: Survey of
                                                   Enrollees & Disenrollees     State Program
                                                   and Case Studies             Administrators
Submit the 60-day Federal Register Notice
(FRN) for TOO review                               1/14/11                      9/29/11
TOO publishes 60-day FRN                           3/28/11                      ~10/7/11
During 60-day public comment period, conduct
survey pretest and submit pretest report           3/28 - 5/27/11               10/10 - 10/24/11
Respond to public comment after 60 days            5/27 - 6/3/11                11/28 - 12/6/11
Submit DRAFT OMB package for HHS review:
includes responses to public comment, final
protocols/survey instruments, pretest memo         6/3/11                       1/10/12
TOO publishes 30-day FRN, submits FINAL
OMB package to OMB                                 6/15/11                      ~1/24/12
OMB review usually takes ~60 days                  8/14/11                      3/21/12
Assist TOO in responding to any OMB
questions                                          8/24/11                      3/28/12
Receive final OMB clearance                        9/7/11                       4/30/12

Starting the OMB Process: 60-Day FRN. For OMB packages #1 and #2, we will assist the
TOO in preparing the 60-day FRNs and developing a preliminary set of supporting documents in
case the public wishes to review them at some time during the public comment period. For OMB
package #1, the documents will include the preliminary supporting statement, the pretest version of
the survey of enrollees and disenrollees, and the preliminary case study protocols. For OMB package
#2, the documents will include the preliminary supporting statement and the pretest version of the
state program administrator survey. We will wait 60 days to receive public comments to the FRNs
and then will assist the TOO in responding to them.

Pretesting Instruments in Preparation for Submitting the Final Packages to OMB. While
waiting to receive public comments, we will conduct pretests of the surveys (the case study
protocols need not be pretested) and will present the two pretest reports with recommendations for
TOO consideration before including them in the draft and final OMB packages to be submitted to
the TOO.
30-Day FRN. Mathematica will assist the TOO in preparing the 30-day FRNs. Based on public
comments and the pretest reports, we will prepare draft and final versions of OMB packages #1 and
#2 for submission to the TOO and internal review at ASPE. For both packages, we will respond
succinctly to OMB’s established Part A and Part B questions using HHS guidelines. Both final OMB
packages will include copies of the federal authorizing legislation, the 60-day and 30-day FRNs,
pretest reports, public comments and responses from the 60-day FRN, and final versions of
instruments and protocols. As OMB prescribes, we will package Submissions Part A and Part B
separately, as they are read by different OMB staff. Both Part A and Part B will include the same
brief study overviews to frame the studies for reviewers.
Receiving Clearance. Once OMB receives the packages, it takes 60 days or often more before
ASPE will learn the outcome of the review (approved, approved with change, disapproved). If
“approved with change” (a not uncommon occurrence), OMB will present questions to ASPE and
Mathematica will assist the TOO in responding to them. Once OMB issues a control number and
expiration date, we can finalize instrument programming and training materials and begin data
collection.
IRB Clearance for the Enrollee and Disenrollee Studies. Because OMB expects that
Institutional Review Board (IRB) approval will be obtained in the same timeframe as OMB
clearance, we here briefly describe the IRB process. The IRB process focuses on ensuring that all
survey materials are understandable by the target population, participation risks and benefits are
stated clearly, confidentiality is assured, and respondents understand they may refuse to respond to
the whole or any part of the survey. Mathematica finds it more efficient to use a single external IRB to review survey instruments and materials seen by respondents rather than seeking approval from ten different state IRBs, each of which may request its own instrument changes. The IRB we normally use employs a standard series of questions (focused on the topics listed above) and reviews the responses, the questionnaires, and all materials seen by respondents. The process usually
takes two to three months and for complex studies such as CHIP10 may involve a presentation to
the IRB in its home office. It is possible, however, that each state may prefer to use its own IRB, in
which case we will work with the TOO to complete the IRB forms for each. Whether CHIP10 must
prepare a single IRB package or multiple ones, we will assist the TOO in the preparation.


V. CASE STUDIES
The qualitative component of this evaluation hinges on our ability to collect information
systematically on a broad range of topics and from a large number of sources, and to organize this
information consistently within an analytical framework that addresses the key research questions of
interest. Document review, case studies, and focus groups will facilitate the characterization of
program implementation and impacts, implications of the Affordable Care Act, and enrollment,
retention, access, and utilization trends. This analysis will allow us to capture and distill the
tremendous variation across state CHIP programs by answering many questions directly, generating
hypotheses about program impacts that can be explored further through the quantitative analysis of
survey and administrative program data, and adding depth to quantitative findings. The three
components of the case studies include:
1. Review of documents, reports and summary materials produced by states, research
studies, national organizations, and others involved in studying CHIP program features
and outcomes
2. In-depth site visits in 10 states
3. Focus groups in the same 10 states with families of children enrolled in CHIP and
representing other key groups of children
In the following sections we discuss the qualitative methods by which data will be collected and
organized, the manner in which findings will be analyzed, and the cross-cutting syntheses we will
develop based on these analyses.
A. Document Review
As a first step in our qualitative assessment, we will draw on the extensive documentation of
CHIP program features, coverage and participation trends, and access and quality impacts that has
been produced over the past decade. Specifically, we will review:
• Annual reports by states to CMS
• State evaluations and policy briefs
• Materials produced by research and policy organizations (including Mathematica and the
Urban Institute)
• Findings generated from new national research studies currently under way
This step will be conducted in conjunction with preparation of the 2011 Report to Congress; as
renegotiated with ASPE, the 2011 report will be based on an analysis of secondary data and
information. Materials gathered and analyzed for the report will also provide a basis for our
preparation for the case studies.
Our document review will allow us to develop an analytic framework of critical CHIP design features, policy variations, and implementation issues. This synthesis will inform our preparation for the site visits and focus groups, help us tailor our interview protocols to explore state-specific issues, and ensure that interview time is used efficiently.


B. Site Visits
Site visits in 10 states will allow the research team to develop an in-depth understanding of
CHIP implementation over the past decade and the effects of recent policy changes. We will inquire
about which program design features have and have not worked, persistent challenges states have
faced, and opportunities upon which they have capitalized. We will consider the implications of
CHIPRA and health reform, and the anticipated benefits and challenges associated with those
developments. Such qualitative findings provide a critical complement to the quantitative
components of this evaluation, allowing for a more nuanced understanding of state experiences as
well as the opportunity to explore the strengths, weaknesses, and effects of varied state contexts and
alternative approaches to ensuring children’s coverage.
Protocol. The interview protocol is a critical tool for conducting high-quality site visits within a
case study framework. A carefully structured protocol permits a range of issues to be discussed in a
consistent and thorough manner across all interviews and sites while also allowing the flexibility for
interesting issues to be considered as they arise. (Table V.1 provides an outline of the core site visit
protocol. Draft site visit protocols and focus group guides are in an attachment to this report.) The
protocols are organized into sections that correspond with the major topic areas for the evaluation,
including:
• Program design features and the rationale behind both initial decisions and changes over
time, including eligibility criteria, outreach and marketing efforts, enrollment and renewal
procedures, screening and enrollment processes, benefit packages, service delivery and
payment arrangements, and initiatives for special populations, such as racial and ethnic
minorities and children with special health care needs
• Participation trends over time
• Enrollment successes and challenges, including the impacts of outreach initiatives and
strategies to simplify and streamline enrollment and renewal, coordinate with Medicaid,
and prevent crowd-out
• Access and utilization trends, including the impacts of program design on access and
utilization and the extent to which contracted providers, especially managed care
organizations, are able to meet the needs of CHIP enrollees
• Cost sharing and premiums, including the rationale for employing them; their influence
on enrollment, retention, and service utilization; changes over time; and family reports
regarding levels and the impact of changes
• Role of CHIP and Medicaid as long-term strategies for reducing uninsurance among
children
• Anticipated impacts of health reform on CHIP, including family awareness, anticipated
policy and program changes, and states’ plans for integrating mechanisms into health
insurance exchanges for assessing eligibility and facilitating enrollment in Medicaid and
CHIP


Table V.1. Outline of Core Site Visit Protocol, CHIPRA 10- State Evaluation
I. History/Evolution of CHIP Policy Development
−
−
−

−

What is the organizational structure of your state’s CHIP/Medicaid agencies?a Has this structure changed in
recent years (that is, since 2005)? Did enactment of CHIPRA cause any changes to program administration?
What is the history surrounding CHIP policy development in the state? a
Have there been significant changes to the design of the CHIP program in your state since 2005 (that is,
Medicaid versus Separate versus Combination approach)? a When did they occur?

What were the various and most important debates that surrounded these changes in your state’s CHIP model
design?

II. Eligibility/Outreach/Enrollment Strategies
−
−

−
−
−
−
−
−
−
−
−

−
−

What have been your state’s primary efforts to simplify or streamline eligibility determination for CHIP? a Are the
same strategies employed for Medicaid? a
Did your state qualify for a “performance bonus” by virtue of having enrolled Medicaid-eligible children above
target levels and adopting at least 5 of 8 strategies aimed at simplifying enrollment? a If so, what were the
strategies utilized? If not, what was missing?
To what extent has CHIPRA led to eligibility expansions? a Which populations have been targeted?
How do you conduct “screen and enroll”? What have been the major challenges to this process?
Did your state change its enrollment and retention processes as a result of CHIPRA? If so, what impacts have you
seen?
Does your state utilize Express Lane Eligibility? a If so, what have been your early experiences?
What strategies have you pursued to raise public awareness of the importance and availability of no-/low-cost
insurance under CHIP? During what time periods did your state operate such outreach campaigns?
What are your processes for eligibility redetermination/renewal under CHIP and Medicaid?
If a child is being disenrolled from CHIP, is their eligibility for Medicaid assessed? How does the referral process
between CHIP and Medicaid work in your state in such a situation?
If a child is being disenrolled from Medicaid, is their eligibility for CHIP assessed, and if so, how does that
process work?
Overall, what have been the most/least effective strategies for reaching, enrolling, and retaining children?
Has effectiveness of outreach, enrollment, and retention strategies varied across subpopulations of children (for
example, by race/ethnicity, immigration status, urban/rural, adolescents, CSHCN)?
What impact do you anticipate raising income eligibility standards will/would have on enrollment in your state?

III. Benefit Package Design
−
−
−
−
−

−

Describe the current benefit package under CHIP. a What do you see as the primary strengths and weaknesses of
the package?
Has the CHIP benefit package changed at any point since 2005? How, if at all, did it change with CHIPRA? If so,
what changed?
Did controversy surround the inclusion (or exclusion) of any particular benefits?
What are the key differences between your state’s CHIP and Medicaid benefits packages? a
How do CHIP benefits in your state compare with private insurance?
Overall, how well does the current CHIP benefit package in your state appear to be meeting the needs of
enrolled children?

IV. Service Delivery and Payment Arrangements
−
−
−
−
−
−

−
−

Describe the current service delivery systems used for children enrolled in CHIP. a Do you rely primarily on
capitated managed care or fee-for-service systems? Or on some combination of the two?
What are the primary differences between CHIP and Medicaid delivery and payment systems?
How do the systems differ with regard to use of managed care arrangements? To what extent do you contract
with the same health plans/provider networks under the two programs?
Has the number or array of health plans changed/fluctuated much over the years? Please describe.

Is your state pursuing any quality improvements for CHIP service delivery? What is the scope and progress of this
effort? What do you anticipate will be the challenges and benefits of this model?

How about Quality Assurance? What monitoring efforts do you have in place? Are there any financial incentives,
such as pay for performance programs, to improve quality care provided? If so, please describe these
arrangements and how they are working to date.
What care coordination or medical home features—if any—has your state implemented? Please describe your
experiences to date.
What is the role of safety net providers in serving CHIP-enrolled children?

31

Chapter V: Case Studies

Mathematica Policy Research

Table V.1 (Continued)
−
−

−

−
−
−

−

How do CHIP service delivery networks in your state compare with Medicaid or private insurance?
What is the rate of participation among private physicians in your state in CHIP? In Medicaid? What factors
appear to be affecting physicians’ decisions about whether or not to participate in these programs? How is the
state monitoring physician participation?
Do you believe there is an adequate supply of providers to serve CHIP enrollees? Medicaid enrollees? For what
populations or services is availability particularly strong? Weak?

How difficult is it for enrollees to find and see a doctor or dentist? What are average wait times for
appointments?
Overall, how would you rate your CHIP program’s ability to extend primary care access to children? How does
this compare to Medicaid?
How well are children accessing dental care under CHIP? Under Medicaid? How do service delivery and payment
arrangements for dental differ under CHIP, compared to Medicaid?
Overall, how would you rate your CHIP program’s capacity to serve special populations, including adolescents
and children with special health care needs? Has your service delivery approach facilitated children’s access to
(and use of) high-quality care?

V. Cost Sharing
−
−

−
−

−
−

Has your state changed its policies regarding cost sharing (premiums, copayments, coinsurance) since 2005? a
Please describe.
What were the debates surrounding these changes? (For example, was cost sharing reduced because it was seen
as posing a barrier to access or use? Or was cost sharing increased because it was seen as appropriate for
higher-income families?)
What is your perception of affordability of premiums and other cost-sharing amounts for low-income families?
What are your methods for collecting premiums and tracking families’ aggregate out-of-pocket costs?
What have been the impacts of premiums on enrollment and retention, and of copayments on access and
utilization?
Do you have any plans to increase premiums in the future?

VI. Crowd-Out
−
−
−
−
−

−

Is fear of “crowd-out” (that is, that CHIP coverage will substitute for available employer-sponsored coverage) an
issue in your state?
What debates surround the issue of crowd-out today, and how do they differ from those that existed while your
state’s CHIP program was being formulated?
What strategies have you adopted in an effort to deter crowd-out? a
Have those policies changed since 2005?
Overall, what are your perceptions of the impact of crowd-out prevention strategies (such as waiting lists) on
enrollment, employer behavior, and special populations?
In your opinion, how has the existence of CHIP affected private insurance markets? For example, has the longterm availability of CHIP led employers to decrease their offers of dependent or family coverage to their
employees?

VII. Family Coverage
−
−
−

−

Has your state extended CHIP coverage to parents of enrolled children? a
If so, when did this occur and what was the impetus for pursuing this coverage? Which stakeholders supported
the strategy, and why?
What have been your experiences implementing family coverage?

How does family coverage change the nature of your program? Do you believe families are more attracted to a
program that can cover the whole family? Has family coverage improved rates of enrollment and/or access?

VIII. Employer Subsidy Efforts and "Buy-In" Programs
− Did your state consider subsidizing employer-based coverage under CHIP?
− If so, what have been your experiences implementing employer subsidy programs?
− What do you see as the major strengths, weaknesses, and impact of incorporating employer subsidy arrangements into CHIP?
− Does your state permit employers to purchase coverage for their employees through CHIP?
− If so, what have been your experiences implementing such a buy-in program?
− What do you see as the major strengths, weaknesses, and impacts of incorporating employer buy-in arrangements into CHIP?

IX. Coverage of Special Populations
− How do various features of your CHIP program affect children with special health care needs?
− Have you modified any policies—such as those related to outreach, enrollment, benefits, cost sharing, crowd-out, or service delivery—to make your program more responsive to the needs of children with special health care needs?

X. Financing
− Please discuss the size of your state's federal CHIP allocation, your views on its adequacy, and current aggregate spending in the most recent year.
− How did limited/uncertain funding over the past five years affect your state policies? At any point did you need to freeze enrollment? Institute a waiting list? Tighten eligibility requirements?
− What have been the primary sources of state funding for your CHIP program?
− What is the current state budget picture?
− Has the budget environment affected CHIP and Medicaid in your state? If so, what elements of the programs have been targeted (e.g., provider payment, cost sharing, enrollment caps) to try to alleviate budget pressures? Have these changes been implemented? If so, what effects have been observed on access to care and quality of care? Have there been other effects?
− How has the passage of CHIPRA changed the funding debate in your state?

XI. PPACA
− In what ways is your state preparing for implementation of national health care reform?
− How has the enactment of the ACA affected your state's CHIP program?
− Have federal "maintenance of effort" rules safeguarded your state's CHIP and Medicaid eligibility policies during the recent economic downturn? Have these rules had other effects on CHIP and/or Medicaid?
− Do you anticipate increasing payments for primary care providers consistent with mandated Medicaid increases?
− To what extent do you anticipate funding uncertainty after 2015 will impact your state's CHIP policies?
− Does your state anticipate including CHIP enrollees in health insurance exchanges?

XII. Local Context
− How has CHIP administration operated at the local level?
− Has the program's implementation varied significantly from county to county? If so, what factors have contributed to this variation? (For example, do counties autonomously administer eligibility and renewal systems, or do they operate under state authority and consistent state rules?)
− Has program implementation varied between urban and rural areas of the state? If so, please describe variations in outreach and enrollment approaches, service delivery approach, access to care, etc.

XIII. Lessons Learned
− To what degree have lessons learned through CHIP implementation and administration "spilled over" to affect Medicaid policy?
− Overall, based on your implementation experiences, how well is your program (and its design features) promoting enrollment, retention, access, use, family satisfaction, and strong coordination with Medicaid and the private insurance system? In retrospect, are there any policies you would change so as to improve your state's attainment of these goals?

a Such information will be gathered prior to site visits and reviewed/confirmed during key informant interviews.

Key Informants. The site visits will gather in-depth information and insights from a broad
range of stakeholders at both the state and local level. We will divide our time on-site between each
state’s capital and up to two local communities. Through discussions with federal and state officials,
and based on our experiences conducting CHIP case studies in the prior evaluation, we will identify
key informants to interview during our site visits, ensuring that all appropriate perspectives are
represented. These informants will likely include:
• At the state level: officials responsible for CHIP and Medicaid administration, public
health and maternal and child health officials, governors’ health policy staff, state
legislators and their staffs, family and child advocates, vendors under contract with the
state (such as those responsible for eligibility review and plan enrollment), and providers
representing such groups as the American Academy of Pediatrics and the state Primary
Care Association
• At the local level: county social services administrators, front-line eligibility workers, local
public health officials, managed care organizations, health insurance plans,
representatives of the business and employer communities, local clinic- and office-based
pediatric providers, and community-based organizations involved with outreach
During interviews at the state capital we will focus on state-level policy and program
implementation decisions and experiences. We will inquire about how programs and policies have
changed over time, persistent challenges states have encountered, and innovations that have been
successfully implemented. At the local level our questions will focus on the actual implementation of
the CHIP program. We will ask about the impacts of strategic policy efforts and budget constraints
on the program and the target population, and generally assess the impacts of the program on
consumers, providers, and communities.
Conducting the Site Visits. The site visit approach will systematically follow a series of steps
designed to ensure consistent collection of information and data from a broad and representative set
of key informants. The first site visit will serve as a pilot to test the proposed case study
methodology and interview protocols. Results from this site visit may prompt revisions to the
methodology; if so, we will implement them during the remaining nine site visits. We will conduct a
training session in advance to ensure that all team members are comfortable with the interview
guides and with the steps involved in planning and conducting the site visits. The steps are
summarized below. In total, the site visit team comprises six staff (three from the Urban Institute and three from Mathematica), with two team members conducting each site visit. One team member will conduct each interview while the other takes detailed notes.
• Contact State Officials. As a first step, we will call key CHIP and Medicaid officials to
discuss the goals, objectives, and process of the case study; the types of organizations
and individuals we would like to interview; potential communities to be visited; and
possible dates for those visits. This task will launch the case study effort and secure the
participation and support of key state officials.
• Obtain and Review State Program Documents and Other Background Materials.
As described earlier, we will collect and review background materials pertinent to each
state’s CHIP program. Information and insights we garner will not only help us
understand the program from the outset, but also feed directly into our development of
state-specific questions for the interview protocols.
• Identify Key Informants and Local Sites. Working with state officials, we will identify
the full complement of state agencies, policymakers, vendors, child health advocates, and
associations involved with CHIP program design, implementation, and monitoring, and
select appropriate individuals and organizations to interview. We will also discuss and
identify several communities that might represent the typical experiences of localities in
implementing CHIP. We anticipate identifying a total of approximately 30 state and local
respondents for interviews in each study state.
• Establish Site Visit Logistics. Working with state and local officials, we will schedule
site visits lasting four to five days in each state; two days will be spent in the state capital
and two to three days in communities.
• Conduct Interviews. We will conduct interviews with approximately 30 respondents
drawn from the aforementioned groups. Senior members of the Urban Institute and
Mathematica teams will have primary responsibility for conducting the interviews.
• Compile Notes. Upon completion of each site visit we will compile and clean notes using qualitative analytic software (ATLAS.ti) to ensure consistent coding, in preparation for rigorous analysis by the evaluation team.
C. Focus Groups
As part of the case studies, in each state we will conduct three focus groups with families
touched by state CHIP and Medicaid programs, for a total of 30 focus groups across the 10 case
study states. Focus groups will be conducted during the same week that we are conducting site visit
interviews with key informants. We expect the focus group findings to enrich the other evaluation
components in several ways, while providing intrinsically valuable information regarding state and
local context. First, they will provide valuable detail about the concerns and experiences of families
affected by CHIP and Medicaid policies and program practices. Second, insights from the focus
groups will also highlight particular focal areas for our analysis of site visit findings. Third, and
perhaps most important, focus groups will bring to our evaluation the voices of parents and other
family members vividly describing their experiences with CHIP, while also enhancing our
understanding of concepts and issues identified through other components of the evaluation.
Sample Selection. We will hold focus groups with parents and other family members of
children who represent the following categories:
• Enrolled in CHIP or Medicaid
• Disenrolled from CHIP/Medicaid
• Eligible for CHIP or Medicaid but uninsured
• Covered under employer-sponsored insurance (ESI)
We believe that the most critical groups from the array above are parents of enrolled children
(since they will be able to discuss direct experiences with CHIP), parents of disenrollees (since they
will shed light on the various factors that led to disenrollment), and parents of children who are
eligible for CHIP and Medicaid, but are not enrolled and remain uninsured (since they will help us
understand more about this critical target group and what factors contribute to their not enrolling
their children into available coverage). On occasion, and to the extent possible, within these
categories we will attempt to conduct focus groups with selected special populations of particular
interest, including parents of children with special health care needs, non-English-speaking families
(we plan to conduct Spanish-language groups, led by a focus group leader fluent in Spanish and
English, in states with large Latino populations, such as California, New York, Texas and Florida),
newly eligibles, and certain racial and ethnic groups. These focus groups will provide insights about
the unique experiences of these populations and the particular challenges or circumstances they face.
Moderator Guides. The focus group moderator guide is a critical tool for consistent and systematic information gathering. (A sample guide, focused on one type of respondent—parents of CHIP enrollees—appears in Table V.2. Draft guides for each type of focus group are included in an attachment to this report.) The moderator guides are designed to elicit both individual perspectives and an enriched understanding based on group dynamics.
Table V.2 Outline of Core Focus Group Moderator Guide for Parents of CHIP Enrollees, CHIPRA 10-State Evaluation

I. Outreach
− How did you first hear about CHIP?
− What were your initial thoughts or perceptions about the program and what it might provide for you and your children?

II. Eligibility Determination and Renewal
− How did you enroll in CHIP? (For example, fill out an application, visit a county eligibility office, apply online, and so on.)
− How did you find the CHIP eligibility/enrollment process? Was it easy and/or convenient? Was it difficult? Why? How?
− Did you receive any help in completing enrollment in CHIP?
− How did you find the CHIP renewal process? Was it easy? Was it difficult? Why? How?

III. Cost Sharing
− Do you pay monthly premiums or copayments for services under CHIP?
− What do you think about paying premiums and copayments? Is it fair, in your opinion?
− Would you be willing to pay higher premiums in order to retain your coverage?
− Are premiums large enough to ever discourage you from enrolling (or renewing coverage for) your children? Are copayments large enough to ever discourage you from obtaining care for your children?
− Has your child's coverage ever lapsed because you haven't been able to pay CHIP premiums?
− Are you aware that there is a 5 percent cost sharing limit for families? Has your family ever reached this limit? How do you keep track of how much you have spent?

IV. Benefits and Access to Care
− What do you think of the benefits covered by CHIP? Do they meet the needs of your children? Have your children ever needed a service that was not covered by CHIP?
− What do you think of your choice of primary care providers under CHIP? Is your child's doctor close and convenient? How easy has it been to find a doctor?
− What about dentists? Did you have a good array of choices of a dental care provider for your child? And is the dentist you chose close and convenient for you to use? How easy has it been to find a dentist?
− Have you ever had difficulty accessing services for your child under CHIP? What kinds of services? Why was it difficult? (For example, was it hard to get a timely appointment? Were there no providers of the type you needed? Was transportation a problem? Other reasons?)
− Do you believe your children are receiving high quality care under CHIP?

V. Overall Impacts on Daily Life
− What do you think about CHIP, overall? What benefits (if any) do you see from having your children covered for health care?
− How do you feel, knowing your child has health insurance?
− What do you feel are the strongest features of CHIP? What are the weakest?
− What changes to CHIP would you like to see in the future?

The guides include questions that explore such critical issues as the following.
• Outreach. How did parents first hear about CHIP and what were their initial
perceptions about what the program could do for their children? For parents of eligible
but uninsured children, have they ever heard of CHIP, and if so, have they ever tried to
enroll their children in it?
• Enrollment. What did parents think about the enrollment and renewal processes (for
CHIP and employer-sponsored insurance, as applicable)? Were they simple and
convenient or difficult and challenging? Why?
• Cost Sharing. What do parents think about the cost sharing imposed by CHIP
programs? Are the costs so high that they discourage enrollment (in the case of
premiums) or service use (in the case of copayments)? Have overall out-of-pocket
expenses been significant enough to discourage renewal of coverage or do parents think
that cost-sharing levels are fair and affordable? Are parents of CHIP enrollees aware
there is a 5 percent cost sharing limit? If so, how do parents keep track of their
spending? For parents of uninsured children, how affordable is health care for them? For
parents of children with employer-sponsored insurance, do they pay premiums and/or
copayments, and if so, do these payments seem affordable and fair?
• Access to Care. What do parents think about their choices of primary care providers
and dentists? Have they found providers who are close and convenient? Do they
perceive that their children are receiving high-quality medical and dental care? Are some
services still difficult to obtain and, if so, which ones and why?
• Overall Impacts on Daily Life. What do they think about CHIP overall? What benefits
do they perceive from having their children covered for health services? What are the
strongest features of CHIP and what are the weakest? What changes to CHIP would
they suggest for policymakers? For parents of uninsured children and CHIP disenrollees,
would they be interested in signing up for a program like CHIP?
Recruitment. Eight participants is the optimal number for a focus group, but to ensure
adequate participation, we will recruit approximately 12 individuals per group. Recruitment strategies
will vary based on the different types of groups proposed, but we plan to enlist the help of
community-based organizations and providers, child health advocates, policy groups, and/or health
plans to gain access to potential participants in many of our groups. For other populations, we will
rely on enrollment and disenrollment files of appropriate state or county social services agencies. We
describe our recruitment approaches in more detail below.
One proven approach to recruitment that we have often used to good effect enlists the help of
local providers and community-based agencies as “partners” in recruitment. Specifically, this
approach entails developing a series of recruitment materials (for example, flyers announcing the
group, recruitment “scripts” that describe the purpose and process of the focus group, and sign-up
sheets), and asking local agencies or providers if they would be willing to recruit focus group
participants from among their clients. If they agree to help, administrative staff will use our
recruitment materials to either directly recruit from clients they are serving during their routine
course of business or telephone potential participants from a roster of clients. We instruct
administrative staff to emphasize to clients that participation is entirely voluntary. To help with
recruitment, we will offer incentives (for example, $50 cash or a gift card of equivalent value) and
also inform parents that light refreshments and child care will be provided during the groups. An
added benefit of this approach to recruitment is that local providers such as Federally Qualified
Health Centers are often willing to offer their conference or meeting rooms free of charge for focus
groups. We believe that this approach to recruitment is both effective and efficient for most of our
groups, in particular enrollees, eligibles but not enrolled, non-English speakers and members of
racial/ethnic minorities, and parents of children with special health care needs.
An alternative recruitment approach will be needed for the other populations of interest:
disenrollees, newly eligibles, and low-income families with ESI. For disenrollees, we will request
enrollment and disenrollment files from state or county eligibility agencies and then sample a pool of
potential participants from these rolls. Research staff will telephone these families directly and
recruit them for the groups using a script similar to that employed for the other groups. For families
with ESI, we will need to develop a special recruitment strategy that could, for example, sample
families from the largest health plan operating in a local jurisdiction, thus giving us access to families
with a range of private policies. Alternatively, we could decide to sample from a prominent employer
or two in the locality that provides coverage to its low-income employees, thus permitting us access
to families that would represent a significant portion of the low-income ESI population. Once again,
in these scenarios, research staff will work with the health plans or employers to develop a sampling
frame for potential participants, then telephone the families directly to solicit participation in our
focus groups. We will work with our federal project officers to discuss alternative approaches and
finalize our recruitment plans.
Conducting the Focus Groups. All focus groups will be scheduled for 1.5 to 2 hours and will
be facilitated by a senior member of the evaluation team.6 Written informed consent will be
obtained from all participants prior to the start of the focus groups. Moderators will be supported by
research staff who will take extensive notes during the proceedings and digitally record the sessions.
During these discussions with parents and/or family members, we will ask about enrollment
processes, barriers to enrollment and retention, impressions of cost-sharing responsibilities, access
to and quality of care, and awareness and impact of outreach. As previously described, moderator
guides will be tailored to probe into specific issues relevant to each subgroup. For example, focus
groups with families of children with special health care needs will home in on access and quality of
care questions, particularly whether the scope of services available through CHIP is adequate to
meet their children’s needs. Similarly, groups held with non-English-speaking families will consider
the accessibility of program materials and the transparency of enrollment processes. Focus group
recordings will be transcribed verbatim and, along with notes taken during the groups, will be
analyzed and used to support and further illustrate findings from the case studies and quantitative
data analysis.
D. Analysis
This section describes our plans for analyzing the various sources of qualitative data we will
collect, including background materials, data obtained through site visits, and information from
focus groups. Analysis and synthesis of these rich sources of data will be aided by our use of
standardized coding schemes and ATLAS.ti software. The analysis will address a broad range of
evaluation questions regarding program design, implementation, and effects of the program on
intermediate outcomes.
The case studies will draw on data from background materials, site visits, and focus groups to
construct a comprehensive assessment of each state’s CHIP program. Additional cross-cutting
analyses will explore similarities and differences in the findings from these same data sources across
the states. The analysis of case study data in preparation for report writing will involve a series of
systematic steps to ensure our interpretation of findings is accurate and comprehensive.

6 Focus groups in the states led by Urban will be facilitated by the case study task leader or, for Spanish-speaking groups, another Urban researcher who is fluent in Spanish and English. The focus groups in states led by Mathematica will be facilitated by a senior survey researcher who is fluent in Spanish and English and who has been involved in several previous evaluations of child coverage programs.
We plan to use ATLAS.ti, a software program designed to facilitate the analysis of qualitative
data. The software will make it easier to organize the large amounts of information we expect to
gather from the different case study data sources so that common themes and contrasting points of
view can be identified and analyzed more readily. The primary structure for the coding scheme will
build on the interview protocol for the site visits and on the moderator guides for the focus groups,
along with the research questions the findings will inform.
For the site visits, the process will begin with a review of the clean version of the notes from
the first two site visits. A senior team member will construct a list of the topics and themes we
would want to capture with codes; other team members will review this list and add to it as
appropriate. In addition to coding topic areas and the content of responses, the coding scheme will
include codes for different types of informants. When the team comes to a consensus about the
coding scheme, two trained analysts will code the site visit notes. A senior team member will review
the first set of coded notes for quality and consistency across the two coders. Inconsistencies in the
coding will be discussed and resolved before the analysts code the notes for the remaining site visits.
The basic elements of this initial coding scheme likely will be applicable to interview notes collected
in subsequent site visits in other states. However, as each state is unique, we will revisit the basic
coding scheme as needed to add state-specific codes.
We will use a similar process to code the notes from the focus groups, using the moderator
guides as the basic structure for the coding scheme and adding codes as appropriate to capture
important themes and categories of perspectives. We will create theme tables for each state,
categorizing findings across the different focus groups conducted in that state; this will allow us to
compare and contrast results across the different types of groups within each state. To ensure the
coding is completed within the demanding timeline for the case studies, and because the content of
the site visit and focus group material will largely be distinct from one another, we will have separate
teams code the notes from the site visits and the focus groups. When the coding process is
complete, we will use ATLAS.ti to sort and query the data by topic or theme so we can identify
commonalities and points of dissent across different types of interview respondents.
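To illustrate the kind of structure this coding-and-query step implies, the brief sketch below mimics it in plain Python. It is illustrative only (the actual coding and querying will be performed in ATLAS.ti), and all codes, informant types, and note excerpts shown are hypothetical.

```python
# Illustrative sketch only (plain Python, not ATLAS.ti): coded note segments
# and a simple theme query. Codes, informant types, and excerpts are
# hypothetical examples, not actual study data.
from collections import defaultdict

coded_segments = [
    {"state": "State A", "source": "site_visit", "informant": "chip_official",
     "codes": ["outreach", "retention"],
     "text": "Radio campaigns raised awareness in rural counties..."},
    {"state": "State A", "source": "focus_group", "informant": "parent_enrollee",
     "codes": ["cost_sharing"],
     "text": "Premiums were hard to pay in some months..."},
    {"state": "State B", "source": "site_visit", "informant": "advocate",
     "codes": ["outreach"],
     "text": "Community organizations handled most application assistance..."},
]

def query_by_code(segments, code):
    """Group segments carrying a given code by state, mimicking a theme query."""
    grouped = defaultdict(list)
    for seg in segments:
        if code in seg["codes"]:
            grouped[seg["state"]].append(seg)
    return grouped

# A toy "theme table" row: which informant types in each state discussed outreach?
for state, segs in sorted(query_by_code(coded_segments, "outreach").items()):
    informants = sorted({s["informant"] for s in segs})
    print(f"{state}: {len(segs)} segment(s) from {', '.join(informants)}")
```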
While the coding process will facilitate the analysis for individual case study reports, it will also
help immensely in the cross-cutting analyses. The volume of information across 10 states will be
enormous; ATLAS.ti will greatly facilitate searching for key words and themes, and enable grouping
portions of text to allow for comparisons across states, groups of states, and regions. Using output
reports generated from ATLAS.ti, we will organize findings by theme and develop matrices that will
facilitate the synthesizing and summarizing of evidence gathered from the key informant interviews
and focus groups. This will help in developing cross-cutting reports that illuminate commonalities
and differences within the key research areas across states, program types, and types of informants.
E. Challenges and Limitations
Qualitative methods of data collection provide textured and nuanced findings that other
research methods are unable to capture; however, certain challenges are inherent to this method of inquiry. Most notably, sample size is a limitation that will affect the generalizability of the evidence
gathered through interviews and focus groups. By their nature, key informant interviews and focus
groups obtain information from a relatively small number of individuals and thus cannot be
presumed to be representative of the entire population of interest. For key informant interviews, we
will work closely with well-known contacts at the state and local level to identify persons and
organizations that hold the greatest promise for providing us with exposure to a broad and
representative group of stakeholders. But as we will be limited to conducting only 20 to 30
interviews, we may inadvertently miss important individuals and/or perspectives. In addition, with
regard to the focus groups, because we will rely on the assistance of community-based organizations
and providers in recruiting our participants, the parents with whom we speak might
disproportionately include those who are more active users of community service systems; thus,
again, they may not be representative of families as a whole. Still, our qualitative approach will allow
us to obtain a broad picture of CHIP design, implementation, and impacts on families across
each state and in selected localities. Through cross-cutting analysis of these efforts, the research
team will identify common themes with regard to program implementation and perceived effects,
and synthesize the data in as useful and generalizable a manner as possible.

VI. SURVEY OF STATE PROGRAM ADMINISTRATORS
To supplement the intensive assessment of program experiences in the 10 case study states, we
will conduct a census survey of CHIP program administrators in all 50 states and the District of
Columbia.7 This telephone survey will complement other aspects of the qualitative analysis by
providing a larger context within which to interpret findings from the case studies. Going beyond
facts and basic descriptive information, it will gather insights about program accomplishments and
setbacks to date, changes since the last CHIP national evaluation, and pressing new issues requiring
attention in the future. To some extent, the survey will also provide a sense of how findings from
the case study states might be generalized to the nation as a whole.
A. Instrument Content and Development
We will design the survey instrument as a tool for collecting information consistently while also
allowing for nuances within each state's program. Our approach will also make use of available state-produced documents, other research reports, and skilled research staff to conduct semi-structured
interviews, recognizing that state CHIP administrators are busy and, since CHIP’s inception, have
already responded to many other requests for program information. As specified in the RFP, we
plan to conduct one interview per program (although we anticipate that more than one state
representative may be present during the interview in some states) and will ensure that the
instrument can be administered within a reasonable amount of time.
The survey will follow the case studies; therefore, survey instrument development will benefit
from insights drawn from the site visits and focus groups. As with the survey of enrollees and
disenrollees, we will incorporate those components of the instrument for the previous survey of
program administrators that are still relevant. Topics from the original survey that will be updated
for the current survey include:
• Outreach. Confirmation of types and timing of outreach activities, outreach targeted at
special populations, joint outreach with Medicaid, effective and ineffective strategies
• Enrollment and Retention. Enrollment trends overall and in subpopulations, changes
in enrollment and redetermination processes as a result of CHIPRA, eligibility and
renewal processes under CHIPRA, role of media and community-based organizations in
enrollment, differences in enrollment between CHIP and Medicaid, facilitators and
barriers to enrollment and retention, practices that facilitate retention and prevent
disenrollment
• Benefit Package Design. Benefit package under CHIP, changes in the benefit package
due to CHIPRA or for any other reason, strengths and weaknesses of the benefit
package, key differences in the CHIP benefit package compared to Medicaid and private
insurance, overall impressions of whether benefits meet the needs of enrolled children
7 Depending on the timing of the survey, it may not be advisable to conduct interviews with program administrators in the 10 case study states. The current schedule has the survey of program administrators being conducted less than a year after the case studies. In the previous evaluation there was more time between the case studies and the program administrator survey. We will discuss this issue with ASPE in the near future.

• Service Delivery and Payment Arrangements. Confirm brief description of the state's delivery and payment systems, primary differences between CHIP and Medicaid delivery and payment systems, changes in the number or array of health plans over time, quality improvement initiatives for CHIP service delivery, and care coordination or medical home features implemented
• Cost Sharing. Cost sharing policy changes in recent years; debates surrounding these
changes (if any); effect (if applicable) of cost sharing on utilization, enrollment, and
coverage retention/disenrollment
• Crowd-Out. Confirm whether crowd-out is a concern, strategies to deter this
phenomenon, changes in these policies over time, current debates surrounding crowd-out versus when CHIP was first being formulated, determination of how states monitor
crowd-out, opinion on whether and how CHIP has affected private insurance markets
• Family Coverage. Confirm whether state extends CHIP coverage to parents of enrolled
children; impetus for family coverage in states where it exists; experiences implementing
family coverage (strategies, facilitators, barriers); monitoring effects of family coverage
on enrollment of children or use of services by children
• Future Plans and Issues. Recommendations for improving CHIP, and how the state is
preparing for health reform implementation.
Additional topics will include:
• Employer Subsidy Efforts. Decision to subsidize employer-based coverage under
CHIP; experiences with implementation of subsidies (if applicable); strengths,
weaknesses, and impact of incorporating employer subsidy arrangements into CHIP
• Financing. Size of state’s federal CHIP allocation, views on funding adequacy, current
aggregate spending to date; effects of limited/uncertain funding in recent years on state
policies (e.g., need to freeze enrollment, institution of a waiting list, tightening of eligibility requirements, cuts to provider payments); primary sources of state funding; current state budget picture; effect of CHIPRA and PPACA passage on funding debates in the state.
• Coordination. How states coordinate CHIP, Medicaid, and private insurance programs
(especially for mixed-coverage families); how coordination (or its lack) between Medicaid
and CHIP affects the enrollment of children in both programs
• Quality of Care. Whether states are pursuing quality improvements in CHIP service
delivery; if so, scope and progress of these efforts; what quality assurance monitoring, of
plans and/or providers, is conducted; whether states are using financial incentives, such
as pay for performance, to improve care quality and how that is working to date; whether
states have pursued care coordination or medical home features and experiences to date
with those approaches
• Changes Since the Last National Evaluation. Nature of and rationale for any
changes in program design not already discussed, effects of key provisions of CHIPRA
and health reform legislation, effects of the economic recession and how states have
modified their programs in response to budgetary constraints
As with the other survey components, the survey of program administrators will require OMB
clearance prior to implementation (this survey will be cleared through OMB Package #2). To
estimate respondent burden and identify clarifications that may be needed in the survey instrument,
we will pilot the instrument in three non-case study states (so that we do not need to be concerned
with the timing of this instrument versus the timing of the case studies). Ideally, we would test the
instrument in one state with a Medicaid expansion program, one with a state-only program, and one
with a combination program. The materials included in the OMB submission will incorporate any
revisions to the instruments and approach that arise from the pilot test experience.
B. Data Collection Approach
After receiving confirmation from the TOO that we are cleared for data collection, a team of
experienced research analysts will conduct the telephone interviews. We will hold a training session
before fielding the survey to ensure that everyone understands the protocol and is prepared to
address different state circumstances and issues that may arise during the interviews. Training will
also give team members involved in this activity a common understanding of their roles and
responsibilities.
Before implementing the survey, we plan to gather and review available background
information to tailor the instrument for each state and develop a brief fact sheet to reference during
the interview. Fact sheet development will leverage background research conducted as part of Task
801 (Analysis of CARTS and SEDS Data). As discussed in Chapter VIII, Section A, by July 2011,
we will conduct an extensive analysis of state annual reports and state plans to prepare summary
tables on all states as well as more detailed descriptions of the 10 study states. The case studies will
make use of this background material and augment information for these 10 states as appropriate.
Subsequently, we will draw on these background materials for the development of fact sheets for the
survey of program administrators.
Fact sheets will include information on the following program features:
• State Context. Percentage of low-income children uninsured (trends over time);
Medicaid eligibility policy; financing (federal allotment and matching percentage, state
funding level and source of state funds)
• Program Design. State plan submissions and amendments; general characterization of
state approach to CHIP (including Medicaid or CHIP waivers employed); eligibility
policies; outreach approaches (methods employed, timing of campaigns); benefit design
and delivery system (provider networks, payment policies, similarities and differences
compared to Medicaid, premiums and cost sharing)
• Enrollment and Disenrollment. Estimated number of eligible children, number
enrolled to date
We will also review state-specific findings from the previous evaluation as a reference for the
current survey of program administrators. This step will help ensure that the questions directed to
program administrators are appropriate and the interview time is used productively. We will take the
same approach as we did for the case study visits and use a common template for extracting
information as well as, to the extent possible, using existing summary tables of state characteristics
and program features.
Less than a week before initiating the telephone survey of program administrators, we will mail
a letter to each state CHIP administrator, possibly co-signed by the ASPE project officer and the
Mathematica project director, which provides a brief overview of the evaluation and explains the
purpose and general topic areas for the telephone interviews. The letter will also explain the time
commitment involved and how information from the survey will be used. Sending a letter prior to a
telephone survey is a standard way to increase response rates and is one of the strategies we will use
to achieve a high participation rate. Roughly one week after we mail the letter, we will contact the
state CHIP administrators to schedule interviews. In some states, we anticipate needing to make
several calls to one or more people before finalizing a schedule.
The lead interviewer will be joined by an analyst, who will take notes during the call. The
interviews will last approximately one hour each (a range of 45 to 90 minutes). After the call, the
interviewer and note-taker will clean the notes, which will serve as textual data for analysis. Notes
will be saved on a secure LAN file. (Due to time and budgetary restrictions, verbatim transcripts will
not be used. Detailed notes proved to be sufficient data sources in the previous CHIP evaluation.)
In cases where a telephone interview cannot be scheduled, we will make arrangements for the survey
to be completed in another mode, such as email or a mailed questionnaire.
C. Analysis
Analysis of data from the telephone survey of state program administrators will identify
important state-specific contextual information; CHIP implementation experiences, changes,
challenges, and successes; and program administrators’ views on CHIPRA and health reform.
We will organize and synthesize survey findings in a manner similar to the methods described
above for case study and focus group data. We expect that the coding scheme for the survey of the
50 states and the District of Columbia will be largely the same as that used for the site visits, because
the two components will be designed to be complementary. After cleaning each set of survey notes,
we will import the data into ATLAS.ti and apply our standardized coding scheme to the notes. We
also will use ATLAS.ti to group findings by theme.
When data collection from all of the state surveys is complete, we will use ATLAS.ti to compile
and examine cross-cutting themes. Given the large amount of data we will collect through the
surveys, ATLAS.ti will greatly facilitate searching for key words and themes and grouping portions
of text to allow for comparisons across states, groups of states, and regions. Using output reports
generated from ATLAS.ti, we will assemble state matrices or theme tables for all 50 states and the
District of Columbia. Some of these tables will capture more descriptive or factual information, such
as specific population groups targeted by special outreach efforts and cost-sharing elements of state
programs. Other tables will capture insights or perspectives on, for example, the reasoning behind
state decisions and important factors influencing the direction the program is likely to take in the
future. These tables will form the foundation for writing the draft and final reports on the survey
results.
We will provide the TOO with a draft report of the findings by week 100, as requested in the
RFP. The report will address CHIP program features, with an emphasis on the accomplishments of
state programs to date, challenges and barriers states have faced, and programmatic and contextual
changes since the first national evaluation. After incorporating comments from the TOO, the final
report will be delivered by week 120.
D. Challenges and Limitations
We anticipate several challenges in undertaking the survey of program administrators, but will
draw on our experience in performing the first evaluation to minimize them. For instance, while we
anticipate that most administrators will be responsive to our efforts to contact them, the process of
scheduling interviews with these very busy individuals will be time-consuming and logistically
complex. To mitigate this challenge, we will use a systematic protocol for contacting administrators
and tracking contacts and follow-up attempts. In addition, we will use a centralized calendar to
ensure that schedulers have the most up-to-date information on interviews currently scheduled
versus yet to be scheduled. Another challenge will be managing the sheer volume of data we collect
through the stakeholder interviews. As noted earlier, we anticipate that ATLAS.ti will be a great help
in sorting through the data to help researchers identify themes and compare results across states.
Comparing information across states will pose challenges as well, due to variations in state
approaches to CHIP implementation, service delivery models, financing, and other characteristics.
The use of summary tables, as in the previous evaluation, will help researchers identify states with similar characteristics that can feasibly be grouped together for the purpose of making comparisons.
We anticipate gathering a great deal of rich and nuanced information from state program
administrators, who are intimately familiar with their state’s approach to CHIP; nevertheless, state
program administrators’ perspectives necessarily are limited by their specific roles within the
program. For this reason, a limitation of the survey of program administrators will be the lack of
multiple viewpoints expressed during the interviews. However, other components of this evaluation,
particularly the case studies, will complement the survey of program administrators by including
perspectives from individuals with many different roles within CHIP. Analyzing findings across all
of the components of this evaluation will provide a more holistic picture of CHIP implementation.

VII. SURVEY OF ENROLLEES AND DISENROLLEES
A survey of parents or guardians of children currently and previously covered by CHIP and
Medicaid will provide a detailed description of the characteristics of these children, their movement
in and out of the programs, and their experiences accessing and using health care. Using specially
trained telephone interviewers, we will administer a survey to parents and guardians of CHIP
children (in 10 states) and Medicaid children (in 3 of the states) to understand their experiences
navigating the application, enrollment, and renewal processes; the child’s health status (measured
along several dimensions); access to and use of health care services; experiences with the care
process; provider communication and coordination of care; barriers and unmet needs; and parents' and guardians' perceptions of the value and quality of their child's health care. These data
will be linked to child and family demographic characteristics, as well as to key program features
measured in the qualitative components of the evaluation.
This chapter describes the survey, including the sample and instrument designs, the data
collection approach, and core elements of the analysis. In each of these sections, we discuss the
challenges we may confront and how we plan to minimize any problems or limitations associated
with these challenges.
A. Sample Design
The target populations for the CHIP survey will be new enrollees, established enrollees, and
recent disenrollees in 10 selected states, defined as follows:
• New enrollees include children who have been enrolled in CHIP for at least 60 days
(two months), but less than 90 days (three months), at the time of sampling. In addition,
the children must have been disenrolled for more than one month prior to their current
new enrollment.
• Established enrollees include children who have been enrolled for five months or
more at the time of sampling.
• Recent disenrollees include children who have been disenrolled from CHIP for at least
60 days (two months), but less than 90 days (three months), at the time of sampling. In
addition, the children must have been previously enrolled for at least two months prior
to their current disenrollment.
In addition, to be part of the target population, an individual must be age 18 or younger, in the
case of the two enrollee domains, or age 19 or younger in the recent disenrollee domain. (Including
19-year-olds in the sample allows us to capture children who lost eligibility because of age
restrictions.) We also require that the individual live in the selected state at the time of sampling.
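To make these rules concrete, the sketch below shows one way the three domains might be operationalized against a state enrollment extract. The field names and the simplified whole-month arithmetic are illustrative assumptions on our part; the binding criteria will be those specified in the state-specific sampling memoranda described below.

```python
# A minimal sketch (not the final criteria) of assigning a child to one of the
# three sample domains. Field names and whole-month arithmetic are
# illustrative assumptions; exact rules will follow review of each state's data.
from datetime import date

def months_between(d1, d2):
    # Crude whole-month difference; the actual rules use 60/90-day windows.
    return (d2.year - d1.year) * 12 + (d2.month - d1.month)

def classify_domain(spell_start, spell_end, prior_gap_months,
                    prior_spell_months, age, sampling_date):
    """Return a domain label, or None if the child is out of scope.

    spell_end is None while the child is still enrolled; prior_gap_months is
    the gap before the current spell (a first-time enrollee can be represented
    with a large gap).
    """
    if spell_end is None:
        if age > 18:  # enrollee domains: age 18 or younger
            return None
        enrolled = months_between(spell_start, sampling_date)
        if 2 <= enrolled < 3 and prior_gap_months > 1:
            return "new_enrollee"          # on CHIP 60-90 days, prior gap > 1 month
        if enrolled >= 5:
            return "established_enrollee"  # enrolled five months or more
        return None
    off = months_between(spell_end, sampling_date)
    if 2 <= off < 3 and prior_spell_months >= 2 and age <= 19:
        return "recent_disenrollee"        # off CHIP 60-90 days, prior spell >= 2 months
    return None

print(classify_domain(date(2011, 2, 1), None, 6, 0, 10, date(2011, 4, 15)))
# -> new_enrollee under these illustrative rules
```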
The definition for each of the three sample domains is the same as the one used in the prior
evaluation and, in each case, reflects a balance of sometimes competing considerations. For example,
the new enrollee definition balances the need for a period sufficiently long to reflect a true period of
coverage and a period sufficiently short for the respondent to successfully recall their experience
prior to enrolling. In addition, by including new enrollees who return to CHIP after some kind of
gap in coverage, we appropriately reflect the cross-section of children who enter CHIP, some of
whom will have had prior public coverage experience and some of whom will not. Likewise,
the disenrollee definition balances having a period of disenrollment sufficiently long for a
respondent to describe their coverage status after leaving CHIP and sufficiently short to successfully
locate and interview a sizeable fraction of a domain that may be highly mobile. In addition, by
including children who had coverage for as little as two months, the disenrollee definition reflects
the fact that some children leave CHIP after quite short periods of coverage, though the vast
majority remain enrolled for some time. As with the prior evaluation, exceptions may need to be
applied to these definitions. For example, in the prior evaluation, the adoption of presumptive
eligibility in New York led us to extend the definition of new enrollment from two to three months
for cases sampled with a presumptive eligibility indicator. To the extent such exceptions arise in this
study, they will be described in a subsequent memorandum that details the specific sampling criteria
in each state, submitted after we have obtained and assessed each state’s eligibility data.
One complication with the sample design that arose in the prior evaluation is that a sizeable fraction of interviews was completed with respondents who reported start and end dates of coverage that were not at all close to those shown in the administrative records. For two of the
sample domains, new enrollees and disenrollees, such confusion greatly limits the value of the
information provided. For example, when respondents of new enrollees fail to identify the child’s
new enrollment, they are unable to provide meaningful information about their pre-CHIP coverage
or pre-CHIP experiences accessing and utilizing care. Likewise, when respondents of disenrollees
fail to identify the child’s disenrollment, they are unable to provide meaningful information on the
factors that contributed to their disenrollment or their coverage or other experiences since they left
the program. For these reasons, we plan to end the survey of new enrollees and recent disenrollees if the interviewer determines that the period of reported coverage differs substantially from what the administrative records show. (A subsequent memorandum will specify the decision rule that we will adopt for this termination, pending further survey piloting after OMB approval.) For established enrollees, however, this disconnect is less problematic, as the data reported on their recent access, use, and other health care experiences still reflect their true period of coverage no matter what their self-reported coverage status. Indeed, dropping cases that show a disconnect between self-reported and actual (administrative) coverage periods would risk biasing the findings for established enrollees, as the sample ultimately interviewed in this domain would not accurately reflect the outcomes of the population actually covered by the program.
The disconnect between the self-reported CHIP coverage period and the period shown on the administrative files arose most often for two groups in the prior evaluation. The first is new CHIP enrollees who had either experienced a short gap in CHIP coverage or transferred from Medicaid, both of whom often reported a period of CHIP coverage far longer than indicated by the administrative records (presumably because they never recognized the transition to CHIP). The second is CHIP disenrollees who subsequently either returned to CHIP after a short gap in coverage or transferred to Medicaid, both of whom often reported never having disenrolled (again, presumably because they never recognized the transition from CHIP). To minimize the need to drop either of these cases at the time of the interview, we anticipate greatly reducing the proportion of these cases that are actually sampled for the study in each state (and perhaps omitting them altogether). However, before we can commit to this sampling approach and apply a specific decision rule for the cases to which it applies, we need to acquire each state's data and be sure we can identify these cases successfully. (This is particularly true of the cases reflecting transitions to and from Medicaid, which will require linking data from two entirely different eligibility systems.) Thus, as with any exceptions we adopt for our sample domain definitions, the specific sampling rule we adopt for these cases will be shared in a subsequent memo, completed after we have had an opportunity to assess each state's data.

We define enrollment for each sample domain based on when we expect the parent would
consider the child enrolled—a date that might differ from that on which the state actually began
paying for services. For example, some states retroactively enroll children as of the first day of the
month in which the parent applied for CHIP, but they might not determine the child to be eligible
until one or more months after the application was received. As a result, the date services began to
be covered by the state might be a month or more earlier than the date the parent is notified of the
child’s enrollment. In this instance, we would define enrollment from the date associated with the
notification of coverage to the parent, as opposed to the application date or the retroactive coverage date.
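As a concrete illustration of this convention (with hypothetical dates and field names), the date used for sampling purposes would be the notification date rather than the earlier retroactive coverage date:

```python
# Sketch of the enrollment-date convention described above: for sampling
# purposes, enrollment begins when the parent is notified of coverage, not at
# the application date or the (possibly retroactive) date the state began
# paying for services. Dates and field names are hypothetical.
from datetime import date

def survey_enrollment_date(application_date, retroactive_start, notification_date):
    """Return the enrollment date as the parent would likely perceive it."""
    return notification_date

applied = date(2010, 11, 8)     # parent applied in November
retro = date(2010, 11, 1)       # state pays retroactively from November 1
notified = date(2011, 1, 12)    # eligibility determined and parent notified
print(survey_enrollment_date(applied, retro, notified))  # 2011-01-12
print((notified - retro).days)  # coverage began 72 days before notification
```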
Sampling Frame. The sampling frame for a given domain is the population of enrollees and
disenrollees in each state meeting the definition of the target population summarized above. This
frame will be constructed for each state using data from its administrative files. Constructing the
sample frame as quickly as possible will be essential for this survey, particularly with respect to the
populations of new enrollees and recent disenrollees, for whom risks of recall bias and survey nonresponse increase with time.8 One key step in assuring timely frame construction is receiving
accurate administrative data from the states on a regular basis. In our discussions with the program
and technical staff in each state, we will request delivery of data within two weeks of the specified
data extract cutoff date.
Using information about the children and families that is contained on the sample frame, we will have the option of oversampling particular groups within each sample domain.9 Examples of such information include the income level or eligibility classification of the household, the age of the child, and the child's prior coverage—ideally in both CHIP and Medicaid. While oversampling can result in reduced precision for the full sample (due to design effects from weighting), it can have a couple of benefits that may outweigh these costs. First, it can result in a larger sample for relatively small but important subgroups, such as children in upper-income households who may be subject to relatively high co-payments. Second, its converse (undersampling) can result in fewer cases that may offer marginal analytic value to the study, such as new CHIP enrollees with recent public coverage experience who (as noted above) will often not even be reported as having newly enrolled.
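To make the precision trade-off concrete, the short sketch below applies the standard Kish approximation to the design effect from unequal weighting; the weights shown are hypothetical and purely illustrative.

```python
# Kish approximation to the design effect from unequal weighting:
# DEFF ~= n * sum(w_i^2) / (sum(w_i))^2, i.e., 1 + CV^2 of the weights.
# The weights below are hypothetical, for illustration only.

def kish_deff(weights):
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

print(kish_deff([1.0] * 100))        # equal weights: DEFF = 1.0
weights = [1.0] * 80 + [0.5] * 20    # one small subgroup oversampled 2:1
print(round(kish_deff(weights), 3))  # ~1.049, about a 5 percent loss in effective n
```

Under this approximation, a modest 2:1 oversample of a small subgroup costs roughly 5 percent of effective sample size, a penalty that would be weighed against the precision gained for that subgroup.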
Given these benefits, we do anticipate using the administrative data to oversample children on
the sample frame in at least some sample domains. However, a final decision on whether and how to
conduct this sampling will need to wait until we have the actual eligibility files from the states and we
can assess the quality and content of the available data elements on which such oversampling would
take place. (For example, in order to consider any sampling based on transitions to or from
Medicaid, we must know that we can reliably link the Medicaid and CHIP data in each state—information that we will not have until the MOUs with states have been signed and the enrollment data made available). Once we have completed this assessment, we will submit to DHHS a memorandum detailing the exact sampling strategy for each of the three sample domains, including the numbers of cases to be sampled within each stratum defined for purposes of oversampling.

8 Delays in construction of the frame could necessitate extending the definition of the new enrollee and recent disenrollee sample domains to include longer periods on or off the program.

9 A related option is to use data obtained from a screener at the start of the survey interview to overrepresent groups of particular analytic interest that cannot be identified from the frame, such as those with special health care needs. This approach can be valuable for obtaining precise measures for relatively small subgroups, particularly those defined by the child's health; however, it is also very costly, as the initial contact amounts to a screening interview that for some respondents (not meeting the criteria for oversampling) results in a non-completed survey. Thus, we do not expect to adopt this approach.
Sampling Approach. Our sampling approach will use an innovative version of the classic subsampling for non-response follow-up design. The advantage of this approach is that it minimizes
data collection costs while maintaining the desired response rate. It has two independent
components:
• A multi-stage, clustered sample will be interviewed by telephone, with face-to-face
follow-up of unlocatable and nonresponding households. 10 Use of face-to-face (field)
follow-up is more costly than telephone alone and requires the less efficient cluster-sampling approach, but it results in high response rates and improved population
coverage. Without field follow-up of unlocatable and nonresponding households, we
would miss some parents of CHIP children who belong to minority or other sub-groups,
especially Hispanics, Native Americans, and African Americans (Cybulski et al. 1999).
• A stratified, unclustered random sample representing the same population as the
clustered sample will be interviewed by telephone only. Besides reducing costs, the
telephone-only sample design benefits from increased statistical efficiency associated
with unclustered designs.
In both sampling components, we will draw and field up to two rounds of samples for each
sample domain in each state, allowing two months between each sample draw. 11 This staged fielding
will be particularly important in reducing the time between sample frame construction and the
collection of survey data, since the fielding period will be as close as possible to the time when the
administrative data are provided by the states and cleaned by Mathematica. In addition, for states
with the smallest populations of CHIP enrollees, these multiple draws may be needed to ensure that
sample sizes are sufficient for certain domains (most notably, recent disenrollees). We will draw
these samples in such a way as to avoid sampling more than one child from the same household or
sampling the same household for more than one draw.
Each sample draw will be derived from the universe that exists at the time of sampling but will
take into account whether a household was in the sampling frame or in the sample of the prior
draw(s). To speed up the sampling process and ensure timely fielding, we will request an advance
test data file from each state to check the database and our sampling algorithms. In addition, our
design assumes that the state will send the database of enrollees and disenrollees on two different
occasions, each two months apart. On each occasion, we will classify the enrollees based upon
month of initial enrollment, with disenrollees in a separate category, so as to create the enrollee
domains for use in sampling. Then, we will determine how quickly each of the ten states can deliver
enrollee data for a particular month and set the exact domain definitions.
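To make the domain construction concrete, the following minimal sketch classifies an administrative record into one of the three sample domains. The field names, extract date, and three-month windows are hypothetical placeholders; the actual windows will be set with each state as described above.

```python
from datetime import date

# Hypothetical extract date and windows; actual values will be set
# with each state based on how quickly it can deliver enrollment data.
EXTRACT_DATE = date(2012, 1, 31)
NEW_ENROLLEE_MONTHS = 3
RECENT_DISENROLLEE_MONTHS = 3

def months_between(earlier, later):
    """Whole calendar months elapsed between two dates."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def classify(record):
    """Assign a record to a sample domain, or None if out of scope."""
    if record.get("disenroll_date"):
        if months_between(record["disenroll_date"], EXTRACT_DATE) <= RECENT_DISENROLLEE_MONTHS:
            return "recent disenrollee"
        return None  # disenrolled too long ago to be on the frame
    if months_between(record["enroll_date"], EXTRACT_DATE) <= NEW_ENROLLEE_MONTHS:
        return "new enrollee"
    return "established enrollee"

rec = {"enroll_date": date(2011, 12, 15), "disenroll_date": None}
print(classify(rec))  # 'new enrollee' (enrolled within the window)
```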
10 Unlocatable households may be more accurately described as “households that cannot be located from the central office.”
This group also includes households without any type of telephone service. Households that cannot be located from the central office
generally have current unlisted numbers that are not recorded in the CHIP enrollment files, or have numbers that have been changed
to new numbers that cannot be determined. This is more likely to occur when the new number is a cell phone.
11 For the prior CHIP survey, there were a total of 20 sample draws—two states with one draw; six states with two draws, and
two states with three draws.


Some sample members may change (or at least report a change in) classification between the
time of sampling and interview; for example, a recent disenrollee sample member may return to
CHIP by the time of interview, effectively becoming a new enrollee. (This type of transition is most
likely to occur when locating or face-to-face follow-up activities are required, extending the time
between sampling and interview.) As with the prior CHIP survey, we will address these transitions
by allowing sample members to respond on the basis of whatever domain they consider themselves
to be in. Because our approach to analyzing the survey data may be affected by such transitions, we
will identify them as part of the data and maintain a frequency count for them over the course of
fielding the survey.
Multistage Clustered Sample Selection. For the clustered sampling component with face-to-face follow-up, the first step in sample selection will be defining primary sampling units (PSUs) for
each state. These PSUs will be defined based upon ZIP codes or combinations of ZIP codes that
provide a specified minimum number of enrollees and disenrollees. The same set of PSUs will be
used for all sample draws. A composite size measure will be developed for each PSU in the frame
that reflects the desired total state sample of new enrollees, established enrollees, and recent
disenrollees (Folsom et al. 1987). We will select a total of 30 PSUs from each state, with probability
proportional to this composite size measure and with minimal replacement using Chromy’s (1979)
sequential sampling procedure. In selecting the 30 sample PSUs from the frame of N1(h) PSUs in
state h, Chromy's procedure partitions each state's N1(h) total PSUs into 30 zones of equal size,
based upon the size measure S(h, i, +). Exactly one PSU is selected from each zone. The zones are
defined so that all pairs of PSUs have a chance of appearing together in the sample—a requirement
for unbiased estimation of sampling variances. Using a controlled ordering of the PSUs, this zoned
sequential selection makes possible an implicit stratification of PSUs that ensures they are as
representative as possible of selected variables of interest. To ensure selection of both urban and
rural PSUs and the distribution of the sample across each state, candidate variables for ordering the
PSUs in the frame before sampling will include urbanicity and the geographic location of the PSU.
We will also use a composite size measure to ensure that the desired sample sizes are achieved
for the domains of interest (new enrollees, established enrollees, and disenrollees). With this
procedure, we will be assured of equal selection probabilities within states for children in each
domain. The composite size measure will be defined as

(1)  S(h, i, +) = Σ_j S(h, i, j) = Σ_{d=1}^{D} Σ_j f_d(h) C_d(h, i, j)

where C_d(h, i, j) is the number of children in domain d of household j of PSU i from state h and f_d(h) is the desired overall sampling rate for domain d in state h.
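As a minimal illustration of equation (1), the sketch below computes the composite size measure for a single PSU; the domains, household counts, and sampling rates shown are hypothetical.

```python
def composite_size(households, rates):
    """Composite size measure S(h, i, +) for one PSU.
    households: list of dicts mapping domain -> child count C_d(h, i, j);
    rates: dict mapping domain -> desired sampling rate f_d(h)."""
    return sum(
        rates[d] * hh.get(d, 0)
        for hh in households
        for d in rates
    )

# Illustrative PSU with two households and hypothetical sampling rates.
psu = [{"new": 1, "established": 2}, {"disenrollee": 1}]
f = {"new": 0.05, "established": 0.01, "disenrollee": 0.08}
print(round(composite_size(psu, f), 2))  # 0.05*1 + 0.01*2 + 0.08*1 = 0.15
```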

Prior to selection of households, as with the selection of PSUs, we will use a controlled ordering
procedure of households within each PSU. Variables for ordering will be the sampling domains and,
when available, the race of the children in the households. For each selection of the ith PSU from
the hth state, we will select n2(h) households, with probability proportional to size. When multiple
enrollee domains are present within a household, we will randomly determine the enrollee type to
interview using differential probabilities based upon the desired state h sampling rates fd(h) for
domain d. If multiple children are present in the sampled household for the selected enrollee
domain, we will randomly designate one child to be interviewed. Using the composite size measure
for each household will enable us to oversample households with multiple eligible children while

ensuring that the selection probabilities are equal within enrollment domains, regardless of
household size.
Stratified, Unclustered Sample Selection. For the unclustered, telephone-only sampling
component, we will first sample households. To ensure representation throughout each state, we will
explicitly stratify households by a geographic measure specific to that state. As with the clustered
design, if the household includes children in two or more domains, we will randomly determine the
domain for which a child will be selected and, finally, select the child within it. For households with
multiple children eligible for interview, we will randomly select one for interview. Prior to sample
selection, we will sort the households by the various combinations of enrollment domain(s) to which
their eligible children belong (recent enrollee only, recent enrollee and established enrollee, recent
enrollee and recent disenrollee, established enrollee only, and so forth). Then, within each
combination, we will further sort the households to create an implicit stratification of households.
Candidate variables to use will include race and ethnicity, metropolitan status, and geographic area.
Households will be selected with probability proportional to their composite size measures. For
sampled households with multiple survey-eligible children, we will randomly sample one child for
interview using the desired sub-sampling rates for the enrollee domains. This composite size
measure approach will ensure that we achieve equal selection probabilities within each state for each
enrollee domain, regardless of the household size. Similar to the approach used for the clustered
sample, the selection process for the unclustered sample will prevent selection of the same
household in multiple draws.
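To illustrate the mechanics, the sketch below implements a simplified sequential probability-proportional-to-size selection over a sorted frame, the implicit stratification idea described above. It is a stand-in for Chromy's minimal-replacement procedure and assumes no unit's size measure exceeds the sampling interval; all units and size measures are hypothetical.

```python
import random

def sequential_pps(units, sizes, n):
    """Systematic PPS selection of n units from a frame sorted for
    implicit stratification. Assumes no size exceeds total/n, so each
    unit can be selected at most once (unlike Chromy's minimal
    replacement, which handles larger units)."""
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    picks, cum, i = [], 0.0, 0
    for k in range(n):
        target = start + k * interval
        while cum + sizes[i] < target:
            cum += sizes[i]
            i += 1
        picks.append(units[i])
    return picks

# Hypothetical frame of six households with composite size measures,
# already sorted by enrollment-domain combination and geography.
print(sequential_pps(["a", "b", "c", "d", "e", "f"],
                     [0.15, 0.05, 0.10, 0.20, 0.25, 0.25], 3))
```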
Weighting Procedures. For this survey, we will calculate sampling weights within each sample
(clustered and unclustered) based upon the inverse of the probability of selection across all draws.
Each eligible household has a probability of being selected for the clustered and unclustered
sampling components, as each sample represents the full population. We will first calculate design-specific sampling weights for each component (clustered and unclustered), for each sample draw and
state, using the product of the sampling weight of the household and the conditional sampling
weight of the child, given that his or her household was selected. We will then combine the design-specific sampling weights across draws to create a single base sampling weight for each sampled
child for each design for each state.
We will pursue households that were unlocatable by the central office only when they have been
selected for the clustered sampling component, essentially having sub-sampled them for nonresponse follow-up. For the unclustered sampling component, we will consider households
unlocatable by the central office as nonsampled nonrespondents. The following table shows how the
different sample components are dealt with in the composite weights, to be used when combining
both sample components.
Unclustered sample (sampling weights sum to W):
• Located by central office (sampling weights sum to A): composite weight C1 = sampling weight × (1 − λ). These located cases represent the locatable population.
• Unlocated by central office (sampling weights sum to W − A): composite weight = 0. These cases are not pursued in the field.

Clustered sample (sampling weights sum to W):
• Located by central office (sampling weights sum to B): composite weight C2 = sampling weight × λ. These located cases represent the locatable population.
• Unlocated by central office (sampling weights sum to W − B): composite weight = sampling weight × (W − (C1 + C2)) / (W − B). These cases are pursued in the field and represent the unlocatable population.


To compute a survey estimate, Est(Y), using information from both samples, one cannot simply
combine the two samples without adjusting the weights, since the clustered and unclustered located
samples represent the same target population. Separate estimates can be computed from each
sample and combined using the equation
(1) Est(Y) = λ Y(clustered) + (1 - λ) Y(unclustered)
where Y(clustered) is the survey estimate from the clustered sample, Y(unclustered) is the survey
estimate from the unclustered sample, and λ is an arbitrary constant between 0 and 1. Any value of λ
will result in an unbiased estimate, but not necessarily an estimate with the
minimum sampling variance. We used an approach that calculates a single lambda using sample sizes
and design effects due to unequal weighting for the two samples. In particular, λ acts as a weighting
factor, with more weight given to the larger sample, with the sample sizes adjusted by the design
effect due to unequal weighting. The formula for λ is given by:
(2) λ = [n(clustered) / deff(clustered)] / [n(clustered) / deff(clustered) + n(unclustered) / deff(unclustered)]

where n(clustered) and n(unclustered) are the sample sizes of the clustered and unclustered
central office-located samples respectively, and deff(clustered) and deff(unclustered) are the design
effects due to unequal weighting for the clustered and unclustered central office-located samples,
respectively.
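A minimal sketch of this composite estimator follows. The sample sizes, design effects, and estimates shown are hypothetical placeholders rather than values expected for this survey.

```python
def composite_lambda(n_clustered, deff_clustered, n_unclustered, deff_unclustered):
    """Weighting factor lambda from equation (2): each sample's effective
    size (n / deff) determines its share of the combined estimate."""
    eff_c = n_clustered / deff_clustered
    eff_u = n_unclustered / deff_unclustered
    return eff_c / (eff_c + eff_u)

def combined_estimate(y_clustered, y_unclustered, lam):
    """Est(Y) = lambda * Y(clustered) + (1 - lambda) * Y(unclustered)."""
    return lam * y_clustered + (1 - lam) * y_unclustered

# Hypothetical inputs: similar sample sizes, clustered design less efficient.
lam = composite_lambda(6500, 2.0, 7500, 1.4)   # illustrative deffs only
print(lam, combined_estimate(0.52, 0.48, lam))  # lam ~ 0.38; estimate ~ 0.495
```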
The clustered unlocated households are ratio adjusted so that they add up to the estimate of
unlocatable households in the population, represented by themselves and the comparable
households in the unclustered sample that were not pursued. This adjustment is comparable to that
done for a standard subsample among nonrespondents.
The next step will be to implement within-state non-response adjustments among located
households (or clustered cases that were not located despite field efforts) to account for nonresponse to eligibility screening and to the interview. First, we will conduct a non-response analysis
to assess the response patterns for the samples, using data from the sampling frame, such as age and
race of the sampled child, along with county-level information from the Area Resource File (ARF),
such as the percentage of children living in households with family incomes under the poverty level,
the percentage of households headed by females, and urbanicity. Based on the results, we will
develop logistic regression models to compute response propensity scores to compensate for nonresponse. We will develop separate models for each sample component (clustered and unclustered),
for each domain (recent enrollees, established enrollees, and recent disenrollees, as defined on the
frame), and for each state. Finally, we will use the estimated population counts in each state and each
domain to post-stratify within each state based on enrollment status at the time of sampling of the
child. The final weight will consist of the product of the combined-draw base weight, the inverse of
the response propensity score, and the post-stratification adjustment.
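The sketch below illustrates how these three factors combine into a final weight for one respondent. All numbers are hypothetical, and in practice the response propensity would come from the logistic regression models described above.

```python
def poststrat_factor(pop_count, weighted_respondent_total):
    """Ratio adjustment so weights sum to the known population count
    for a state-by-domain post-stratum."""
    return pop_count / weighted_respondent_total

def final_weight(base_weight, response_propensity, factor):
    """Final analysis weight: combined-draw base weight, inverse of the
    estimated response propensity, and post-stratification adjustment."""
    return base_weight * (1.0 / response_propensity) * factor

# Hypothetical case: base weight 120, modeled response propensity 0.8,
# post-stratum population 10,000 vs. weighted respondent total 9,500.
w = final_weight(120.0, 0.8, poststrat_factor(10_000, 9_500))
print(round(w, 1))  # 120 * 1.25 * 1.0526... -> about 157.9
```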
Response Rates. We will calculate weighted and unweighted response rates for this survey
following the procedures outlined by the American Association for Public Opinion Research


(AAPOR, 2008). 12 When combining the unclustered and clustered samples, only the weighted
response rates will be calculated, as the sampling weights and composite adjustments properly
account for overlap between the two samples and for nonresponse subsampling as described in the
weighting section above.
Based on our experience in previous studies with a similar sample design, we expect that about
90 percent of the sample in both the unclustered and clustered samples will be locatable by the
central office, and that about 85 percent of these located cases will complete the interview. For those
in the clustered sample and among the 10 percent not able to be located by the central office, 13
based on past experience we expect 65 percent to be ultimately located and to respond to the interview
after field follow-up. Combining the various sample components, the cumulative response rate for
the entire sample would be about 75 percent (77 percent for those located by the central office and
65 percent for those initially unlocated but pursued vigorously in the field).
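As a minimal illustration of the AAPOR-style formula cited in footnote 12, the sketch below computes an unweighted response rate for a hypothetical sample draw; all counts are invented for illustration.

```python
def aapor_response_rate(completes, sampled, est_ineligible):
    """AAPOR-style response rate (footnote 12):
    completes / (sampled cases - estimated ineligible cases)."""
    return completes / (sampled - est_ineligible)

# Hypothetical single-state draw: 1,200 sampled, 60 estimated ineligible,
# 860 completed interviews.
print(round(aapor_response_rate(860, 1200, 60), 3))  # 0.754
```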
Sample Allocation and Sample Sizes. To allocate the sample across the clustered and
unclustered sampling components, we adapted the optimum allocation procedure described by
Hansen and Hurwitz (1946) for mail surveys with telephone follow-up, where the optimum
allocation yields the specified precision for minimum costs. Our application of this procedure
suggested a sub-sampling rate of 50 percent for unlocatable households. To achieve this result, we
will allocate 50 percent of the total completed interviews to the clustered sampling component and
50 percent to the unclustered, telephone-only component.
Although a proportional allocation of interviews across states would produce the smallest
design effects, it would result in sample sizes that are too small in some states to generate sufficiently
precise estimates. For this reason, we recommend using a common sample size in each state (of 500
survey completes in each domain, or 1,500 completes total; see below). The target sample size for
the study will therefore be 15,000 completed interviews. Once the states have been chosen for the
survey, we will work with ASPE to determine the exact allocations of completed interviews across
the states and sample domains that best address research priorities.
To meet the target sample size, we will initially select an estimated 21,538 CHIP enrollees and
recent disenrollees from the sample frame, calibrating their allocation to the clustered and
unclustered sampling components so that the number of completed cases turns out roughly equal
between the two components. 14 Because unlocatable households in the unclustered sample are
considered non-subsampled nonrespondents, we expect that only 80 percent of these sampled
children will be eligible. Allocation of 11,538 of the total selected from the sample frame to the
unclustered sample and 10,000 to the clustered sample, then, will yield 19,230 eligibles—9,230 in the
unclustered sample and 10,000 in the clustered sample. For each sampling component, we anticipate
completing interviews with 65 percent of the total sample (including non-subsampled unlocatables)
by telephone and, for the clustered sample, an additional 10 percent in person. This will yield 7,500
completed interviews under each sample component, resulting in a total of 15,000 completed
12 RR AAPOR = # of completed interviews / (# of sampled cases – estimated # of ineligible cases).

13 As explained above, these cases represent the 10 percent unlocatable in the unclustered sample.

14 For both components, a child will be deemed ineligible for the sample due to relocation out of the state or death. Because the
time elapsed between frame construction and data collection will be short, the number of these children will be negligible. Hence, in
the clustered sample, nearly all sampled children will be eligible for the study.


interviews. All 7,500 interviews in the unclustered sample will be completed by telephone, while
approximately 6,500 in the clustered sample will be completed by telephone, with the additional 1,000
completed by field interviewers (most often by giving the respondent a cell phone to contact the
telephone interviewer).
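Because the passage above rests on simple rate arithmetic, it can be reproduced directly; the sketch below restates the allocations and completion rates just described.

```python
# Restating the sample-yield arithmetic given above.
unclustered, clustered = 11_538, 10_000

eligible = 0.80 * unclustered + clustered   # unclustered unlocatables are
                                            # non-subsampled nonrespondents
print(round(eligible))                      # 19230 eligibles

phone = 0.65 * (unclustered + clustered)    # 65 percent by telephone
field = 0.10 * clustered                    # extra 10 percent (clustered) in person
print(round(phone + field))                 # 15000 completed interviews
```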
Precision. In order to examine the precision of the target sample size and distribution across
the states, we first estimated the likely design effects associated with clustering and non-response
adjustments, and the unequal weighting arising from the (equal) allocation of the sample sizes across
states. Based on our experience fielding the prior survey, we looked at a range of possible design
effects, and determined that a design effect of 1.5 would be reasonable for this exercise for analyses
within states. 15 When pooling data across states, the design effect due to unequal weighting is 2.26,
resulting in an overall design effect of 3.39 (1.5 × 2.26). When making comparisons among subgroups,
these design effects are slightly reduced because the cluster size is reduced.
Next, using these design effects, we analyzed the confidence interval (CI) half widths for a
series of descriptive statistics, calculated for different combinations of states, domains and
subgroups. In Table VII.1, we illustrate the findings from this analysis for the CHIP survey, focusing
on three illustrative proportional outcomes having sample means: (1) 50 percent; (2) 25 percent (or,
equivalently, 75 percent); and (3) 10 percent (or, again equivalently, 90 percent). In each row of the
table, we display for each illustrative outcome the associated CI half width for a specified sample
size and sample composition of interest.
The half-width estimates illustrated in Table VII.1 show that there is clearly sufficient precision
for anticipated outcomes when pooled across states. This is true whether the outcomes focus on the
full population or on subgroups. For example, for outcomes measured for a full sample domain of
roughly 5,000 respondents (shown in row one of the table), the half widths are 2.6 percentage points
for a 50 percent proportion, 2.2 points for a 25/75 percent proportion, and 1.5 points for a 10/90
percent proportion. Half widths naturally rise when focusing on subgroups. However, even for a 25
percent subgroup within a domain, the half widths for the illustrative outcomes are less than 5
percentage points. 16

15 Based upon our experience in the previous iteration of this survey, we assumed an intracluster correlation coefficient of 0.03
and a design effect due to nonresponse adjustments of 1.32.
16 Based on findings from the earlier study, a 25 percent subgroup approximates many of the focal subgroups for the evaluation,
including children with elevated health care needs, children in low-education households, and children in Spanish-speaking
households.


Table VII.1. Confidence Interval (CI) Half Widths for Illustrative Outcomes Given Equal Allocation of the CHIP Sample Across States

                                      Estimated CI Half Widths for Illustrative Proportions
                                                (shown in percentage points)

                                      X = 50%              X = 25% (or 75%)      X = 10% (or 90%)
                                      (E.g., Had Recent    (E.g., Has Elevated   (E.g., Has Unmet
Sample Size [Composition]             Preventive Visit)    Health Care Need)     Dental Need)

10 States Pooled
5,000 [full sample domain]            2.6                  2.2                   1.5
2,500 [50% domain subgroup]           3.5                  3.0                   2.1
1,250 [25% domain subgroup]           4.9                  4.2                   2.9

Single State
500 [full sample domain]              5.4                  4.6                   3.2
250 [50% domain subgroup]             7.3                  6.4                   4.4

Note: The confidence interval half width is equal to the standard error of an outcome multiplied by the standard normal deviate used in a 95% confidence interval, 1.96. Standard errors have been adjusted to reflect the expected design effects under an equal allocation of sample members to states (see text for details).
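The full-domain half widths in Table VII.1 follow from the usual formula for a proportion, inflated by the design effect, as the sketch below verifies. (Subgroup rows use slightly smaller design effects, per the text, and so are not reproduced here.)

```python
from math import sqrt

def ci_half_width(p, n, deff):
    """95% CI half width for a proportion, inflated by the design effect."""
    return 1.96 * sqrt(deff * p * (1 - p) / n)

# Full-domain rows of Table VII.1: 10 states pooled (n=5,000, deff=3.39)
# and a single state (n=500, deff=1.5), in percentage points.
for n, deff in [(5000, 3.39), (500, 1.5)]:
    print(n, [round(100 * ci_half_width(p, n, deff), 1) for p in (0.5, 0.25, 0.1)])
# 5000 -> [2.6, 2.2, 1.5]; 500 -> [5.4, 4.6, 3.2]
```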

While the primary focus of the evaluation will be on outcomes and subgroups defined across
states—an approach we adopted successfully for the prior study—we naturally plan to explore these
outcomes at the state level as well. As seen in the lower rows of Table VII.1, precision falls when
focusing on state-specific outcomes. For a full sample domain, the largest half width shown (for a
proportion of 50 percent) is 5.4 percentage points. This number increases to 7.3 percentage points
under a 50 percent subgroup.
An alternative to the planned equal allocation of sample members across states is proportional
allocation, whereby the number of sample members in each state would be approximately
proportional to the total population of CHIP recipients in that state. Under this scenario, weights
across states are approximately equal, substantially reducing the design effects and improving
precision when states are pooled. For example, the half width for a proportion of 50 percent
decreases from 2.6 to 1.7 percentage points (not shown). However, proportional allocation can be
expected to dramatically reduce the sample size for states with relatively small CHIP populations,
substantially reducing precision of these states’ statistics. For example, based on the ten states
currently recommended for the evaluation, the half width for a relatively small state like Oregon is
estimated to be over 12.4 percentage points for a proportion of 50 percent (not shown).
Another alternative to equal allocation is a compromise allocation, which applies an initial
proportional allocation but then reallocates some of the sample members from large states to small
states. The benefit of compromise allocation is that it can improve precision (relative to equal
allocation) when pooling samples across states without precluding a sufficiently powered analysis in
small states. As an example of compromise allocation, Table VII.2 shows half widths given a
maximum of approximately 800 sample members (per domain) in large states and 400 sample
members in small states. Under this scenario, the design effect due to unequal weighting across
states is again reduced, though less substantially, from 2.26 to 1.60. While the half widths for all
specified sample sizes and compositions are below those estimated under equal allocation (shown in
Table VII.1), the reduction is particularly evident for the 25 percent subgroup. For example, the
half width for a proportion of 50 percent declines by nearly one full percentage point, from 4.9
points under equal allocation (Table VII.1) to 4.0 points under compromise allocation (Table VII.2).

This gain does come at the cost of reduced precision for a small-state estimate, but the reduction is
not severe. For example, the half width for a full-domain sample estimate in Oregon would increase
from 5.4 percentage points under equal allocation (Table VII.1) to 6.0 points under compromise
allocation (Table VII.2).
Table VII.2. Confidence Interval (CI) Half Widths for Illustrative Outcomes Given a Compromise Allocation of the CHIP Sample Across States

                                          Estimated CI Half Widths for Illustrative Proportions
                                                    (shown in percentage points)

                                          X = 50%              X = 25% (or 75%)      X = 10% (or 90%)
                                          (E.g., Had Recent    (E.g., Has Elevated   (E.g., Has Unmet
Sample Size [Composition]                 Preventive Visit)    Health Care Need)     Dental Need)

10 States Pooled (CHIP Sample)
5,000 [full sample domain]                2.1                  1.9                   1.3
2,500 [50% domain subgroup]               2.9                  2.5                   1.8
1,250 [25% domain subgroup]               4.0                  3.5                   2.4

Individual State
800 [domain in larger state; e.g., CA]    4.2                  3.7                   2.5
400 [domain in smaller state; e.g., OR]   6.0                  5.2                   3.6

Note: The confidence interval half width is equal to the standard error of an outcome multiplied by the standard normal deviate used in a 95% confidence interval, 1.96. Standard errors have been adjusted to reflect the expected design effect under a compromise allocation of sample members to states (see text for details).

Finally, to assess the available precision when comparing outcomes among samples (for
example, between new and established enrollees) or among sub-groups (for example, defined by race
and ethnicity or other demographics), we estimated minimum detectable differences, or “MDDs,”
for alternative sample sizes. We calculated all MDDs with power of 80 percent for two-tailed tests
of significance with 95 percent confidence.
Table VII.3 shows MDDs for comparisons of two sample domains for a pair of illustrative
proportions under an equal allocation per state. When pooling the 10 states’ data and comparing
outcomes between two full sample domains (top panel; row one), we have sufficient statistical
power to detect differences of 5.2 percentage points for a proportion of 50 percent and differences
of 4.5 percentage points for a proportion of 25 (or 75) percent. These differences are relatively
modest—both equivalent to effect sizes of just over 10 percent (not shown), which is commonly
considered “small” in social science research (Cohen 1988). MDDs naturally increase for
comparisons of subgroups, but they remain around levels that can detect meaningful differences at
desired power. For example, for a comparison between domains for a 50 percent subgroup (top
panel; row two), the MDD on a 50 percent proportion is 7.1 percentage points, again equivalent to a
“small” effect size. Perhaps not surprisingly, comparisons between domains or other subgroups
within a single state have relatively weak statistical power. For example, a comparison of two
domains within a state (top panel; row four) has an MDD of 10.9 percentage points on a 50 percent
proportion—a large jump even from the sub-group comparisons across states. We assume that the
study of such within-state differences will be a relatively low priority for this study, as it was for the
prior CHIP evaluation.
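The pooled full-domain MDDs in Table VII.3 can be verified with the standard two-sample formula, inflated by the design effect, as the sketch below shows; subgroup and within-state rows reflect the adjusted design effects described in the table note and are not reproduced here.

```python
from math import sqrt

def mdd(p, n1, n2, deff):
    """Minimum detectable difference between two proportions at 80% power
    and 95% significance (two-tailed), inflated by the design effect."""
    return (1.96 + 0.84) * sqrt(deff * p * (1 - p) * (1 / n1 + 1 / n2))

# Pooled full-domain comparisons from Table VII.3 (deff = 3.39),
# reported in percentage points.
for n1, n2 in [(5000, 5000), (5000, 2500)]:
    print(n1, n2, [round(100 * mdd(p, n1, n2, 3.39), 1) for p in (0.5, 0.25)])
# (5000, 5000) -> [5.2, 4.5]; (5000, 2500) -> [6.3, 5.5]
```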


Table VII.3. Minimum Detectable Differences (MDDs) for Illustrative Outcomes Given Equal Allocation of the CHIP Sample Across States

                                                   Estimated MDDs for Illustrative Proportions
                                                          (shown in percentage points)

                                                   X = 50%               X = 25% (or 75%)
                                                   (E.g., Had Recent     (E.g., Has Elevated
Sample Size [Comparison/Composition]               Preventive Visit)     Health Care Need)

Comparisons of two domains with equal (1:1) sample sizes
Ten States Pooled
5,000 : 5,000 [full domain vs. full domain]        5.2                   4.5
2,500 : 2,500 [50% subgroup comparison]            7.1                   6.1
1,250 : 1,250 [25% subgroup comparison]            9.9                   8.5
Single State
500 : 500 [full domain vs. full domain]            10.9                  9.4
250 : 250 [50% subgroup comparison]                14.8                  12.8

Comparisons of two domains with unequal (2:1) sample sizes
Ten States Pooled
5,000 : 2,500 [full domain vs. half domain]        6.3                   5.5
2,500 : 1,250 [50% subgroup comparison]            8.6                   7.5
1,250 : 625 [25% subgroup comparison]              12.1                  10.5
Single State
500 : 250 [full domain vs. half domain]            20.0                  17.3
250 : 125 [50% subgroup comparison]                28.3                  24.5

Note: The MDD is equal to the smallest difference between two samples that can be detected for a specified level of power and statistical significance. (We calculated the MDDs above under standard assumptions of 80% power and 95% statistical significance, two-tailed test.) Standard errors for calculating the MDDs have been adjusted to reflect the design effects that we expect for the different sample compositions shown, based on the results from the prior CHIP survey (see text for details).

For certain comparisons, we anticipate that we may not be able to have an equal number of
sample members in the two domains. This is particularly likely in the case of the comparison of
established and new enrollees in support of the impact analysis, where a sizeable fraction (perhaps
half) of the new enrollee domain may have to be excluded because it serves as a poor
counterfactual. 17 Naturally, the MDDs rise appreciably under such a comparison. For example, for
sample comparisons involving the full sample domain (row one of each panel), the MDDs rise
roughly one percentage point. Perhaps more notably, comparisons within states are no longer
meaningful, since only very large differences can be detected with appropriate power. For example,
for a proportion of 50 percent, the MDD under the unequal comparison rises to 20 percentage
points, far too large to support a credible analysis.

17 For this analysis, the reported outcomes of new enrollees prior to CHIP coverage will serve as the counterfactual
for the reported outcomes of established enrollees during coverage. However, because CHIP children cannot be eligible
for Medicaid at the same time, the counterfactual must exclude new enrollees who report Medicaid coverage prior to
enrolling in CHIP. Conservatively, we estimate that this exclusion could lead to loss of half the new enrollee sample for
purposes of this analysis, resulting in a sample size ratio of 2:1 between the established enrollee and new enrollee domains.


Table VII.4 shows the same set of 10-state comparisons as Table VII.3, given the
compromise allocation discussed above. As with the half width estimates, the MDDs improve with
this approach compared with equal allocation. For example, for a proportion of 50 percent, the
MDD declines by roughly a full percentage point, from 5.2 points (Table VII.3) to 4.3 points (Table
VII.4) given a comparison of two full domains (5,000 : 5,000) and from 6.3 points (Table VII.3) to
5.3 points (Table VII.4) given a comparison of a full and half domain (5,000 : 2,500). This difference
becomes even larger for subgroups; for example, the decline for this same proportion given a 25
percent subgroup is close to two points, from 9.9 points to 8.2 points for a comparison of two full
domains. The tradeoff with compromise allocation is again the loss of detection power for smaller
states, which was already marginally sufficient at best in the case of equal allocation. For example,
for a smaller state in which we have just 400 sample members, the MDD for a comparison of two full
domains with a proportion of 50 percent is 11.7 points, arguably too large for a meaningful test of
differences in outcomes, but not much worse than that given with an equal allocation (10.9 points,
as shown in Table VII.3).
Table VII.4. Minimum Detectable Differences (MDDs) for Illustrative Outcomes Given a Compromise Allocation of the CHIP Sample Across States

                                                   Estimated MDDs for Illustrative Proportions
                                                          (shown in percentage points)

                                                   X = 50%               X = 25% (or 75%)
                                                   (E.g., Had Recent     (E.g., Has Elevated
Sample Size [Comparison/Composition]               Preventive Visit)     Health Care Need)

Comparisons of two domains with equal (1:1) sample sizes
Ten States Pooled
5,000 : 5,000 [full domain vs. full domain]        4.3                   3.8
2,500 : 2,500 [50% subgroup comparison]            5.9                   5.1
1,250 : 1,250 [25% subgroup comparison]            8.2                   7.1

Comparisons of two domains with unequal (2:1) sample sizes
Ten States Pooled
5,000 : 2,500 [full domain vs. half domain]        5.3                   4.6
2,500 : 1,250 [50% subgroup comparison]            7.3                   6.3
1,250 : 625 [25% subgroup comparison]              10.1                  8.7

Note: The MDD is equal to the smallest difference between two samples that can be detected for a specified level of power and statistical significance. (We calculated the MDDs above under standard assumptions of 80% power and 95% statistical significance, two-tailed test.) Standard errors for calculating the MDDs have been adjusted to reflect the design effects that we expect for the different sample compositions shown, based on the results from the prior CHIP survey (see text for details).

B. Instrument Content and Design
1. Substantive Content of the Instrument

The survey instrument is designed to capture data on outcomes in the following areas: (1)
application and enrollment; (2) access, use, content of care and satisfaction; (3) program retention,
renewal and disenrollment; and (4) relationship between CHIP and other coverage. Additional
sections obtain data on various characteristics of children and their families that will be used to
support a range of descriptive and multivariate analyses, including age, income, language, and other
demographic and socioeconomic characteristics; health status and chronic conditions; and parental
attitudes about the efficacy of health care. Separate modules will be developed for the three types of

sample members (new and established enrollees, and disenrollees). The concepts covered will be
largely the same across the different modules but the reference time period will depend on the
enrollment or disenrollment status of the child at the time of sampling. The amount of time required
to complete the survey will be 30 to 40 minutes. Key issues addressed in the survey will include:
• Application and Enrollment. We will ask how families learned about the program,
their experiences applying and enrolling, and, as applicable, their experiences with the
renewal and disenrollment processes. Questions will cover failed application attempts;
mode of application; problems encountered in applying to the program, such as
supplying the required documentation; language and other types of assistance provided
during the application process; waiting periods; notification of impending program status
changes; and the relationship to the child of the person who applied for the program.
• Access, Use, Content of Care, and Satisfaction. This section will focus on
experiences accessing and using health services, including questions about receipt of
specific types of services (such as well-child visits, emergency room visits, dental visits,
prescription drugs, and so on), whether care was delayed or they did not receive a service
they thought they needed, and the reasons why this happened. Expanding on the prior
survey, we plan to include a new emphasis on process and outcome measures,
particularly for children with asthma, and more detail on the content of care. While the
final content has not been decided, we hope to explore whether families receive
anticipatory guidance on, for example, diet and exercise, and whether the child receives
recommended screenings and preventive care, including flu shots. Additional questions
will ask about the child’s usual source of care for medical and dental care and the
caregivers’ experiences and satisfaction with this care. Patient-centered medical home
concepts will be explored, including how providers communicate with families and how
care is coordinated across different settings. Access issues explored will include
experiences finding a provider and obtaining timely appointments, travel times and
office hours (general and dental), whether needed services are adequately covered, and
the financial burdens families face in meeting the child’s health care needs.
• Program Retention, Renewal, and Disenrollment. Questions in this section will
address awareness of program renewal requirements, and experiences with the renewal
process. We will explore why children remain enrolled or become disenrolled from the
program and their coverage experience following CHIP.
• Health Insurance Coverage. This section of the survey will examine coverage
transitions and assess the extent to which CHIP appears to be displacing employer
coverage (“crowd-out”). We hope to expand the previous evaluation’s analyses and
estimates by drilling down by income group and by taking into account the availability
and affordability/generosity of dependent coverage through employer-sponsored
insurance (ESI). We will also explore how the costs and benefit packages of employer
plans compare with CHIP in light of ACA’s provisions regarding interactions between
employer coverage and CHIP. In addition, we will examine how the transitions between
public coverage, private coverage, and uninsurance, as well as crowd-out estimates, vary
across states and, to the extent possible, we will explore the association between state-specific CHIP policy choices and these outcomes.
• Child and Family Characteristics, Including Child Health. Questions in this section
will target the characteristics of covered children and their families, such as age, race and
ethnicity, gender, primary language, household income, family structure, parental

employment and coverage status, among others. Questions about the health of the child
will assess the presence of special health care needs and chronic health conditions,
including asthma and behavioral health conditions, and whether the child is limited in
any way because of health conditions. Parent attitudes toward health and health care will
cover questions related to the importance of being covered by health insurance, attitude
toward risk-taking behaviors, current health status of respondent, and the importance of
regular checkups.
The survey instrument from the first ASPE-sponsored CHIP evaluation is the foundation for
the new instrument. We modified prior questions to improve wording as needed and added
questions to address new topics of interest, using other validated survey questions on child health
and coverage as the first source for any new questions. For example, questions about the concept of
a patient-centered medical home were not included in the first survey, but given the importance of
this topic we include questions in the current survey that will allow us to characterize the medical
home-related aspects of the care setting and process of care, using existing instruments that offer
validated questions on the topic. 18 The surveys we examined include the National Survey of
Children’s Health, the National Health Interview Survey, the Medical Expenditure Panel Survey,
and several surveys of Healthy Kids programs in California fielded by Mathematica during the past
decade.
2. Instrument Development Process

As a first step in developing the survey instrument, we developed a matrix showing the
domains, subdomains and topics being considered for possible inclusion. The matrix included all of
the content areas from the previous enrollee/disenrollee survey along with new content areas being
considered for the current evaluation. The matrix has columns for capturing input on whether
questions on a given topic should be added, modified, or dropped. For new or modified content, the
matrix notes survey instruments to examine for possible questions or wording changes. The content
matrix and supporting materials along with a preliminary set of recommendations were shared with
ASPE and federal workgroup members and we met to get their feedback in mid-February 2011. We
then developed an initial draft of the survey instrument for the pretest. We shared the pretest
version of the draft instrument with ASPE and the federal workgroup and will address their
comments along with input arising from the pretest interviews. A final draft instrument will be
submitted with the final OMB clearance package submitted in mid-May 2011. The pretested version
of the instrument is included as an attachment to this report.
3. Instrument Design Challenges

We faced three design challenges in developing the survey instrument: (a) adjusting for program
status changes that occur after sample selection, (b) anchoring questions to optimize recall, given
program status changes, and (c) minimizing measurement error through rigorous use of survey best
practices, such as pretesting and use of question-by-question specification during training.
a. Adjusting for Program Status Changes

18 For example, both the National Survey of Children with Special Health Care Needs and the National Survey of
Children’s Health have tested medical home components.


At the time of sampling, the sample children will be selected into one of three enrollment-based
strata: new enrollees, established enrollees, and recent disenrollees. Members of each stratum will be
assigned a primary recall period based on their enrollment status: for new enrollees, this will be the
12 months before their enrollment dates; for established enrollees, the 12 months prior to the date
of interview; and for recent disenrollees, the 12 months before their disenrollment dates. The survey
goal is to interview the respondent (parent/guardian) for each sample child as close to selection time
as possible in order to minimize program status changes and maximize recall of events. However, as
the fielding period for each of the two projected sample releases is estimated to require eight months
to allow for both CATI and field data collection, inevitably some sample members will change
program status during this fielding period. While the primary recall period will remain unchanged for
all sample children, the instrument will be designed to capture changes in program status and adjust
secondary recall periods accordingly. The different recall periods are described below and illustrated
in Figures VII.1 to VII.3; a brief sketch following the list illustrates the assignment logic.
Figure VII.1 Illustrative Recall Periods by Program Status at Interview for New Enrollees Sample

Figure VII.2 Illustrative Recall Periods by Program Status at Interview for Established Enrollees Sample


Figure VII.3 Illustrative Recall Periods By Program Status at Interview for Sample of Disenrollees

• New enrollees. New enrollees will be asked about events and experiences during their
primary recall period. They also will be asked about their enrollment experiences and
about program events and experiences from enrollment until the date of the interview.
Some new enrollees may leave the program before we can locate and interview them.
In those cases, we will ask them about their experiences and events during
their primary recall period, during program participation, and with the disenrollment
process ending at date of disenrollment. Sample members will not be excluded based on
change of status.
• Established enrollees. Established enrollees will be asked about events and their
experiences participating in the program during their primary recall period. Some
established enrollees may have left the program after selection into our sample. In those
cases we will ask them about program events and experiences in the 12-month period
before the date of disenrollment, during the disenrollment process, and in the time
between the disenrollment date and the date of the interview.
• Recent disenrollees. Recent disenrollees will be asked about events and experiences
while participating in the program and while disenrolling from it during their primary
recall period. We will also ask about events and experiences between the date of
disenrollment and the date of the interview. Some recent disenrollees may have
reenrolled subsequent to their selection into the sample. In those cases, we will ask them
about program events and experiences in the primary recall period, during the
disenrollment process, and in the time between the disenrollment date and the
reenrollment date.
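To make the recall-window assignment concrete, the sketch below computes the 12-month primary recall window from a sample member's stratum. The 365-day window and dates are illustrative only; actual anchors will depend on state eligibility rules, as discussed below.

```python
from datetime import date, timedelta

def primary_recall_period(stratum, enroll_date=None,
                          disenroll_date=None, interview_date=None):
    """12-month primary recall window by sampling stratum, per the rules
    described above. Dates are illustrative; exact anchors vary by state."""
    if stratum == "new enrollee":
        end = enroll_date
    elif stratum == "established enrollee":
        end = interview_date
    elif stratum == "recent disenrollee":
        end = disenroll_date
    else:
        raise ValueError(stratum)
    return end - timedelta(days=365), end

print(primary_recall_period("new enrollee", enroll_date=date(2012, 3, 1)))
# (datetime.date(2011, 3, 2), datetime.date(2012, 3, 1))
```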
b. Anchoring the Questions
The fact that states have different eligibility rules means that no single set of questions or
probes will suffice for anchoring respondents to events that determine the beginning and end of the
primary recall period and other recall periods. Correct anchoring will be crucial for the validity and
reliability of the data. For questions related to access and experiences with services before and after
enrollment in the program, respondents will have to recall correctly when their child became eligible

for services. In states with presumptive eligibility, or where children become eligible for services on
the first day of the month after they are enrolled in the program, this may not be the date the parent
received the child’s CHIP or Medicaid enrollment card. In some cases, we may have to include
additional questions or probes for respondents with specific eligibility and disenrollment codes, or
limit certain questions to respondents in specific states.
Ensuring that respondents understand which program the questions refer to may necessitate
another important set of clarifying questions. Recipients may know the program by its state-specific
name, rather than as CHIP. State CHIP and Medicaid program names (often in both English and
other languages) will be imported into the CATI software and pre-filled based on the state. State
health care plans may also offer CHIP and Medicaid services under different names. We will
ascertain this variability in discussions with states during the first year of the evaluation and add plan
names to the CATI software as needed to ensure that respondents recognize the program being asked about.
c. Minimizing Measurement Error

Measurement error may occur randomly or systematically. While random error affects data
provided by individuals, it has no biasing effect on the data from the entire sample. Systematic
measurement error can result in bias in the data. Almost all data for the 10-state survey will be
collected exclusively from the sample child’s primary caregiver. Respondents may introduce
measurement error into the data as a result of faulty recall of service use or enrollment status
(discussed above). Other error may occur when respondents overreport socially desirable outcomes
(use of dental care, well-child checkups) or underreport socially undesirable outcomes (need for
mental health care). We plan a number of proactive steps to avoid the biases caused by measurement
error.
• Through carefully constructed screening questions, we will identify and seek to interview
the primary caregiver who can best answer questions about the child’s receipt of health
services and program participation events. The relationship of this respondent to the
child will be recorded so that, if necessary, analysts can later control for the respondent’s
degree of familiarity with the child’s health care and program participation.
• To the extent possible, we will rely on modules from the prior CHIP survey instrument
and will incorporate new modules from existing instruments that have been successfully
used on other surveys.
• To minimize potential systematic mode effect (which could play a role in bias), we will
conduct all surveys using the same CATI software whether an interviewer at our phone
center calls the respondent or a field locator calls into the center to have the interview
conducted by a CATI interviewer. All interviews will be conducted on the telephone by
the same extensively trained interviewers.
• We will conduct two separate pretests of the instrument to ensure that we are using
unambiguous, clear language. A special focus during the first pretest will be on exploring
the language that will make the primary and secondary recall periods as salient as
possible to all respondents. The first 50 to 100 completed cases will constitute a second
pretest. The pretests are described in more detail below.
• We will train all interviewers extensively on the use of the survey instrument, focusing on
question-by-question intent and appropriate, neutral probes for each. Ten percent of all
interviews will be monitored using equipment that allows the monitor to hear both the

interviewer and respondent as well as see the keystrokes the interviewer enters. While 10
percent of all interviews are monitored, the bulk of monitoring will occur during the first
two weeks of interviewing in order to catch and correct any errors as quickly as possible.
Note that interviewers bilingual in Spanish will be given additional Spanish-specific
training in question-by-question intent and probing.
• We will conduct a frequency review at the end of the second pretest and after 20 percent
of the cases are complete. Early frequency reviews will identify any systematic errors in
programming so that they can be quickly corrected and tested.
• In addition, we will examine the validity of the estimates from our survey by comparing
our findings to those from other comparable studies such as the National Health Interview
Survey, the California Healthy Kids surveys, and the National Survey of Children’s
Health.
4. Translation and Interpretation

In order to give voice to non-English speakers who constitute significant proportions of the
CHIP/Medicaid population, our in-house Spanish translation team will translate the instrument into
Spanish, maintaining the original meaning as closely as possible. Our translation capabilities are
consistent with procedures recently established by the Bureau of the Census. English and Spanish
versions of the instrument will be programmed into the CATI software (Blaise) so that bilingual
interviewers will be able to “toggle” between English and Spanish versions as required. All advance
materials (letter, brochure) will also be translated into Spanish.
Because we cannot know beforehand the number of sample members who will speak languages
other than English and Spanish, programming the instrument into languages other than English and
Spanish would be cost prohibitive. We will meet this challenge by using an interpretation service
vendor we have used on numerous previous studies and with whom we have a good working
relationship. As a new language is identified, the vendor will translate the advance letter and
brochure into that language. (We will independently verify the translation using a second vendor.)
Mathematica has developed culturally sensitive trainings based on characteristics of different ethnic
groups. We will work with the vendor to identify critical and/or culturally sensitive questions and
have them translated (and independently verified) before any interviews are conducted. We will use a
module we developed for a previous study (for Healthy Start) to train interpreters in basic
interviewing skills (for example, maintaining a professional relationship, reading the question only as
written, and using only approved probes). Specially trained Mathematica interviewers will work with
the vendor’s interpreters to contact the respondents and convince them to participate in their own
languages. The interviewer will ask the question in English; the interpreter will interpret into the
target language and then translate the responses for the interviewer, who will record the response in
the questionnaire.
5. Pretesting

To be valid and reliable, the survey instruments must effectively communicate their questions in
the same ways to different subpopulations or types of respondents. Thorough pretesting ensures the
instruments reflect and properly measure the outcomes and subjective experiences we want to
measure. To achieve this, we will conduct two pretests:
First pretest. The schedule of the first pretest is shown below in Table VII.5.

Table VII.5. Schedule of First Pretest

Activity                                     Start Date      Completion Date
Recruit pretest respondents                  Tue 3/21/11     Fri 3/25/11
Pretest the survey instrument                Wed 4/6/11      Tue 4/19/11
Prepare pretest report                       Mon 4/20/11     Thur 5/5/11
Send pretest report to TOO for comment       Fri 5/6/11

The first pretest of nine respondents having characteristics similar to our sample members (recent
CHIP enrollees, recent CHIP disenrollees, some English-speaking, some Spanish-speaking) will be
conducted by telephone interviewers and monitored by the survey director and associate survey
director. It will take place during the interval between publication of the 60-day Federal Register
notice (FRN) and the 30-day FRN/submission of the supporting statements to the Office of
Management and Budget (OMB). We will pretest the survey, advance letter, and brochure, and
conduct a debriefing with the respondents at the end of each interview. We will evaluate respondent
comprehension of individual survey items and determine the length of the survey (our goal is a 35-minute survey). A special focus of the pretest will be on new items that have not been used in either
the prior CHIP or other previous surveys. For recall questions, we will examine whether the
reference periods in the questions allow for reliable recall and whether the anchor points are
unambiguous and clear. Importantly, during instrument development we have carefully crafted
subjective questions and questions about events and behaviors to make cognitive sense to the
respondents in light of their own experiences; we will probe during the pretest to examine whether
specific terms convey the intended meaning to respondents. Based on the pretest outcome, we will
submit a pretest report to the TOO, describing any problems identified and recommending
improvements for consideration. The report will also describe the average length of interview so that
this information can become part of the OMB public burden statement. With the TOO’s approval
we will implement any needed changes in the instrument and prepare it for the final OMB package
to be submitted at the time of the 30-day FRN.
Second pretest. The first 50 to 100 completed interviews will constitute a second, “live”
pretest. After 100 cases, we will pause interviewing to review the data frequencies and identify any
software errors. We also will debrief the interviewers to identify problems and seek suggestions for
revising language or resolving procedural issues. We will then submit a second, brief report to the
TOO on any problems encountered in fielding the survey, proposing solutions, and will assist the
TOO in communicating any substantive changes to OMB. Finally, with TOO and OMB approval,
we will make any needed changes to the Blaise program and test them thoroughly before resuming
data collection. Depending on the magnitude of problems and the types of corrections needed, we
may stop work for as long as one week.
6. CATI Programming
CATI technology will automatically route respondents through the primary and secondary recall
periods. It will fill subsequent questions based on responses to earlier ones, thus promoting a
seamless administration of the survey without interviewers having to worry about the sample
members’ enrollment status during the interview. Mathematica implements its survey instrument in
Blaise software. Blaise is versatile and readily integrates with Mathematica’s Survey Management
System (SMS) and Call Scheduling software. Together these three systems give the interviewers
access to the instrument on a controlled schedule, track the outcome of each case, provide daily
production reports based on structured status codes, and keep track of all locating information.
Range checks built into the CATI program identify out-of-range responses when they are recorded
and prompt the interviewer to probe whether the responses are correct. The program ensures that
correct routing is followed throughout, depending on sample stratum and on responses to earlier
questions. The software displays all prompts and interviewer instructions for more complex
questions. The software will also fill in each state’s Medicaid and CHIP program names according to
the respondent’s state. The interviewer will be able to access question-by-question instructions via
Blaise as needed.
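To illustrate the range-check logic described above, the following Python sketch shows the basic behavior in a language-neutral way; the actual checks are implemented natively in Blaise, and the question text, bounds, and function names here are hypothetical.

# Hypothetical sketch of the CATI range-check behavior described above;
# the actual checks are implemented natively in Blaise. Question text,
# bounds, and names are illustrative only.

def ask_with_range_check(prompt, low, high, max_probes=2):
    """Ask a numeric question; out-of-range entries trigger an interviewer probe."""
    for _ in range(max_probes + 1):
        raw = input(prompt + " ")
        if not raw.strip().lstrip("-").isdigit():
            print("INTERVIEWER: please record a whole number.")
            continue
        value = int(raw)
        if low <= value <= high:
            return value  # in range: accept and continue routing
        # Out of range: prompt the interviewer to verify with the respondent.
        print(f"INTERVIEWER: {value} is outside the expected range "
              f"[{low}, {high}]; please confirm the response.")
    return None  # repeated failures: flag the item for supervisor review

if __name__ == "__main__":
    visits = ask_with_range_check(
        "In the past 6 months, how many times did your child visit a doctor?", 0, 50)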
C. Data Collection Approach
1. Sample Release Schedule
Our experience suggests that the effort required to locate and contact sample members will be
reduced by releasing the sample in stages so that we can begin making calls soon after state data are
received. Releasing in stages will help to ensure that contact information for sample members is still
relatively fresh.
Releasing the sample in each of the 10 states on two separate occasions will result in 20 individual
releases. The sample (total = 22,220) would be released in two waves of 11,110 each, requiring a total
field period of approximately 10 months. By using this method of release, we will be more likely to
make contact with all of the sample members than if we were to release the entire sample at once,
and recall problems will be reduced. While releasing the entire sample at one time would shorten the
field period to approximately eight months and minimize the statistical and sample management
time required, it also would mean contacting many sample members toward the end of the year,
which could affect recall adversely. Moreover, the recall period would vary across sample members,
depending on when we contacted them relative to when they enrolled or disenrolled. Conversely,
releasing the sample three or more times during the year may be cost-prohibitive, as it would require
increased statistical sampling and sample management time. Thus, releasing the sample in two stages
provides a good balance between keeping recall periods short and not increasing management costs
excessively.
2. Optimizing Contact Information, Locating Sample Members, and Scheduling Interviews
To achieve a high response rate, we must optimize contact information by beginning locating
efforts soon after we receive the sample from the state, using a variety of national databases to locate
sample members, and coordinating the initial telephone calls with the mailing of the advance letter.
a. Optimizing Contact Information
The success of the data collection will depend greatly on keeping the time between sampling
and data collection as short as possible. Ideally, we would like to interview respondents while they
have the same program status they had at sampling so that retrospective recall will not be influenced
by changes in program status. However, because we are aware of some movement in and out of the
program, it is important that we expedite the acquisition and processing of the files for sampling and
fielding, and contact and interview respondents as quickly as possible.
Immediately after receiving the sample from the states, and prior to sending out the advance
letter, we will begin locating sample cases to maximize the number of cases we contact. Once we
mail out the letter, we will use the U.S. Postal Service (USPS) Return Service Request to obtain new
addresses for sample members who have moved. Mathematica’s locating module is part of our
Survey Management System (SMS), which allows us to document and organize contact information
and locating histories in electronic form. Locators will enter new contact information and telephone
numbers into the automated system for transfer to computer-assisted telephone interviewing
(CATI). If the numbers prove to be invalid, interviewers will transfer the cases back to the locating
group electronically.
b. Locating Sample Members
In preparing the sample for fielding, we will use commercial databases to increase the accuracy
of the information obtained from the state files and to add contact information that may not be
available in the files. To make locating operations as cost-effective as possible, we will begin our
efforts by searching major national databases, such as Accurint, that supply cell phone as well as
landline numbers. Using Accurint, locators can search for sample cases, matching by name, Social
Security number, or date of birth. Mathematica uses a range of other databases to supplement
Accurint, including the following:
• Social Security death index, to identify sample members who may have died or
confirm deaths reported by informants
• Military locator database, which contains name, address, branch of service, and
assignment for all U.S. military personnel
• Property/deed transfer records, which are indexed by name and contain addresses and
sometimes telephone numbers
• State, county, and civil court records, which contain information about lawsuits,
marriages, divorces, and bankruptcy filings
• Agency databases, including motor vehicle, social service, and correctional agencies, as
well as city and county tax assessors, local employment security and welfare offices,
schools, and voter registration lists; although response times and relevant local
regulations vary and so must be reviewed, these sources have proven very useful in
previous surveys
• Other locating databases—Mathematica has direct links to the MetroNet system,
DTec Search, Lexis-Nexis, and residential telephone listings available on the Internet;
each provides specialized locating capabilities.
c. Contacting Respondents to Schedule Interviews
Our first contact with each sample member will be by USPS mail. The letter, on HHS/ASPE
letterhead, will convey the importance and legitimacy of the survey and assure sample members of
confidentiality. It will also indicate that they will receive $20 for their participation in the survey and
assure them that the $20 will not affect their benefits from Medicaid, CHIP, or any other program.
It is crucial that we interview respondents as soon as possible after we draw the sample, when
the events covered in the interview are still fresh in their minds. Advance letters will be sent to all
sample members and timed to maximize the impact of the letter when we first call. For example, to
ensure that the letter arrives just before telephone contact, we have found it most efficient to send
letters on Fridays and begin CATI interviewing on the following Tuesdays.
For all cases, an automatic call scheduler will follow a protocol of calls across different time slices
and days of the week, with "no answer" cases going to locating after a set number of unsuccessful
attempts. Each nonresponding case will be reviewed to determine whether to continue the effort to
reach the family by phone or to discontinue it.
3. Survey Respondents
The key characteristic of the respondents is that their children are eligible for CHIP or Medicaid,
and the families are thus low income. The respondent to the survey will be a primary caregiver who
is familiar with the child's health and health care. In most cases, the respondent is expected to be the
child's mother; for a small percentage of cases, however, the respondent may be the father, a
grandparent, or another person in the household familiar with the child's health care. We expect
respondents to live in the same household as the child. Some adult or emancipated minor children
will act as respondents for themselves.
4. Conducting the Interviews (Unclustered and Clustered Samples)
Using CATI, we will begin collecting survey data from enrollees and disenrollees in both the
clustered and unclustered samples at Mathematica’s Survey Operations Center (SOC). Professional
and experienced interviewing staff will administer the survey interviews. In-person field locators will
supplement the CATI interviewing to maximize the response rate and will help to ensure the
representativeness of those who complete the interview.
When a sample member speaks a language other than English or Spanish, another adult living
with the sampled child may be asked to complete the interview or to translate. If no such adult is
present, we will schedule an interview with the respondent for a time when an interviewer fluent in
that language is available. When we anticipate languages other than English and Spanish, interpreters
trained in the interviewing conventions for this study will attempt the interview.
If in-house locators have found no usable telephone numbers for respondents within the
clustered sample (expected to be approximately 20 percent of cases), the field locators will search for
them, as well as for CATI nonrespondents. Upon finding the correct sample member, the locator
will ask him or her to call the SOC on a Mathematica-provided cell phone to conduct the interview.
If necessary, the locator can instead conduct the interview in person, using a paper questionnaire.
For the unclustered sample, we will make every attempt to contact and interview members by
telephone. We will follow up in person only if their locations fall within the same area as the
clustered sample in their state. We expect to close out cases in the unclustered sample that cannot be
located after 15 attempts.
To ensure that the criteria for closing cases not located by telephone are the same in the
clustered and unclustered samples at the end of the telephone phase, we will apply the same level of
telephone locating effort to both samples. We will avoid any differential treatment in locating cases
during the telephone phase by ensuring that interviewers, telephone locators, and supervisors do not
know whether a case belongs to the clustered or unclustered sample.

5. Minimizing Nonresponse
To achieve a 75 percent response rate for this study, we will address two sources of
nonresponse: non-contact and non-cooperation. Mathematica has sustained high response rates in
its nationwide and special population surveys—even in the more challenging data collection
environment we have faced in recent years—through extensive follow-up efforts and
implementation of innovative methods. These include providing specialized interviewer training,
contacting households at different times of the day, and attempting to reach all households within
the first few days of calling so that refusals are identified and conversion attempts begin as early as possible.
Interviewers will be trained to leave messages identifying their calls as part of a legitimate, important
research study and stress that they are not selling anything or asking for a donation, thereby reaching
people who otherwise screen their calls. In addition, as soon as we receive the sample from the
states, we will run all cases through Accurint to obtain the most updated contact information for
each case (described in more detail above under “Locating Sample Members”).
An important factor in reducing nonresponse is understanding why sample members decline to
participate in surveys, so that our strategies for preventing nonresponse address the reasons most
relevant to a particular respondent. These may include the following:
• Social environmental factors. Households inundated with unwelcome telephone
solicitations may have difficulty distinguishing the initial survey contact from a
telemarketing call. Mathematica has addressed this concern successfully by sending
well-written and persuasive notification letters, which have been shown to have a positive
effect on response to the subsequent telephone calls (Link and Mokdad 2005; Redline et
al. 2004). We will pay special attention to training interviewers in conveying the most
important messages about the study in the first few seconds of their calls and in leaving
effective answering machine messages. We will establish a dedicated toll-free number
that sampled households can call to verify the legitimacy of the survey, discuss concerns,
or complete the interview. We will include this number in the advance letters, and
interviewers will leave it in answering machine messages.
• Household characteristics. Sometimes respondents are still reluctant to participate
because they are concerned about the confidentiality and privacy of their information.
To address these concerns, our advance letters emphasize that maintaining
confidentiality is the cornerstone of our work, and we train interviewers not only to
address confidentiality routinely in their introductions but also to recognize specific
confidentiality concerns that can lead to refusals if left unaddressed. As part of their
training, interviewers role play these scenarios and thus can readily reassure respondents
about our data security procedures. Mathematica has also developed strategies to
accommodate respondents under time pressures—for instance, training interviewers to
offer to administer the survey in segments rather than in one session.
• Refusal conversion. Refusal conversion efforts will be critical in achieving a high
response rate. Interviewers will be trained in refusal conversion techniques and refusals
will be flagged in the CATI scheduler as they occur. Such sample members will be sent a
customized letter that addresses the member’s reasons for refusing and emphasizes the
importance of the study. Interviewers highly skilled at converting refusals then will
contact the sample member. For the clustered sample, sample members who refuse a
second time will be assigned to a field locator for an in-person follow-up. At
Mathematica, refusal converter staff are supervisors or senior interviewers who receive
specialized training and have considerable experience in working with difficult-to-persuade
households. The same interviewer who generated the initial refusal will not make the conversion
attempt; the converter will address the respondent’s reason for refusing and stress the importance of
the survey.
6. Staffing, Training, and Monitoring for QA
We expect to train enough staff to have sufficient telephone interviewer and locator coverage at
different times of day and days of the week, and enough field locator coverage in the clustered
sample areas to attend appointments. We will train all data collection staff collectively regarding the
study and will use individualized training modules for different staff roles. Monitoring will help to
ensure adherence to all study protocols and high-quality interviewer performance.
a. Staffing
We expect to train 43 telephone interviewers, 17 telephone locators, and 28 field locators. This
level of staffing will ensure that we can call respondents on different days of the week and at
different times of day, that field locators are available to attend appointments with respondents, and
that there are enough bilingual interviewers.
b. Training
For interviewers and field locators, Mathematica requires two types of training for all surveys:
general interviewing and locating training, and project-specific training. Our general training
introduces our interviewing and locating staff to survey research and to Mathematica. Interviewers
become proficient in using the Blaise CATI technology before learning interviewing techniques.
During the latter portion of the training, they develop the tools they need to collect accurate and
complete data: an understanding of the concept of samples, the importance of reaching the correct
respondent, and confidentiality; and skills in listening, neutral probing, persuasion, and recording
responses carefully and completely. Before the interviewers are certified or recommended for project
work, they must demonstrate the knowledge and skills needed for telephone interviewing. Locators
become proficient in the different databases and other sources of information that may help us
locate sample members. Field locators must also display proficiency in asking questions of neighbors
and relatives that will help us to locate the sample members we would like to contact. At
Mathematica, we conduct this type of intensive interview and locator training in person at our SOC.
For the study-specific training, we will produce manuals and other training materials for
interviewers, locators, monitors and supervisors, and data coders. The core training modules will
cover the purpose of the study, its funding, planned use of the data, characteristics of the sample
members, and an outline of data collection activities. The survey director and deputy director will
present all of this material at the training session, which ASPE staff will be invited to attend. All data
collection staff will participate in this portion of the training, which may be conducted using a
web-based training technology, the Remote Training Site (RTS); this will avoid costly travel to a
centralized training site. We currently are using RTS on three field projects and have found it effective in
providing training, offering trainees the opportunity to practice skills, and serving as a vehicle for
quality review of trainees’ performance. The RTS also serves as a library and resource for staff
during the data collection period, as all training-related documents are available to staff at all times.

In addition to the common modules of the training that pertain to all staff, we will offer some
training modules specific to staff with certain roles in the survey. For example, interviewer modules
will cover each item of the questionnaire in detail, followed by structured practice interviews. We
will require interviewers to practice until the trainers ascertain that they have mastered the material.
Trainers will offer telephone locators the core modules, plus refreshers on the latest telephonic, web,
and electronic searching techniques. Field locators will be trained similarly but will receive additional
training modules that focus on locating techniques, cell phone handoff protocols, and reporting.
They will be given role-play scenarios to use in person to communicate the legitimacy of the data
collection effort and our organization, convey the importance of the study, and encourage
participation. They will also be briefed on confidentiality protections and locating techniques.
Coders will be trained on the purposes of specific questions and the appropriate coding rules.
Because interviewers, locators, and coders will have different roles, we will individualize these
training modules to accommodate their needs.
In addition to the training manual documenting procedures to be used for data collection, we
will provide interviewers and locators with hands-on training that gives them an opportunity to
become fully proficient with the CATI instrument before beginning data collection. We also will
devote time to a discussion of the respondent population and procedures designed to ensure that
Mathematica reaches the designated respondents. We will also cover the importance of reassuring
sample members that their benefits will not be affected by their participation in the survey (or by the
incentive payment) and that we will keep their information confidential.
Finally, we will provide question-by-question instruction on the instrument, along with a
discussion of commonly asked questions and approved responses. Supervised, carefully scripted role
plays will be incorporated into the training to enable interviewers to practice contact procedures
(including locating the correct respondents), methods of persuasion and refusal avoidance, and the
full administration of the CATI instrument. Under our training program, use of the systems,
movement between screens, and other aspects of CATI become so familiar to interviewers that they
can spend their time and attention listening, recording, and responding to concerns without
experiencing technical distractions. They learn to record verbatim open-ended responses carefully,
select appropriate precoded categories, and assign the correct outcome code to each contact attempt
(which is key to proper sample management). Supervisors at the SOC conduct role plays and CATI
training in person.
c. Monitoring for Quality Assurance
Mathematica routinely monitors 10 percent of all completed interviews, using a more intense
monitoring schedule for the first weeks of the study. This monitoring system enables supervisors to
listen to interviews without the interviewers being aware of it. The system also allows supervisors to
view the interviewers’ input screens to monitor their accuracy in recording responses. Monitoring
during the first few weeks is very important to ensure that each telephone contact with a sample
member is professional and productive, and that interviewers are reading and recording accurately.
During this period, monitors are particularly interested in evaluating how the interviewers initiate
contact procedures and present information about the study; screening to ensure that interviewers
have reached the appropriate respondent; ensuring interviewer competency with the CATI
instrument; and monitoring interviewer accuracy in reading questions, providing neutral probes,
providing appropriate feedback to respondents, and accurately recording responses.
Monitors complete reports for each interviewer for each session; these are available to project
staff and the interviewer. Interviewers with high refusal or low productivity rates are given
immediate feedback about their performance and are removed from the study if their performance
does not improve. Monitors on this study will be required to attend the project-specific training
and may participate in debriefings and project meetings as appropriate. Supervisors with
Spanish-language capabilities will monitor interviews conducted in Spanish.
7. Tracking the Data Collection Effort
We will use a tracking and reporting system that will detect problems as soon as they occur and
enable us to make timely adjustments to sample releases and data collection procedures when
needed. Because we will draw the sample from 10 states, with each state including a clustered and an
unclustered sample of new enrollees, established enrollees, and disenrollees, released at two different
time points, we will need to track required sample sizes and response rates for 120 separate sample
components (10 x 2 x 3 x 2). We will also have both a telephone and a field data collection phase.
Depending on our focus, we may group the total sample by any of these variables (state,
clustered sample, unclustered sample, new enrollee, established enrollee, disenrollee) to monitor
survey progress at any time during the survey field period. For instance, we may monitor first-time
refusals by enrollment status so that we can pay closer attention to issues related to differential
refusal rates.
We will use the survey progress monitoring process to target our resources efficiently during
data collection. For example, we might track outcomes for the clustered sample to evaluate whether
certain geographic areas need more staff than others or ascertain whether additional training of
locators or some adjustment in the introductory script is necessary.
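To make this bookkeeping concrete, the sketch below enumerates the 120 sample components and computes a response rate for each cell. It is a minimal illustration with simulated data and hypothetical field names; the actual tracking would come from the Survey Management System reports described above.

# Minimal sketch of the 120-component tracking described above, using
# simulated data and hypothetical field names; production tracking would
# come from the Survey Management System's reports.
import itertools
import numpy as np
import pandas as pd

states = [f"State{i:02d}" for i in range(1, 11)]        # 10 study states
designs = ["clustered", "unclustered"]                   # 2 sample designs
statuses = ["new", "established", "disenrollee"]         # 3 enrollment statuses
releases = [1, 2]                                        # 2 release waves

# 10 x 2 x 3 x 2 = 120 separate sample components
components = list(itertools.product(states, designs, statuses, releases))
assert len(components) == 120

# Simulated case-level data standing in for real sample-status records.
rng = np.random.default_rng(0)
n = 22_220
cases = pd.DataFrame({
    "state": rng.choice(states, n),
    "design": rng.choice(designs, n),
    "status": rng.choice(statuses, n),
    "release": rng.choice(releases, n),
    "completed": rng.random(n) < 0.75,   # 75 percent target response rate
})

# Response rate for each of the 120 components, for progress monitoring.
rates = (cases.groupby(["state", "design", "status", "release"])["completed"]
              .mean().rename("response_rate"))
print(rates.head())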
8. Challenges
Although we have developed a robust, mixed-mode survey design, there are still potential
challenges to obtaining a high response rate, which we will address proactively to ensure a successful
data collection effort. In particular, we foresee challenges pertaining to the representativeness of
respondents and to locating sample members.
a. Challenge of Ensuring the Representativeness of Respondents in Relation to the Sample Population
Although we will strive for as high a response rate as possible, we should expect some
nonresponse, and the level of nonresponse may vary for different subgroups. We expect that this
variation can be corrected through weighting adjustments after the data are collected, as is
customary. However, if the differences are substantial, we may need to increase our efforts to
convert refusing sample members into respondents and lengthen the data collection period to obtain
enough respondents.
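As one common implementation of such an adjustment, the sketch below scales base weights by the inverse of a modeled response propensity. The design commits only to weighting adjustments in general terms, so the method and variable names here are illustrative assumptions.

# A minimal sketch of one standard nonresponse adjustment (inverse
# response-propensity weighting); the report specifies only "weighting
# adjustments," so the method and variable names here are assumptions.
import pandas as pd
import statsmodels.api as sm

def nonresponse_adjusted_weights(frame: pd.DataFrame) -> pd.Series:
    """Scale base weights by the inverse of each case's modeled response propensity."""
    # Predictors must be known for respondents and nonrespondents alike
    # (for example, enrollment status and sample design from the frame).
    X = sm.add_constant(frame[["disenrollee", "clustered", "child_age"]])
    propensity = sm.Logit(frame["responded"], X).fit(disp=False).predict(X)
    # Cap small propensities to avoid extreme weights.
    return frame["base_weight"] / propensity.clip(lower=0.05)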
To address the challenge of representativeness, we will examine the response rate overall at
regular intervals during the data collection period, as well as response by key sample characteristics.
This may include characteristics such as state, clustered vs. unclustered sample status, and
enrollment status. For example, we may find that disenrollees respond at lower levels than new and
established enrollees. We will compare the distributions of respondents to those in the sample
population. If there are large differences in response rates by these key characteristics, we will focus
our resources on increasing response among those groups of sample members with lower rates. In
this example, we could increase the salience of the survey for those groups by tailoring the request
to participate.
b. Challenges Pertaining to Locating
Because the members of our sample may be particularly vulnerable to recent economic events,
such as high unemployment and the collapse of the housing market, some may have lost their homes
or become more mobile as they search for work. For these reasons, it will be particularly important
to begin locating as soon as we receive samples from the states. Scheduling the sample release in each
state in two waves instead of one will also be important, as it will increase the probability of
contacting sample members in a timely manner.
Given our experience and the well-tested procedures described above, and based on our success
in conducting other telephone surveys with field follow-up, we believe a response rate of 75
percent is achievable for these surveys. On the National Beneficiary Survey, for example (a survey of
SSI beneficiaries), we achieved a response rate of 82 percent, and approximately three-quarters of
the completed interviews were obtained by CATI. We expect a similar proportion of completed
CATI interviews in this survey and that the locators will obtain the remaining interviews by
searching for and finding sample members at their current addresses.
D. Descriptive Analyses of CHIP Enrollees and Disenrollees
Our analysis of the survey data will begin by exploring descriptively the characteristics and
experiences of CHIP enrollees in the 10 study states.19 The objective of the descriptive analyses will
be to provide a comprehensive picture of CHIP children and their experiences at each stage of the
CHIP life cycle—during the period prior to their program start, through their CHIP application and
initial enrollment in the program, on through the period of established enrollment, and, finally,
during and after their eventual disenrollment from the program—based on a single cross-section of
CHIP enrollees and disenrollees. Over these stages we will study a wide range of descriptive topics,
such as how families hear about CHIP and become enrolled, the characteristics of covered children
and their families, how children are faring in different states and programs, why children and
families leave the program, and what their insurance status is after leaving.
A key component of the descriptive analyses will be to examine whether there are large and
systematic differences in the various outcomes related to each stage of the CHIP life cycle across
states, programs, and particular subgroups of children, such as children with special health care
needs and children in racial and ethnic minorities.
1. Research Questions
The descriptive analyses will address a broad range of research questions that can be grouped
into three categories:

19 We will also conduct a parallel analysis of the Medicaid survey of enrollees and disenrollees in three of the 10
CHIP survey states. The Medicaid analysis will include the same presentation of measures and analytical approach as
the descriptive analyses of the CHIP survey described in this section.
• Characteristics of CHIP enrollees. What are the socioeconomic and demographic
characteristics of CHIP enrollees and their parents? What proportion of children
enrolled in CHIP have special health care needs or an elevated need for medical care?
What type of insurance coverage, if any, do children have before entering the program?
Are their parents insured, and is dependent coverage available? How costly are coverage
alternatives?
• Experiences with the program. What are families’ experiences enrolling in CHIP?
What sources of information are important to families in deciding to enroll their children
in CHIP? Do they receive information on renewal processes upon enrollment? How
well is CHIP meeting the health care needs of children enrolled in the program? To
what extent are enrollees using health services, such as doctor visits, specialist visits,
and prescription drugs? Are enrollees receiving well-child care, flu shots, dental checkups, and anticipatory guidance on healthy behaviors and risk factors? Do they have a
usual source of care and how does the care they receive align with core medical home
concepts? What access barriers do enrollees encounter? Have they been able to find a
doctor and get timely appointments for needed specialty and dental care? Why do
families disenroll from CHIP, and what coverage, if any, do they obtain after leaving the
program? Among those who become uninsured, what share might still be eligible for
coverage through CHIP?
• Differences across individual subgroups, states, and programs. How do enrollment
experiences vary by race/ethnicity, parent education, family income, and other
sociodemographic characteristics? Do enrollment experiences differ by prior insurance
coverage or across states and program types? Which families are more likely to use
application assistance? Is the program meeting the health care needs of some children
better than others? To what extent do access to and use of different types of health
services vary across types or subgroups of enrollees, or across states? Which groups of
enrollees are more likely to disenroll from the program? Are certain types of disenrollees
more likely to obtain coverage after leaving the program, or to remain uninsured?
2. Data and Measures
To explore the wide range of topics to be examined in the descriptive analyses, information
from the survey will be used to construct an extensive set of outcome measures that together span
the main research topics to be addressed by the evaluation. The measures, summarized in Table
VII.6, fall into four broad categories: (1) application and enrollment; (2) access, use, content of care,
and satisfaction; (3) retention and disenrollment; and (4) the relationship between CHIP and other
public and private coverage. Most of these outcomes will be constructed
for each of the three sample domains—new enrollees, established enrollees, and disenrollees—
which will permit a detailed analysis of how outcomes differ among children in the different enrollee
groups. For example, comparisons between recent enrollees and established enrollees will provide
insight into how the experiences of CHIP enrollees in seeking and obtaining care compare with their
experiences prior to enrollment. Similarly, comparisons between established enrollees and
disenrollees in such matters as reasons for originally enrolling, satisfaction with care, and service
utilization can provide insight on factors affecting disenrollment.

Table VII.6. Illustrative Outcome Variables a

Application and Enrollment
Sources of information: Sources of CHIP information; most important source of CHIP information. (Relevant sample: recent enrollees)
Experiences with application and enrollment process: Ease of enrollment; received assistance with application; wait time to enroll; understanding of renewal requirements. (Relevant sample: recent enrollees)

Access, Use, Content of Care, and Satisfaction
Usual source of care (USC): Had USC in past 6 months (medical, dental); USC type; having a personal doctor or nurse. (Relevant sample: recent enrollees, established enrollees, disenrollees)
Access: Availability and ease of finding a doctor; can reach doctor after hours; ease of obtaining a timely appointment; wait time before getting care for scheduled appointment; affordability of care. (Relevant sample: recent enrollees, established enrollees, disenrollees)
Service use: Any physician visit; preventive care (well-child visits, immunizations, flu shot); dental care; vision care; prescription drugs; mental health; specialty care; hospitalizations and ER visits. (Relevant sample: recent enrollees, established enrollees, disenrollees)
Content of care: Provider asked parent to fill out questionnaire on child’s development, communication, and social behaviors; provider measured child’s height and weight; provider offered anticipatory guidance. (Relevant sample: recent enrollees, established enrollees, disenrollees)
Characteristics of care (medical home principles): Received needed referrals to receive care or services; received help arranging or coordinating care; received family-centered care; providers spend enough time with child; providers listen carefully to family concerns; providers are sensitive to family values and customs; providers give needed information; providers make family feel like a partner in child’s care; availability of an interpreter when needed. (Relevant sample: recent enrollees, established enrollees, disenrollees)
Unmet need: Doctor/other health professional; dental care; vision care; physical, occupational, speech therapy; prescription drugs; mental health; specialist care; hospital care. (Relevant sample: recent enrollees, established enrollees, disenrollees)
Parent perceptions of their ability to meet child’s health care needs: Confidence that child could get needed care; stress about meeting child’s health care needs; adequacy of insurance coverage; relative quality of care under CHIP; provider views of CHIP enrollees. (Relevant sample: recent enrollees, established enrollees, disenrollees)

Renewal and Disenrollment
Disenrollment experiences: Reason for disenrolling. (Relevant sample: disenrollees)
Renewal experiences: Knowledge of renewal process; receipt of renewal communication materials; ease of renewing coverage. (Relevant sample: new enrollees, established enrollees, disenrollees)

Relationship Between CHIP and Other Coverage
Coverage prior to enrollment: Type(s) of coverage 12 months before enrollment; length of time coverage was held; reason(s) for ending coverage/being uninsured. (Relevant sample: new enrollees)
Coverage after leaving the program: Type of coverage after disenrolling; reason for remaining uninsured. (Relevant sample: disenrollees)
Parent coverage and substitution: Any parent with employer insurance; any parent with public insurance; availability and cost of dependent coverage. (Relevant sample: established enrollees)
Preferences about coverage options: Factors considered when choosing a health plan; importance of select factors when choosing a health plan, including services covered, choice of providers, ability to keep providers, premium, out-of-pocket costs, transportation coverage, and having coverage under one policy. (Relevant sample: recent enrollees, established enrollees, disenrollees)

a This table presents an illustrative set of measures for each outcome category. The actual outcome
measures used in our analyses of the CHIP survey data may vary from what is presented in the table,
depending on the content of the final survey instrument, which is in the process of being developed.

We will also use survey responses to construct a base set of demographic and other explanatory
variables for use across all of the descriptive analyses (Table VII.7). These variables will be used in
three ways: (1) to describe the characteristics of the CHIP population across sample domains and
states, (2) to form key subgroups for the analysis, and (3) to serve as covariates in multivariate
analyses. In some cases, a variable might be both an outcome and a subgroup variable. For example,
respondents’ satisfaction with CHIP would be both an important outcome variable and a key
subgroup variable when examining rates of disenrollment.
3.

Analytic Approach

The methodological approach to the descriptive analysis will include three parts: descriptive
univariate and bivariate analyses, multivariate analysis, and benchmark comparisons of outcomes.
Descriptive analysis. The descriptive analysis will generate a series of descriptive statistics, in
two stages, for each of the outcome categories shown in Table VII.6. In the first stage, we will
calculate simple frequency distributions of each outcome variable within each sample domain and
state. This univariate analysis will provide basic but valuable information on enrollee characteristics,
behaviors, and perceptions at each stage of the CHIP life cycle within and across the study states.

Table VII.7. Illustrative Explanatory Variables

Child characteristics: Age; gender; race/ethnicity; primary language; self-reported health status; one or more chronic health conditions; diagnosed with asthma; presence of behavioral health condition; prior insurance coverage.

Family characteristics: Residence (urban/rural); parental education; household income; parental employment status; household size; family structure; parental satisfaction; attitudes and perceptions about efficacy of health care.

State-level variables: Type of program (S-CHIP, M-CHIP, or combination); program design features; economic indicators.

In the second stage, we will investigate in more detail children’s and families’ experiences and
behaviors in each stage of the CHIP life cycle, by examining whether and how they vary across
different groups of enrollees and states. Specifically, we will conduct a series of bivariate analyses (or
cross-tabulations) to examine whether and how the descriptive statistics vary across key individual
subgroups and states. For this analysis, we will focus on the individual subgroups listed in Table VII.7
that are of particular policy interest for the outcome considered. For example, we will examine how
reported experiences with the CHIP application and enrollment process vary by such child and
family characteristics as race/ethnicity, primary language spoken, and parent’s education. This
bivariate analysis can tell us, for instance, which demographic groups report the greatest barriers to
enrollment, the types of barriers they identify, and whether the barriers are more prevalent in certain
types of programs (for example, Medicaid expansion versus stand-alone programs).
For each cross-tabulation, we will generate simple test statistics to determine whether
statistically significant differences are evident in the distribution of the outcome across categories.
The specific test will depend on the structure of the measure: for continuous or binary measures we
will use a t-test, and for categorical variables we will use a chi-square test. All testing will account for
the complex sampling design by adopting appropriate sampling weights and by applying appropriate
corrections to standard errors for design effects from sample clustering and nonresponse.
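As a rough illustration of such a design-adjusted comparison, the Python sketch below tests a weighted difference in a binary outcome between two groups, using sampling weights plus cluster-robust standard errors to approximate the design-based corrections; the column names (has_usc, disenrollee, weight, cluster_id) are hypothetical, and production estimates would typically come from dedicated survey-estimation software.

# A rough sketch of a design-adjusted bivariate test, under assumed column
# names; weights plus cluster-robust standard errors approximate the
# design-based corrections described above.
import pandas as pd
import statsmodels.formula.api as smf

def weighted_domain_test(df: pd.DataFrame):
    """Test whether having a usual source of care differs for disenrollees."""
    model = smf.wls("has_usc ~ disenrollee", data=df, weights=df["weight"])
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster_id"]})
    # The coefficient on `disenrollee` is the weighted difference in means;
    # its t-test is the design-adjusted analogue of the two-group comparison.
    return fit.params["disenrollee"], fit.pvalues["disenrollee"]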
Samples used for the univariate and bivariate analyses will often be pooled across selected
domains or states (or both) to further inform a given research question. One motivation for pooling
is that it will increase the sample size and thus improve the precision of our outcome estimates. Such
gains in precision will be particularly important when analyzing differences between unbalanced
categories, such as English-speaking and non-English-speaking households. A second reason for
pooling is to explore differences in outcomes between sample domains or state-level characteristics.
For example, by pooling the samples of new and established enrollees, we will be able to examine
outcomes that can be generalized to nearly the entire enrollee population in a state (at the time of
sampling), such as the proportion with a usual source of care, or the extent of program satisfaction.
Pooling across states will enable an examination of how differences in program types may be related
to such factors as enrollment barriers and experiences.
Multivariate analysis. While the bivariate analyses will provide an informative view of how the
reported experiences and behaviors of CHIP enrollees differ across demographic subgroups and
states, they cannot explain the source(s) of these differences. For example, the bivariate results may
suggest that established enrollees in non-English-speaking households are less likely to report having
a preventive care visit. Such a finding has potentially significant policy implications, but this type of
analysis cannot tell us whether the source of this difference is a language barrier or other factors,
such as education or income level, which could be highly correlated with both the primary language
spoken and preventive-care-seeking behaviors. Similarly, we may observe different levels of access
and use among CHIP enrollees across different types of CHIP programs, but these cross-state
differentials may be driven by cross-state differences in the underlying sociodemographic
characteristics of the CHIP populations.
To isolate the association between outcomes and key explanatory variables, we will use a
multivariate regression framework. Regression analysis will allow us to control for possible
confounding variables and to investigate the extent to which various factors explain the bivariate
results (by examining differences between the unadjusted and regression-adjusted comparisons). We
will use a standard set of regression models (linear regression and logistic regression) for the
multivariate analyses; the specific model will depend on the nature of the dependent variable
(whether it is a dichotomous or continuous outcome measure). The basic model specification will
include an extensive set of covariates that encompass characteristics of the child and family (health
status, health care needs, age, race/ethnicity, household income, family structure, and so forth) and
state indicators (for cross-state models). Like the univariate and bivariate analyses, our multivariate
analyses will first investigate samples defined at the state and domain level, followed by analyses of
samples pooled across states and/or domains. As with all descriptive analyses, we will apply
statistical weights, when appropriate, to draw inferences about the whole population, and will take
into account the complex sampling design when calculating standard errors on which our tests of
statistical significance will be based.
We will use the parameter estimates from the multivariate models to calculate adjusted means
on key outcomes for each subgroup of interest, which we will use to make inferences about
differences in these outcomes between groups of enrollees and across states. In addition, differences
between the unadjusted and regression-adjusted outcomes will be probed to determine how much
and which factors explain the unadjusted differences. For example, we may observe different levels
of access and use among CHIP enrollees in different states, but find that such cross-state differences
become smaller after controlling for sociodemographic characteristics of the CHIP populations in
each state.
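As an illustration of this step, the sketch below fits a weighted logistic model for a binary outcome and converts the parameter estimates into regression-adjusted subgroup means by averaging predicted probabilities; all column names are hypothetical, and the covariate list is abbreviated relative to the specification described above.

# A minimal sketch of regression-adjusted subgroup means for a binary
# outcome, with hypothetical column names and an abbreviated covariate set.
import statsmodels.api as sm
import statsmodels.formula.api as smf

def adjusted_means(df, outcome="preventive_visit", subgroup="non_english"):
    """Fit a weighted logit, then average predictions with the subgroup toggled."""
    formula = (f"{outcome} ~ {subgroup} + child_age + C(race_ethnicity) "
               "+ income_fpl + hh_size + parent_education + C(state)")
    fit = smf.glm(formula, data=df, family=sm.families.Binomial(),
                  freq_weights=df["weight"]).fit()
    # Predict each child's outcome as if in each subgroup level, then average;
    # the gap between the two adjusted means is the covariate-adjusted difference.
    p1 = fit.predict(df.assign(**{subgroup: 1})).mean()
    p0 = fit.predict(df.assign(**{subgroup: 0})).mean()
    return p0, p1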
It is important to note that multivariate models will be estimated for exploratory reasons only—
namely, to better understand the linkages between important policy outcomes and individual and
state-level characteristics. We will not be able to draw any causal inferences from this analysis. For
example, a finding that CHIP enrollees with less-educated parents receive fewer services would not
be interpreted to mean that low parental education reduces access to care. Instead, we would
conclude, more simply, that a significant association exists between education and access to care,
which may or may not be causal. Even with this more limited interpretation, such a finding would
remain important, since it pinpoints a group toward which improved program services could be targeted.
targeted. Caution must also be used when interpreting associations between state program types or
specific design features (such as use of cost sharing) and outcomes (such as enrollment or access to
services) due to the small number of states included in the analysis and our inability to control for all
potentially important policy or other state-level variables.
Benchmark comparison of outcomes. The descriptive analyses described above will generate
an extensive set of outcome measures related to enrollees’ experiences, behaviors, and satisfaction
during each stage of the CHIP life cycle. While these measures will provide rich insight into the
characteristics and effectiveness of the program, further insight can be gained by comparing survey
outcomes to historical, national, and state benchmark measures, as well as to evidence-based
pediatric care recommendations. Two sources of “internal” benchmarks are the 2002 survey of
CHIP enrollees and disenrollees in 10 states and the Medicaid survey conducted under that same
congressionally mandated evaluation. Comparing outcomes between the 2002 and current CHIP
survey (in overlapping survey states) will provide a sense of how the CHIP program has evolved
over the past decade, including changes in enrollees’ experiences with the application and enrollment
process, health care access and use while in the program, and reasons for disenrolling. In the three
study states where we will also have Medicaid survey data, we will contrast the enrollment and access
experiences of CHIP and Medicaid enrollees to gain insight into how different aspects of the
programs may affect key outcomes.
We will also use external benchmarks from various sources to better assess the extent to which
CHIP is meeting the health needs of children enrolled in the program. Such benchmarks may
include federal targets set by Healthy People 2020, types and levels of care recommended by the
American Academy of Pediatrics, HEDIS measures, and specific health care access and use
measures from state or national survey data.
E. Analysis of CHIP’s Impact on Children’s Access, Use, and Other Outcomes
Ultimately, the impact of CHIP on the lives of children and their families depends on the extent
to which the program improves access to care, receipt of services, and satisfaction with care, and
reduces the financial burden of care for the children who enroll. Improvements in these intermediate
access and use outcomes are critical pathways through which CHIP can affect the health and
well-being of children and their families. These factors may also influence whether parents want their
children to remain in the program and whether they are willing to pay premiums.
Prior studies have demonstrated that uninsured children experience more access problems and
receive fewer services relative to children with public health insurance coverage (Rosenbach 1989;
Monheit and Cunningham 1992; Stoddard, St. Peter, and Newacheck 1994; Currie and Thomas
1995; Newacheck et al. 1998; Davidoff et al. 2000; Moreno and Hoag 2001; Dubay and Kenney
2001). Past research has also shown the existence of access and use differentials for children in
different demographic and socioeconomic subgroups. Numerous studies have further found that
expansions of CHIP and investments in outreach and enrollment in Medicaid and CHIP have led to
improvements in children’s access to care and receipt of preventive services, and to reductions in unmet needs
(Damiano et al. 2002; Dick et al. 2004; LoSasso and Buchmueller 2004; Szilagyi et al. 2004, 2006;
Kempe et al. 2005; McBroome, Damiano, and Willard 2005; Shone et al. 2005; Hudson and Selden
2007; Gruber and Simon 2008; Dubay and Kenney 2009). In addition, gaps in access to health care
by race/ethnicity and income have narrowed for children due to gains in public coverage for
children (Shone et al. 2005; Dick et al. 2004). The findings from these papers suggest that
differences in service use found between the uninsured and those in Medicaid and CHIP are not all
driven by unmeasured differences in characteristics of the two groups, but instead reflect greater
access to care afforded to children with health insurance coverage.
Our analysis will extend this previous research by examining the effects of CHIP across a
combination of both traditional and emerging outcome domains associated with children’s health
care access, use, and unmet needs, and families’ well-being. For example, despite research indicating
that anticipatory guidance from medical providers has positive effects on children’s lives, many
parents do not routinely receive such guidance from their children’s physicians (Perry and Kenney
2007). To address this, we will analyze the extent to which children receive certain types of care and
anticipatory guidance that have been shown to benefit children. For example, we plan to analyze the
receipt of flu shots (Committee on Infectious Diseases 2010), vision screenings (USPSTF 2004), and
hearing screenings (Harlor et al. 2009), as well as anticipatory guidance, such as counseling on diet
and exercise (Chen et al. 2007; USPSTF 2010).
We will conduct analyses of CHIP’s impacts for specific subgroups of children, such as children
with special health care needs, asthma, or mental health problems, to assess how CHIP enrollment is
affecting children who have the greatest health needs. We will also examine how impacts vary across
the study states. Notably, states have made several design decisions in their CHIP programs that
could influence access, use, satisfaction, and financial burden. CHIP program choices regarding the
benefit package, service delivery and provider payment arrangements, cost sharing, and degree of
care coordination vary within and across states (Hill et al. 2005; Cohen Ross et al. 2009; Decker
2010). Together, these design choices could affect the access and use experiences of children under
CHIP in different states and programs. In addition, access and use may be
influenced by the supply of health care services available to low-income children and by the
characteristics of children enrolling in CHIP and their families.
1. Research Questions
Among the research questions addressed by the impact analysis will be the following:20
• Impacts on children’s health care access, use, and unmet needs. How much does
CHIP increase the extent to which enrolled children have a usual source of care or a
patient-centered medical home? Does CHIP increase the receipt of well-child and dental
care checkups? Does CHIP increase the extent to which children receive flu shots or
vision and hearing screenings, or their parents receive anticipatory guidance on critical
issues? How much does CHIP reduce unmet needs among children who enroll?
• Impacts on child and family well-being. Does CHIP increase parents’ confidence
that their enrolled children will obtain needed care? To what extent does CHIP increase
their satisfaction with health care provided to enrolled children? To what extent does it
reduce the financial burden of health care for families, particularly those who have
children with special health care needs?
• Impacts among and between key subgroups. To what extent do estimated CHIP
impacts appear to vary across states and subgroups? Are impacts larger among children
who enrolled in CHIP for a specific medical reason, such as an illness or injury,
compared to others? What are the impacts among children with special health care
needs?

20 In addition to the impact analysis, all of the outcomes explored through these questions will be examined
through the descriptive analysis described in Section D.

2. Data and Measures
The impact analysis will draw on data from the 10-state survey of CHIP enrollees and
disenrollees, focusing on a subset of the measures examined in the extensive descriptive analysis
described above in Section D. Specifically, the analysis will focus on outcome measures in
the following areas.
1. Access to care, including ability to find a doctor and get timely appointments
2. Service use
3. Usual source of care (medical and dental)
4. Content of care, including provider communication and medical home concepts
5. Unmet needs
6. Parent perceptions about ability to meet the child’s health care needs and financial
burdens associated with care
To minimize the risks of confounding influences, and to form subgroups, the analysis will also
use the survey to construct a series of explanatory variables—many of which will likewise be the
focus of the descriptive analysis. Among these are (1) the child’s age, sex, and race/ethnicity,
interacted with the parent’s interview language; (2) the health status of the child (that is, general
health status, presence of chronic conditions, or special health care needs); (3) household income
(defined as a percentage of the federal poverty level) and household size (the number of children in
the household); (4) the educational attainment, work status, health status, and insurance status of the
parent; (5) the parent’s attitudes regarding the efficacy of medical care (defined as the extent to
which the parent believes that he or she can overcome most illness without help from a doctor and
that home remedies are often better than prescribed drugs); and (6) the child’s county of residence.
3. Analytic Approach
Ideally, to estimate how CHIP enrollment affects access, service use, unmet needs, satisfaction,
and financial burdens, children would be randomly assigned to the CHIP program (the treatment
group), and their experiences would be compared to those of a control group. With random
assignment, differences observed between the treatment and the control group could be attributed
to CHIP. Such a strategy is not an option here; instead, we will define a comparison group to
provide a counterfactual for what would have happened to CHIP enrollees in the absence of CHIP.
We also will use multivariate methods to minimize systematic differences between the comparison
and the treatment group.
We considered several different comparison groups for the purpose of defining a counterfactual
for CHIP, including using the pre-CHIP experiences of established enrollees, a longitudinal design,
or a comparison group of near eligibles. The most advantageous design for this analysis is the use of
two different cross-sections of new and established enrollees, as demonstrated in Kenny (2007).
This quasi-experimental approach uses a separate sample pretest and posttest design (Campbell and
Stanley 1963; Singleton, Straits, and Straits 1993). The experience of established enrollees (that is,
children who have been enrolled in the program for at least five months)—the treatment group—
will be compared with the pre-CHIP experiences of newly enrolling children—the comparison
group. Thus, the pre-CHIP experiences of the recent enrollee sample serve as a counterfactual for
the CHIP experiences of the established enrollee sample. In an effort to minimize the differences
between the comparison and the treatment group, the analysis will control for other potentially
confounding factors related to the characteristics of the children and their parents. In addition,
numerous alternative model specifications will be estimated to assess the robustness of the impact estimates.
Regression model. The model aims to estimate the effect of CHIP enrollment on access to
care, unmet needs, service use, satisfaction, and financial burdens. We accomplish this by comparing
measures obtained from the treatment and comparison groups described above, using multivariate
techniques to statistically control for differences in the characteristics of the samples. The model
specification may be written as follows:
Y_js = β0 + β1 T_js + β2 Age_js + β3 Sex_js + β4 (Race/Ethnicity × Interview Language)_js + β5 Health Status_js + β6 Household Income_js + β7 Household Size_js + β8 Parental Educational Attainment_js + β9 Parental Work Status_js + β10 Parental Attitude to Medical Care_js + β11 County_js + ε_js
where Y is a measure of access to care or use for child j in state s; T is a (0, 1) indicator variable for
treatment or comparison group; Age represents the child’s age; Sex represents the child’s sex;
Race/Ethnicity*Interview Language represents the child’s race/ethnicity interacted with the
interview language; Health Status represents measures of the health status of the child (that is,
general health status and presence of elevated health care needs); Household Income is defined as a
percentage of the federal poverty level; Household Size is defined as the number of children in the
household; Parental Educational Attainment and Parental Work Status reflect those characteristics of
the parent; Parental Attitude To Medical Care represents the parent’s attitudes regarding the efficacy
of medical care; County is the child’s county of residence (county fixed effects capture unobserved
factors that could be related to characteristics of CHIP enrollees and their parents); and ε is residual
error.
In addition to the state-specific models, separate models will also be estimated for a number of
key subgroups to assess the extent to which the findings hold up for different types of children.
Separate impact estimates will be derived for children in different subgroups defined by the child’s
race/ethnicity, age, health status, and the parent’s educational attainment. In addition, interaction
terms will be added to test whether CHIP impacts appear to vary with the characteristics of the
child and his/her family. Models will be estimated on recent and established enrollees who had been
uninsured just before enrolling in CHIP and will include all the demographic and socioeconomic
control variables from the core model, a dummy variable for whether the child is a recent or
established enrollee, and a set of terms that interact that dummy variable with the child’s health
status, age, and race/ethnicity/primary language, the parent’s educational attainment, and the child’s
state of residence.
Robustness testing. Our confidence in the summary impact estimates we derive will depend
on several different factors, including how sensitive the regression estimates are to alternative
specifications, the confidence we have in the counterfactual for established enrollees, and our
success at minimizing differences between comparison and treatment groups. In order to assess
whether the findings are robust with respect to alternative specifications, and to address potential
concerns about the validity of the impact estimates, we will test a broad range of alternative model
specifications.
The most fundamental concern is that the pre-CHIP experiences of the recent enrollees may
not serve as a reliable counterfactual for the experiences of established enrollees because of
differences between the two samples. We will address this issue by conducting a number of
sensitivity analyses to assess the robustness of the findings. To address possible unobserved
differences between recent and established enrollees, models can be estimated with only recent
enrollees who stay enrolled in the program for at least five months. We will also estimate the model
with the subset of established enrollees who were enrolled in CHIP closer to the time period during
which children in the recent enrollee sample were entering CHIP. To make the recent and
established enrollee samples as homogeneous as possible, we can also use the information regarding
the presence of insurance coverage just before enrolling in CHIP for both established and recent
enrollees, estimating one set of impacts for recent and established enrollees who were uninsured just
before enrolling in CHIP, and another set for recent and established enrollees who were insured just
before enrolling.
An additional concern is that the access and use experiences of children just prior to enrolling
may not reflect what these children typically face. They may have had atypically high service needs,
which, in turn, triggered their enrollment into CHIP. To address this possibility, children who had
an emergency room visit (or a hospital stay) or unmet health needs before enrollment can be excluded
from the analysis to assess the extent to which the impact estimates for the other outcomes are
sensitive to these exclusions. In addition, as in Howell and Trenholm (2007), we will estimate the
models after limiting the study sample to children who did not enroll for a medical reason, to
attempt to exclude children who enrolled in CHIP due to a temporary health problem that would
have improved regardless of enrollment.
The analysis will also address the concern that the experiences of established enrollees may
overstate the access to care that children typically have under CHIP. Other analysis (Kenney et al.
2005) suggests that disenrollees might have had slightly worse access and use experiences with CHIP
coverage relative to the established enrollees; therefore, an additional set of CHIP impacts will be
estimated using disenrollees as the treatment group in place of established CHIP enrollees. In
addition, alternative models will be estimated replacing the county fixed effects with dummy
variables for the child’s state of residence and for whether the child lives in a county that is in a
metropolitan statistical area (MSA).
Finally, there is a risk that some of our control variables may be endogenous to the access and
use outcomes being studied. This is a particular concern for the health status measures because
receipt of care may improve a child’s health status. While including health status controls may
introduce bias, excluding health status altogether may introduce omitted variable bias. We will
address this issue by controlling for the presence of chronic conditions that had an onset prior to
the period under study, which should be exogenous to the services received in the recent past. We
also will examine how sensitive the findings are to the inclusion of additional health status controls,
such as perceived health status or restricted activity days, and when findings are estimated for
subgroups of children who are more homogeneous with respect to health status.
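One way to organize these sensitivity analyses is a simple harness that re-estimates the model over alternative samples and specifications and collects the treatment coefficient from each run. The sketch below is a hypothetical illustration; the sample filters and column names are assumptions, not the evaluation's code.

```python
# Hypothetical robustness harness: re-estimate the impact model under
# alternative samples/specifications and collect the estimated impact.
import pandas as pd
import statsmodels.formula.api as smf

BASE = ("outcome ~ treat + age + sex + race_eth * interview_lang"
        " + hh_income_fpl + n_children + parent_educ + parent_work"
        " + parent_attitude + C(county)")

specs = {
    # core model with general health status controls
    "core": (BASE + " + health_status", lambda d: d),
    # restrict to children uninsured just before enrolling in CHIP
    "uninsured_pre_chip": (BASE + " + health_status",
                           lambda d: d[d.uninsured_pre == 1]),
    # drop children with pre-enrollment ER use or unmet needs
    "no_er_or_unmet_pre": (BASE + " + health_status",
                           lambda d: d[(d.er_pre == 0) & (d.unmet_pre == 0)]),
    # replace health status with pre-onset chronic conditions (exogeneity check)
    "chronic_pre_onset": (BASE + " + chronic_pre_onset", lambda d: d),
}

rows = []
for name, (formula, subset) in specs.items():
    res = smf.ols(formula, data=subset(df)).fit(cov_type="HC1")
    rows.append({"spec": name, "impact": res.params["treat"],
                 "se": res.bse["treat"]})
print(pd.DataFrame(rows))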
F. Analysis of Relationship Between CHIP, Medicaid, and Private Coverage
CHIP was created to offer health care coverage to eligible low-income children as long as “the
plan does not substitute for coverage under group health plans” (CHIP statute) and to serve as a
bridge between Medicaid and employer coverage. Between 2000 and 2009, the percentage of low-income children (<200 percent FPL) with Medicaid/CHIP coverage increased, while the percentage who were uninsured decreased by five percentage points (Urban Institute estimates).21 At the same time, the percentage of uninsured low-income nonelderly adults increased by 6.3 percentage points (Urban Institute estimates).

21 The health insurance coverage numbers are based on data from the 2001 and 2010 Annual Social and Economic Supplement (ASEC) to the Current Population Survey (CPS) and are adjusted for changes in the survey questions over time.
Concerns that CHIP could displace or “crowd out” employer coverage led Congress to require
that states implement such policies as waiting periods to minimize this type of substitution. The
Congressional Budget Office’s review of the literature (2007) found that estimates of the proportion
of children enrolling in CHIP who would have been covered by employer-sponsored insurance in
the absence of CHIP ranged between 25 percent and 50 percent,22 consistent with projections made
when CHIP was authorized in 1997. Concerns about crowd-out remain (GAO 2009), especially as
CHIP eligibility expands to higher-income children, a group with a greater risk of substituting
private coverage.
Understanding coverage transitions and dynamics between CHIP and private coverage is even
more critical today, given (1) the more recent CHIP income eligibility expansions to cover children
with family incomes above 200 percent FPL; (2) the deterioration of employer coverage, which has
become less generous and more costly (Claxton et al. 2010; Kogan 2010); (3) a decrease in the
availability of dependent coverage through ESI (Vistnes et al. 2010); and (4) ACA provisions
regarding CHIP, employer coverage, and pediatric plans offered through the state health insurance
exchanges to be established. While the substitution of CHIP for employer coverage may reduce the
target efficiency of the former (Blumberg et al. 2000) because some of the dollars are going to those
already insured rather than to the uninsured, low-income families who choose to substitute public
for private coverage may be improving the quality and affordability of health care for their children
(e.g., Dubay and Kenney 2001; Davidoff 2004). In this context, it is crucial to understand the cost
and generosity of the different types of coverage available to low-income children to assess the
implications of moving from one type to another.
Using data from the survey of new and established enrollees, we will examine coverage
transitions and assess the extent to which CHIP appears to be displacing employer coverage. Using
the two alternative measures for defining crowd-out employed in the prior CHIP evaluation,23 we
will expand that evaluation’s analyses and estimates by drilling down by income group and taking
into account the availability and affordability/generosity of dependent coverage through ESI.
We will also explore how the costs and benefit packages of employer plans compare with CHIP
in light of ACA’s provisions regarding interactions between employer coverage and CHIP. In
addition, we will examine how the transitions between public coverage, private coverage, and
uninsurance, as well as crowd-out estimates, vary across states. To the extent possible, we will also
explore the association between state-specific CHIP policy choices and these outcomes. However,
our ability to draw conclusions about the effect of state programs and anti-crowd-out measures will be limited significantly by the small number of states included in the study, as well as the risk of potentially confounding factors that cannot fully be factored into the analysis, such as state unemployment rate differentials arising from the recent recession.

22 These figures include estimates from the two general approaches to calculating substitution. The first approach relies on econometric methods applied to national datasets to derive substitution estimates by measuring differences in insurance trends between CHIP-eligible children and a comparison group. The second approach uses surveys of CHIP enrollees to estimate crowd-out, examining the extent to which children had employer coverage before they enrolled in CHIP or to which CHIP enrollees could obtain employer coverage through their parents.

23 These measures are discussed below and are referred to as "Substitution at the Time of Enrollment" and "Potential for Substitution After Enrollment." Also, see Sommers et al. (2005) for the previous evaluation's findings.
To the extent possible, we will also contrast our findings with those obtained in the prior
evaluation. However, our ability to make relevant comparisons over time will be limited by factors
such as differences in the underlying economic situation, given the recent nearly unprecedented
recession; and differences in the profile of new CHIP enrollees (due to program maturity, potential
secular changes, and/or the influence of the recent recession).
1. Research Questions

Research questions center on the coverage experience of children prior to enrolling in CHIP
and the extent to which families retained access to private coverage despite this enrollment:
• Insurance Coverage Prior to CHIP. What was the distribution of children's coverage prior to enrolling in CHIP? For those who had employer-sponsored coverage, why did
they drop or lose it? What fraction retains ESI coverage once enrolled in CHIP? For
those with no insurance, how long had they been uninsured? To what extent was this
due to a need to satisfy a waiting period prior to enrolling in CHIP?
• Extent of Crowd-Out from CHIP. What share of new CHIP enrollment can be
attributed to crowd-out, and what can be attributed instead to reductions in uninsurance?
To what extent is crowd-out evident among established enrollees?
• Variation with State Policy Choices. Does the share of new CHIP enrollees who had
prior private coverage vary by whether states have waiting periods and by the length of
the waiting period? Does it vary by whether the state imposes premiums or by program
type (separate, Medicaid expansion, combination approach)?
2. Data, Measures, Analytic Approach

In previous literature, crowd-out has been estimated for the CHIP program using two main
approaches. The first approach relies on econometric methods applied to national datasets to derive
substitution estimates by measuring differences in insurance trends between CHIP-eligible children
and a comparison group (Dubay and Kenney 2009; Gruber and Simon 2007; Hudson et al. 2005). In
these studies, impacts are derived using either variation in eligibility thresholds over time, or
comparison groups, to provide a counterfactual to measure what coverage the CHIP-eligible
children might have had if CHIP were not available. These types of studies have produced a range
of estimates, but they suggest that approximately one-quarter to one-half of CHIP-eligible children
(with family incomes around 200 percent FPL) would have had employer coverage if CHIP were
not available. The second approach to the estimation of crowd-out relies either on estimates derived
from state surveys of CHIP enrollees24 (Shone et al. 2008; Hughes et al. 2002; Shenkman et al.
2002; Sommers et al. 2007) or national data (Kenney and Cook 2007) to examine the extent to
which children had employer coverage before they enrolled in CHIP or to which CHIP enrollees
could obtain employer coverage through their parents. These types of studies also have produced a

range of estimates and suggest that approximately 0.7 to 35 percent of children enrolled in CHIP potentially could be substituting CHIP for employer coverage.25

24 This is the general approach we will employ in our crowd-out estimation.
As in the prior evaluation, we will use methods that fall under this second approach. Using data
from the survey of recent enrollees, we will examine the coverage patterns of new CHIP enrollees
during the six months prior to enrollment (see Table VII.8). We will also explore the reasons behind
the different transitions for both the entire sample and subgroups of special interest, such as higher-income children. We will also conduct analyses for individual states and for groups of states based, for example, on program type or the presence of particular policies (e.g., waiting periods).
Table VII.8. Patterns of Insurance Coverage in 6 Months Prior to CHIP Enrollment

Status in the 6 Months Before CHIP Coverage            Share of New Enrollees
Private coverage in prior 6 monthsa                    %
  - ESI                                                %
  - Nongroup private                                   %
Medicaid coverage in prior 6 monthsb                   %
Uninsured for 6 months                                 %
Other coverage in prior 6 months                       %

Source: Survey of New CHIP Enrollees.

a Includes children who had private coverage at some point in the 6 months prior to CHIP enrollment.

b Includes children who had Medicaid coverage at some point in the 6 months prior to CHIP enrollment.

Substitution at the Time of Enrollment. Our first measure for estimating the potential substitution of CHIP for ESI will identify the share of entrants into CHIP who had private coverage at some point in the six months before enrolling in CHIP and for whom employer coverage might still be available.26 To estimate the extent of substitution among this share of new enrollees, we need to distinguish between those with prior ESI coverage whose parents dropped it voluntarily and those whose parents dropped it involuntarily. We will do this by examining the reasons parents report that a child's ESI coverage ended before the child enrolled in CHIP. Broadly, these can be classified under the categories shown in Table VII.9.

25 A third approach estimates crowd-out through a state-specific review of CHIP applicant data, producing estimates of the percentage of applicants declined due to existing or prior private health insurance (Limpa-Amara et al. 2007). This approach produces estimates which suggest that few children enrolled in CHIP (less than 15 percent) drop private coverage for CHIP.

26 The survey of CHIP enrollees may be supplemented with state administrative data reporting Medicaid and CHIP enrollment histories to clarify coverage prior to enrollment in CHIP in those cases where parents' recall of prior coverage was problematic because their children had been enrolled in CHIP for more than one year.
Table VII.9. Broad Categories for Why Coverage Ended Among Recent Enrollees with Prior ESI

Reason for Loss of ESI                                      Classification
Employment or benefit loss/change                           Involuntary
Family structure change/loss of parent                      Involuntary
Preference for CHIP/dislike of other insurance              Voluntary
Affordability/cost-sharing concerns                         Ambiguous
Miscellaneous (e.g., moved/relocated, failed to reapply)    Not enough information to determine

Reasons classified as involuntary indicate that the child would not have been able to keep his or
her private coverage and thus do not constitute crowd-out. Reasons of affordability are considered
ambiguous because it is hard to differentiate between those families that simply considered CHIP
cheaper than their ESI coverage and voluntarily chose to enroll their child in CHIP (voluntary
substitution) and those that would have dropped ESI coverage for their child (who would have
become uninsured) even if CHIP were not available (involuntary substitution). Finally, there are cases for which there is not enough information to classify the loss of coverage as either voluntary or involuntary substitution.
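Purely for illustration, the classification and tabulation could be implemented along the following lines; the reason codes, column names, and weight variable are all assumptions, not items from the survey instrument.

```python
# Sketch: map reported reasons for losing ESI to substitution categories and
# tabulate weighted shares among new enrollees with prior ESI.
import pandas as pd  # df is assumed to be a pandas DataFrame of new enrollees

CLASSIFICATION = {
    "employment_or_benefit_change": "involuntary",
    "family_structure_change": "involuntary",
    "preferred_chip": "voluntary",
    "affordability": "ambiguous",
    "other": "undetermined",
}

prior_esi = df[df.had_esi_pre == 1].copy()
prior_esi["category"] = prior_esi.reason_esi_ended.map(CLASSIFICATION)

# Weighted percentage of prior-ESI enrollees in each substitution category.
shares = (prior_esi.groupby("category").weight.sum()
          / prior_esi.weight.sum() * 100)
print(shares.round(1))
```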
We will expand the analysis used in the prior evaluation27 in a number of ways:
1. To address CHIP income eligibility expansions during the last decade, we will drill
down by income category (sample size permitting), paying particular attention to
higher-income group(s).
2. As mentioned above, the deterioration of affordability and quality of care in private
coverage has become a critical issue in the last several years. Basic information on the
cost and quality of the coverage being dropped is crucial to discussing the focus of policy
directed at crowding-out. We will examine this issue by adopting an affordability-quality
“typology” similar to Kogan et al. (2010). 28 We will attempt to identify cases in which
prior private coverage was inadequate/unaffordable (i.e., underinsurance) and revise
substitution estimates taking this new dimension into consideration. Sample size
permitting, we also plan to examine children with special health care needs (SHCN),
since these children are more likely to be underinsured (Kogan et al. 2010).
3. Another important trend in ESI is the decreasing availability of dependent coverage. Not
considering this possibility might cause us to overestimate the substitution of CHIP for
ESI (Howell et al. 2008; Shone et al. 2008). Therefore, we plan to obtain information on
the availability of dependent coverage in the current enrollee survey to examine the
sensitivity of our substitution estimates to this emerging trend.
Potential for Substitution After Enrollment. Using the survey of established enrollees, our second measure of substitution will estimate the potential for CHIP to substitute for ESI over the longer term: the share of established enrollees who could be covered by employer-based coverage through their parents if CHIP were not available. Following the analysis conducted in the previous evaluation,29 we will present alternative estimates based on several scenarios of parents' employer coverage and their children's health needs (see Table VII.10). To the extent our data allow, we will also expand this analysis along the same lines as our substitution measure at enrollment time, as discussed, by calculating substitution estimates (1) by income group, (2) taking underinsurance into consideration, and (3) by whether dependent coverage is available through parents' ESI.

27 See Sommers et al. (2005, 2007) for findings.

28 Kogan uses the following questions to identify underinsurance: (i) Does the child's health insurance offer benefits or cover services that meet his/her needs? (ii) Does the child's health insurance allow him/her to see the health care providers he/she needs? Not including health insurance premiums or costs covered by insurance, (iii) does the parent pay any money for the child's health care? If yes, (iv) how often are these costs reasonable? If a parent answered "sometimes" or "never" to either (i), (ii), or (iv), the child was considered to be underinsured.
Table VII.10. Potential Substitution for Established CHIP Enrollees

Aspects of Parents' Employer Coverage and Children's Needs                  Potential Substitution Estimates (%)
Employer Pays Some or All of the Premium + Child Does Not
Have Special Health Care Needs                                              %
Employer Pays Some or All of the Premium + Child Does Not
Have Special Health Care Needs or Greater Health Care Needs                 %
Employer Pays Some or All of the Premium                                    %

To the extent possible, and with the caveats mentioned above, we then will contrast our
findings on the two measures of substitution with those obtained in the previous evaluation.
Limitations. Several analytic challenges will limit our ability to measure crowd-out accurately.
One, and perhaps the most important, is the impossibility of producing a counterfactual because we
cannot actually observe what parents would have done if CHIP were not available. For instance,
would a parent have dropped ESI coverage after an increase in premiums or accepted a job not
offering dependent coverage had CHIP not existed? Another issue relates to instrumentation. As
mentioned above, underinsurance has become a relevant issue that may have important policy
implications. However, there is no standard definition of "underinsurance"30 and obtaining
information on the cost and quality of coverage dropped by new enrollees is challenging. Finally,
there are issues regarding measurement error (e.g., ability of respondents to characterize accurately
the reasons for changing insurance coverage) and omitted variable bias (e.g., trends and/or shocks
related to changes in the underlying economic situation or health care costs) that cannot be
controlled for properly in this type of analysis.
Careful survey instrumentation can address these limitations, at least to some degree. For
example, by augmenting questions about the reasons why parents do not take up ESI for their
children with reliable questions on the qualities of this coverage, we can further explore the question
of the likely take-up of the coverage in CHIP’s absence (the counterfactual). In addition, we can
refine the questions related to cost of care in the original CHIP survey, drawing on our experience in
analyzing these questions in the past as well as on more recent instrumentation. Through this
approach, we expect to better examine and understand the issue of underinsurance among income-eligible children in the absence of CHIP.
29 See Sommers et al. (2005) for how children with elevated needs were defined in the prior evaluation. In this evaluation, we propose to use the definition of SHCN developed by the Maternal and Child Health Bureau. In addition, we will consider the subset of children who meet the SHCN definition and who also have greater care needs (for instance, those classified as being in fair or poor health) to focus on children with the greatest health care needs.

30 See Kogan et al. (2010), Oswald et al. (2005), and Ward (2006).
G. Analysis of Retention and Reenrollment
A central question to be addressed by this evaluation is the extent to which CHIP is providing
continuous coverage for eligible children, and whether and when children who lose CHIP coverage
obtain some other type of health insurance. While CHIP provides coverage to more than seven
million children during the year, a large percentage experience uninsurance spells or disenroll entirely
(Sommers 2005, 2007; Wooldridge et al. 2005; Haley and Kenney 2001). Children uninsured for even
short periods of time have reduced access to care and report more unmet health care needs than
those with continuous coverage (Olson et al. 2005; Aiken et al. 2004). In 2006, more than 40 percent
of uninsured children eligible for public coverage were enrolled in CHIP or Medicaid in the previous
year, suggesting that gaps in coverage could be reduced in part through increases in retention
(Sommers 2007). To improve the retention of eligible children in CHIP—and, ultimately, reduce the
incidence and duration of uninsurance spells among low-income children—states need detailed
information on CHIP enrollment patterns, transitions between CHIP and other types of health
insurance, and how coverage dynamics vary across states and subgroups of children eligible for
public insurance.
In an effort to provide this information to states, this analysis will examine CHIP enrollment
and exit (or disenrollment) spells, including what coverage children obtain during their time out of
the program. It will expand on the analysis of state administrative data by linking enrollment data
from the 10 study states with data from the survey of enrollees and disenrollees. This merged dataset
will provide a valuable additional means of examining coverage dynamics among CHIP enrollees,
particularly during gaps in public insurance coverage, and for key subgroups. It will update and build
on previous research examining coverage duration and transitions among CHIP enrollees, most
notably the work of Trenholm et al. (2009), Wooldridge et al. (2005), and Moreno and Black (2005).
In addition to providing analysis on states not included in this previous work, we will examine an
extended set of enrollment and coverage transition outcomes, including reenrollment and program
“cycling” (when children lose and regain coverage over a short period of time). We will also
introduce a competing hazards framework that explicitly accounts for different transitions out of
CHIP enrollment and disenrollment spells, including exits into Medicaid, private insurance, or an
uninsured state. We will also examine enrollment and disenrollment spells for public health
insurance (CHIP and Medicaid combined).
1. Research Questions

The primary purpose of this analysis is to broaden our knowledge about coverage dynamics
among CHIP enrollees, and whether and how they are affected by individual-, family-, and state-level factors. The key research questions that the analysis will address fall into three categories:
• Duration of CHIP Enrollment. How long do families stay in CHIP? What proportion
of enrollees has single short-term, medium-term, and long-term enrollment spells and
multiple enrollment spells? What percentage of children remains enrolled at selected time
points since enrollment? Does the probability of disenrollment increase at certain time
points (for example, at first renewal) or vary by time in the program? Does length of
enrollment differ across key subgroups and states, and what factors explain this
variation?
• Coverage After Leaving CHIP. What type of coverage, if any, do children obtain after
leaving the program? How long are enrollees continuously covered under public health
insurance coverage (Medicaid and CHIP) after program entry? How long do children
who lose coverage after leaving CHIP remain uninsured? What proportion of children
who exit into an uninsured state may still be eligible for CHIP? How do enrollee- or
state-level factors affect coverage transitions?
• Reenrollment and Cycling. What proportion of children who disenroll from CHIP
returns to the program after a period of disenrollment? What percentage of children reenrolls at selected time points since their exit from the program? What is the median
time out of CHIP between enrollment spells? Does the rate of reenrollment decrease
over time out of the program? What factors are associated with reenrollment rates and
the duration of exit spells?
2. Data and Construction of Analytic File

One benefit of using administrative program records is the sheer size of the data set, which
includes all children who have been enrolled in public insurance (CHIP or Medicaid) and allows for
precise state-level estimates of enrollment and exit durations. However, a limitation of using
administrative records only is the lack of information on the coverage status of children after they
leave public insurance. In addition, administrative data typically contain very limited information on
the characteristics of program participants other than eligibility-related measures such as age. To
overcome these limitations, we will link the survey data to individual-level data from the state
enrollment files on the survey sample of CHIP enrollees and disenrollees in the 10 study states. In
addition to providing rich data on the health, demographic, and socioeconomic characteristics of
CHIP enrollees, the survey data will provide key information on the insurance coverage of children
upon entry into and exit from CHIP during the months preceding and following the survey period.
This additional information will enable us to create complete enrollment histories for the survey
sample over a multiyear period that starts before the sampling month and ends in the last month for which administrative data are available; spells still open at that point are right-censored in that month. Because the
availability of administrative data will limit how far back we can observe children’s start dates, we
will exclude left-censored spells from the analysis sample (enrollment spells that begin before the
first month for which we have administrative data).
The first step in implementing this analysis will be to construct an analytical file. Using the
individual-level state monthly enrollment files, we will create one enrollment history record for each
individual included in the survey sample for the entire study period (which will be determined by the
period for which we obtain enrollment files from each state). The format and content of the
enrollment files will vary across the study states but will contain, at a minimum, information on the
month-by-month eligibility status of each child, including whether the child was enrolled in M-CHIP, S-CHIP, or the Medicaid program, and the eligibility group within each of these coverage
types. We will use this information to construct enrollment and exit spells and measures for the
coverage type during these spells. An enrollment spell will be defined as beginning on either the first
day of the month when enrollment is first recorded or the first day of the month following a period
of disenrollment. An enrollment spell will end on the last day of the month immediately before the
next disenrollment period. We will take the CHIP or Medicaid eligibility category for an enrollment
spell from the first month of the spell. If an enrollment spell has not ended by the end of the study
period, we will define that spell as censored. Similarly, an exit spell will begin on the first day of the
month immediately following a period of enrollment and end on the last day of the month
immediately before the next enrollment period; spells that do not end before the last month of the
study period will be defined as censored. Data from the individual-level enrollment and exit spells—
namely, the length of the spells and whether or not the spell was censored—will be the basis for the
survival analysis described in this section.31 Each child in the survey sample will contribute one or
more spells to the analysis sample.
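As a sketch of the spell-construction rules just described, the function below converts one child's sequence of monthly enrollment indicators into spells, dropping left-censored spells and flagging right-censored ones. It is a simplified illustration that ignores coverage type; the data layout is an assumption.

```python
# Illustrative sketch: build enrollment spells from a per-child sequence of
# monthly enrollment indicators (1 = enrolled, 0 = not). A spell already
# underway in the first observed month is left-censored and dropped; a spell
# still open in the last month is flagged as right-censored.
from typing import List, Tuple

def enrollment_spells(months: List[int]) -> List[Tuple[int, bool]]:
    """Return (length_in_months, right_censored) for each usable spell."""
    spells = []
    length = 0
    for i, enrolled in enumerate(months):
        if enrolled:
            length += 1
        elif length:
            if length != i:                      # i == length: began in month 1 (left-censored)
                spells.append((length, False))   # completed, uncensored spell
            length = 0
    if length and length != len(months):         # open at end of data window
        spells.append((length, True))            # right-censored spell
    return spells

print(enrollment_spells([0, 1, 1, 1, 0, 0, 1, 1]))  # [(3, False), (2, True)]
```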
For the months during the study period when a child was not enrolled in either CHIP or
Medicaid, we will rely primarily on self-reported answers on insurance coverage from the survey to
determine whether the child was privately insured or uninsured during these months. For cases in
which we are unable to determine insurance coverage during exit spells based on the survey data, we
will consider imputing a child’s insurance status based on children with similar enrollment patterns,
health status, and demographics living in the same state.
We will also use the survey data and, to a limited extent, administrative data, to construct a
number of person-level variables to explore variations in coverage dynamics across key subgroups.
These variables will include the subgroup measures listed in Table VII.7, such as children’s health
status and health care needs, family demographic characteristics, prior insurance coverage, and state
of residence.
3. Analytic Approach

Our general methodological approach will consist of two parts. First, we will analyze the
characteristics of CHIP enrollment and exit spells observed for the period covered by the state
administrative data, using life table analysis. Second, we will conduct multivariate survival analyses to
examine the factors that influence coverage dynamics. All analyses will use individuals as the unit of
analysis.
Life Table Analysis. We will use “life table analysis,” a descriptive approach for analyzing data
on duration of participation in a given status (Elandt-Johnson and Johnson 1980), to examine spells
of coverage and uninsurance among the survey sample. Using a life table similar to the example
shown in Table VII.11, we will generate descriptive information for the full sample and
specific subgroups. For each month that a particular spell of enrollment or disenrollment lasts, the
life table will contain six pieces of information: (1) the number of spells in the sample lasting at least
that long; (2) the number of spells lasting at least that long that we continue to observe in the
following month (are not right-censored), (3) the number of uncensored spells ending in that month,
(4) the hazard rate, (5) the survival rate, and (6) the cumulative exit rate. The hazard rate represents
the probability that a spell will end in a particular month, given that it has lasted that many months.
The survival rate, which can be derived from the hazard rate, gives the unconditional probability that
a spell will last at least a given number of months. Finally, the cumulative exit rate is the
unconditional probability that a spell ends within a given number of months (the survival and
cumulative exit rates total 100 percent). The month in which the cumulative exit rate (or survival
rate) equals 50 percent provides the median spell duration.

31 To inform and complement the survival analysis described in this section, we will also use information on the enrollment and exit spells of survey sample members (and linked survey data on their characteristics and coverage transitions) to conduct a standard descriptive analysis that examines the prevalence of different types of coverage transitions, the prevalence and nature of program churning, and the factors associated with program churning.
Table VII.11. Life Table of CHIP Enrollment Spells

Month    Number of Spells at Beginning of Month (a)    Number In Sample in Following Month (b)    Number Exiting During Following Month (c)    Survival Rate (d)    Hazard Rate (e)    Cumulative Exit Rate (f)
1
2
3
4
5
...

Note: Column (a) represents the number of enrollment spells that have lasted at least the indicated number of months, regardless of when the spell first started. Column (b) indicates the number of the spells from (a) that we continue to observe in the following month (that is, spells that are not right-censored). Column (c) is the number of spells from (b) that exit CHIP in the following month. The hazard rate (e) is 100*(c)/(b). The cumulative exit rate (f) is the sum of the previous row's cumulative exit rate and the product of the current row's hazard rate and previous row's survival rate, divided by 100. The survival rate (d) is 100 - (f).

The survival and hazard rates represent two important pieces of information for policymakers.
The survival rate provides policymakers with information about the fraction of individuals starting
an enrollment spell that remains covered for a given number of months—that is, the percentage of
individuals that remains enrolled at specific intervals since entry into that status. It provides answers
to such questions as: “Among children who enroll in CHIP, how many will still be enrolled after
24 months?” Similarly, when applied to exit spells, the survival rate can address the question,
“Among children who disenroll from CHIP, how many experience a coverage gap of at least six
months?” An examination of changes in the hazard rate by time in the program can be used to
determine whether there are certain months when the probability of disenrollment increases (for
example, at the time of first renewal), and whether that likelihood of disenrollment decreases the
longer that children stay in the program. This information can provide insight into whether renewal
processes may be a factor in the retention of children on CHIP and whether efforts to retain
children should focus on outreach at certain time points. Similarly, questions related to the duration
of exit spells and how the probability of reenrollment varies by time out of the program can be
answered using life tables of CHIP exit spells.
A major advantage of using life table analysis to generate descriptive statistics on enrollment
and exit spells is that it easily handles right-censored spells (spells that are ongoing at the end of the
study period). If such censoring is not taken into account, any estimate of spell length will be biased
downward. Right-censored spells contribute information to the life table only up to the month that
they are censored. For each month prior to censoring, we know that the spell lasts at least that long,
and we know that the spell does not end in that month. After censoring, the spell contributes no
more information to the table and is dropped from the analysis.
For the life table analysis, we will define enrollment as consecutive months of CHIP
enrollment. One issue with this definition involves what to do about one-month gaps in enrollment
identified in the administrative data. One option is to fill these gaps under the assumption that they are not "real" but rather due to administrative or data error. However, if these gaps are real, closing them may lead to an overestimate of the length of enrollment spells. We will test the sensitivity of our results to the treatment of one-month gaps by generating some of the basic life tables under each assumption: that the gaps are real, and that they should be closed up.
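A hypothetical helper for this sensitivity test, which closes any single non-enrolled month flanked by enrolled months before spells are built:

```python
# Close one-month gaps in a monthly enrollment sequence (1 = enrolled).
def close_one_month_gaps(months):
    fixed = list(months)
    for i in range(1, len(fixed) - 1):
        if fixed[i - 1] and not fixed[i] and fixed[i + 1]:
            fixed[i] = 1                      # treat the gap as a data error
    return fixed

print(close_one_month_gaps([1, 1, 0, 1, 0, 0, 1]))  # [1, 1, 1, 1, 0, 0, 1]
```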
For the analysis of subgroups, we will generate separate life tables for each subgroup of interest.
We will use the log-rank statistic within each subgroup category (e.g., race/ethnicity, state) to test the
significance of differences in the life table measures (e.g., median spell duration) across subgroups.
This information will help policymakers determine the most effective ways to target resources that
can improve retention and reduce the duration of coverage gaps among children eligible for CHIP
(or public coverage).
The summary outcomes generated from the life table analysis of the full sample and subgroups
will be presented in a format similar to Table VII.12.
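For the subgroup comparisons, the log-rank test could be run with the lifelines package roughly as follows; the spell-level DataFrame and its duration, event, and race_eth columns are assumed names.

```python
# Sketch: log-rank test of whether enrollment-spell survival curves differ
# across a subgroup category (here, race/ethnicity). `event` is 1 if the
# spell ended during the study period and 0 if right-censored.
from lifelines.statistics import multivariate_logrank_test

result = multivariate_logrank_test(df["duration"], df["race_eth"], df["event"])
print(result.test_statistic, result.p_value)
```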
Multivariate Analysis. Although the life table analysis will provide a rich description of
enrollment and disenrollment spells and the relationship between the spell duration measures and
key subgroups, it does not allow us to control for factors that may be driving (or confounding) these
bivariate relationships. To address this limitation, we will use a multivariate framework to isolate and
explain the linkages between specific factors, such as demographics and state policies, and the
duration of enrollment and disenrollment. In particular, we will estimate several multivariate hazard
models of time in and out of the program.
The proportional hazards model is the standard multivariate model to analyze duration until an
event. In a proportional hazard model, the hazard function, h(t |X, β), depends on a baseline hazard
component that is a function of time, t, a set of covariates, X, and a vector of parameters, β. This
model enables us to answer the question of how the likelihood of disenrollment/reentry depends on
the length of enrollment/disenrollment and a set of individual-, family-, and state-level
characteristics.
Because CHIP enrollment (or disenrollment) spells completed in the sample window may end
with a transition to Medicaid, private insurance, or no insurance, we will estimate proportional
hazards models in a competing risks framework, which allows spells to terminate with a transition
into any of these three alternative insurance states.32,33 This model is similar to a simple
proportional hazards model, except that it allows the effect of any covariate on the overall hazard
rate to vary by the type of transition made out of CHIP. For example, having special health care
needs, such as asthma, may increase the likelihood of transferring from CHIP into Medicaid but may
not change the likelihood of moving from CHIP to private insurance. Accounting for possible
variation in the effect of covariates on different coverage transitions, the model can identify factors
that affect each distinct transition and how the effects of these factors vary across transition types
and over time.

32 See Kenney et al. (2007), Dolton and van der Klaauw (1999), Van den Berg et al. (2008), and Wolbers et al.
(2009) for recent applications of competing risk hazard models.
33 The proportional hazard model is one of the most general regression models because it is not based on any
assumptions concerning the nature or shape of the underlying survival distribution. However, a key assumption of the
model is that the hazard function for an individual depends on the values of the covariates and the value of the baseline
hazard. Given two individuals with particular values for the covariates, the model assumes that the ratio of the estimated
hazards over time will be constant, which may not be a valid assumption. We will test this proportionality assumption to
determine whether the proportional hazards model is valid for our data.

Table VII.12. Duration of Enrollment Spells by Subgroup

Columns: Subgroup; Sample Size; Median Length of Enrollment (Months); Proportion Enrolled After 12 Monthsa; Proportion Enrolled After 18 Monthsa; Proportion Enrolled After 24 Monthsa; Log-Rank Statistic to Test Differences Across Subgroupsb

All Individuals
Household Structure
  Two Parents
  One Parent
  Other
Highest Education Level of Parent(s)
  No GED or HS Diploma
  GED or HS Diploma
  Some College/College Degree
Household Income by FPL Range
  <150% FPL
  150–199% FPL
  >200% FPL
Child's Overall Health Is Fair or Poor
Child Has Elevated Health Care Need
Child Has Asthma
Child Has Mental or Behavioral Health Condition
Age of Child (at Enrollment)
  Age 0–5
  Age 6–12
  Age 13–20
Child's Race/Ethnicity
  Hispanic
  White
  Black
  Asian
  All Other Races
Main Language Spoken at Home
  Spanish
  Other
Prior Insurance Coverage
  No insurance
  Medicaid
  Private coverage

a Cumulative exit rate at specific intervals.

b The log-rank test is a hypothesis test to compare the survival distributions of two samples. It compares the estimated monthly hazard rate to the expected monthly hazard rate, where the expected rate is calculated based on the null hypothesis that the hazard rate is the same for each time period of the subgroup category. The null hypothesis, that the distributions are the same across categories, is not rejected if the aggregate difference between the estimated and expected hazard rate is small relative to the aggregate variance of the difference. The null hypothesis is rejected if the difference is large.

For each state, we will estimate separate hazard models for each exit route j (e.g., Medicaid,
private insurance, uninsurance). Each of the models will have the following functional form:
(1)    h_j(t | X, β) = h_j,0(t) exp(X′β_j)    for j = 1, 2, or 3

where h_j is the hazard rate for exit route j, h_j,0(t) is the baseline hazard rate for exit route j (the hazard
when all covariates are equal to zero), and X is a set of covariates. The set of covariates will include
individual-, family-, and state-level characteristics hypothesized to affect enrollment (or
disenrollment) durations. In this framework, the overall hazard rate h(t | X, β) is the sum of the
hazards of the three types of transitions (Medicaid, private insurance, and uninsurance) that can be
observed. This model can be fitted non-parametrically using a Cox proportional hazards approach
(Cox 1972), where the baseline hazard is unspecified, or by assuming a functional form for the
baseline hazard, the most common of which is the Weibull distribution. The most appropriate
specification will be determined by the data at the time of analysis.
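The cause-specific approach could be sketched with the lifelines package as below: for each exit route, spells ending in any other route are treated as censored, and a separate Cox model is fit. The column names, survey weight, and covariate list are assumptions for illustration.

```python
# Sketch of competing-risks estimation via cause-specific Cox models.
# `exit_route` is 0 for right-censored spells and 1/2/3 for exits to
# Medicaid, private insurance, and uninsurance; other names are hypothetical.
from lifelines import CoxPHFitter

covariates = ["age", "fair_poor_health", "hh_income_fpl", "two_parent_hh"]
fits = {}
for route in (1, 2, 3):
    d = df[["duration", "exit_route", "weight"] + covariates].copy()
    d["event"] = (d.exit_route == route).astype(int)  # other routes -> censored
    cph = CoxPHFitter()
    cph.fit(d.drop(columns="exit_route"), duration_col="duration",
            event_col="event", weights_col="weight", robust=True)
    fits[route] = cph

fits[1].print_summary()  # hazard ratios for the CHIP-to-Medicaid transition
```

In lifelines, cph.check_assumptions(d) offers one way to probe the proportionality assumption discussed in footnote 33.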
Using data on CHIP enrollment spells, we will use the hazard models to estimate the effect of
individual and groups of factors on length of time in the program for each type of coverage
transition among CHIP disenrollees. Similarly, using data on CHIP exit spells, we will examine the
determinants of exit spell duration. In addition to examining CHIP spell lengths, we will also
estimate a similar model for enrollment/disenrollment spells for public insurance (CHIP and
Medicaid combined). The specification of the public insurance model is identical to that presented
above, except the overall hazard rate out of a public insurance enrollment/disenrollment spell is the
sum of the two hazard rates corresponding to a transition into private coverage or the uninsured
state.
We will also use the model estimates to create tables of regression-adjusted life table outcomes,
such as median length of enrollment (or disenrollment), for the full sample and specific subgroups.
These adjusted outcomes control for possible confounding and can be used to make inferences about
differences between subgroups. The adjusted outcomes also can be compared to the unadjusted
outcomes produced by the life table analysis to examine possible causes of variation in outcomes
across subgroups and states.
Finally, all analyses will be weighted using survey sampling weights to ensure that our estimated
outcomes are representative of the CHIP population at the time of sampling. It is important to note,
however, that the survey sample does not represent the population of all children who have been
enrolled in CHIP because it includes a smaller proportion of children who were enrolled for just a
short time (a result of being a point-in-time sample). While this can lead to overestimates of
enrollment spell durations, differences are likely to be small due to the exclusion of left-censored
spells and right-censoring, both of which act to counterbalance the effects of this so-called "length-time bias" on duration estimates (Flinn 1986). However, recognizing that restricting our analysis to
the survey sample has some drawbacks relative to using all state enrollment records, including some
loss of precision and statistical power due to the smaller size of the sample, we will test the
robustness of results based on the survey sample by replicating select analyses using all
administrative records. Due to the limited information on enrollee characteristics in the
administrative data, we will focus our robustness tests on key univariate statistics, including the
median duration of enrollment and exit spells and the probability of disenrolling/reenrolling at
specified intervals.

VIII. STATE PROGRAM DATA

We will collect state program data to conduct a descriptive analysis of the characteristics of the
CHIP program in all 50 states and the District of Columbia, as well as to provide in-depth analysis
about the 10 study states’ recent progress in enrolling and retaining children in the program. Our
findings will inform Congress and HHS about the evolution of CHIP program features and
operations nationwide, particularly following CHIP reauthorization and the economic downturn. In
addition, the findings from the analysis of enrollment data from the 10 study states will inform HHS
and the individual states of the policies and procedures that appear most able to expand coverage of
children in public programs—a particularly important issue in light of the pending major expansion
in public coverage through the Affordable Care Act. In this chapter, we describe our plans for the
analyses of state program data.
A. Analysis of CHIP Annual Reports and Other Secondary Data
During the first year of the contract, using data from the CHIP Annual Report Template
System (CARTS), the CHIP Statistical Enrollment Data System (SEDS), and other secondary data
sources, Mathematica will conduct a descriptive analysis of the characteristics of the CHIP program.
The CARTS and SEDS data systems are maintained by the Centers for Medicare & Medicaid
Services (CMS) and contain data submitted by states for program monitoring and tracking purposes.
This analysis will be included in the report to Congress and will document how CHIP has evolved
since the previous ASPE-sponsored CHIP evaluation and the program’s reauthorization in February
2009. The analysis will provide a comprehensive profile of CHIP in all 50 states and the District of
Columbia and place the 10 study states in a national context. In addition to CARTS and SEDS, the
analysis will draw on information from other secondary sources, including past evaluations and
studies, state plans and amendments, and other published reports. In the remainder of this section,
we describe the two main data sources (CARTS and SEDS), our plan for accessing the data, and our data analysis approach.
1. CARTS

States are mandated to assess the performance of their CHIP programs annually and report the
results to the Secretary of Health and Human Services by January 1 following the end of each federal
fiscal year (FFY). States submit their annual reports using the CARTS standardized template, which
includes quantitative measures and qualitative perspectives on each state’s performance and progress
during the previous year. Although CHIP programs vary widely, the template provides a structured
approach for each state to describe its program characteristics, report on core national performance
and state-specific measures, assess its program implementation and operations in key areas, and
discuss its major challenges and accomplishments. Table VIII.1 summarizes the content of the
CARTS reports.

Table VIII.1. Overview of Content of the CHIP Annual Report Template System (CARTS)

Section I: Snapshot of CHIP Program and Changes
  Program characteristics, including income limits, methods of application, type of delivery system, and changes since the previous report

Section II: Program's Performance Measurement and Progress

Section IIA: Reporting of Core Performance Measures
  Four core performance measures (current year performance compared to two previous years), standardized for every state:
  1. Well child visits in the first 15 months of life
  2. Well child visits in the 3rd, 4th, 5th, and 6th years of life
  3. Use of appropriate medications for children with asthma
  4. Children's access to primary care practitioners

Section IIB: Enrollment and Uninsured Data
  Enrollment data are provided directly from SEDS data, but states have an opportunity to make corrections or explain changes. Uninsured data are provided directly from the Current Population Survey (CPS) (three-year averages), but states are given an opportunity to supplement these data with alternative data sources and comment on any limitations in the CPS data that may affect the reliability or precision of those estimates.

Section IIC: State Strategic Objectives and Performance Goals
  Each state defines its own objectives and goals and provides data and an explanation of progress for the current year and previous two years

Section III: Assessment of State Plan and Program Operation
  Outreach: How outreach strategies have been modified, most effective methods of outreach, and strategies to target specific populations
  Substitution of Coverage: Description of substitution policies, how they are monitored and modified
  Eligibility: Medicaid and CHIP eligibility and eligibility for a CHIPRA performance bonus
  Eligibility Renewal and Retention: Most effective strategies to renew/retain CHIP eligibles
  Eligibility Data: Percentage of children retained or disenrolled at redetermination, percentage of children denied at enrollment
  Cost Sharing: Description of state tracking policies on cost sharing, assessment of cost sharing on utilization and participation
  Employer-Sponsored Insurance Program: Description of employer-sponsored insurance program for children and/or adults using Title XXI funds (data not available for FFY 2006)
  Program Integrity (for separate CHIP programs only): Plan for prevention, investigation, and referral of cases of fraud and abuse

Section IV: Program Financing for State Plan – Budget Information
  Budget information for current year and projected budget for 2 years

Section V: 1115 Demonstration Waivers
  Identify demonstration waivers financed through CHIP (if any)

Section VI: Program Challenges and Accomplishments
  Narrative covering state's political and fiscal environment, program challenges, accomplishments, and future changes (either planned or already made)

Source: Centers for Medicare & Medicaid Services 2008.


Mathematica will analyze CARTS data for four years, beginning with FFY 2006 and continuing
through FFY 2009. With the exception of information on employer-sponsored insurance, the
CARTS content has not changed significantly from FFY 2006 to FFY 2009. We anticipate that
CHIP annual reports for FFY 2010 will not be available in time for analysis for the 2011 report to
Congress, as states are not required to submit their 2010 reports until January 1, 2011; also, they are
frequently delayed in their submissions. However, we will do our best to capture information from
the FFY 2010 reports for the 10 study states.
2. SEDS

The SEDS contains aggregate CHIP enrollment data submitted by each state on a quarterly
basis using standardized statistical reporting forms. The data are housed in an online system
maintained by CMS. States are required to submit quarterly enrollment data within 30 days of the
end of the quarter and aggregate annual data within 30 days of the end of the fourth quarter. We will
use data primarily from three forms: (1) Form CMS-21E, which gathers data on children enrolled in
separate CHIP programs; (2) Form CMS-64.21E, which gathers data on children in Medicaid
expansion CHIP programs; and (3) Form CMS-64EC, which gathers data on children enrolled in
Title XIX-funded Medicaid coverage. 34 Table VIII.2 lists the enrollment measures reported in
SEDS.
Table VIII.2. SEDS Enrollment Measures and Definitions

Unduplicated number ever enrolled during the quarter: Number enrolled in the program for any length of time during the quarter

Unduplicated number of new enrollees in the quarter: Number enrolled in the program at any time during the quarter who were not enrolled in the program as of the last day of the previous quarter

Unduplicated number of disenrollees in the quarter: Number disenrolled from the program at any time during the quarter who were not re-enrolled as of the last day of the quarter

Number of member-months of enrollment in the quarter: Sum of member-months for each child ever enrolled during the quarter, calculated by counting one month for each month in which the child was enrolled for at least one day and then aggregating the number of months across all children ever enrolled during the quarter

Average number of months of enrollment: Automatically calculated by dividing the member-months of enrollment by the number ever enrolled

Number of children enrolled at quarter's end: Number enrolled in the program on the last day of the quarter

Unduplicated number ever enrolled in the year: Number enrolled at any time during the FFY

Unduplicated number of disenrollees in the year: Number disenrolled from the program at any time during the FFY

Source: Centers for Medicare & Medicaid Services 2009.
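To make the member-months definition concrete, the following minimal sketch (our illustration only; the data structures are hypothetical, not the SEDS form layout) counts one month for every month in which a child was enrolled for at least one day and aggregates across children:

    def member_months(children, quarter_months):
        """Sum member-months for a quarter: one month for each month in
        which a child was enrolled for at least one day, aggregated over
        all children ever enrolled during the quarter. `children` maps a
        child ID to the set of (year, month) tuples with any enrollment;
        `quarter_months` is the set of (year, month) tuples in the quarter."""
        return sum(len(months & quarter_months) for months in children.values())

    # Example: two children in the first quarter of FFY 2009 (Oct.-Dec. 2008).
    quarter = {(2008, 10), (2008, 11), (2008, 12)}
    children = {"a": {(2008, 10), (2008, 11)},  # enrolled 2 of 3 months
                "b": {(2008, 12)}}              # enrolled 1 of 3 months
    print(member_months(children, quarter))                  # 3 member-months
    print(member_months(children, quarter) / len(children))  # average = 1.5 months

The second calculation corresponds to the "average number of months of enrollment" measure, which SEDS derives by dividing member-months by the number ever enrolled.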

34 Other forms submitted through SEDS cover populations not relevant to our study, including Form CMS-21PW, which gathers data on low-income pregnant women, and Form CMS-21 Waiver, which gathers information on adults enrolled in CHIP under a Section 1115 waiver.


SEDS provides these measures along several other dimensions, including the child’s age; service
delivery system (fee-for-service [FFS], managed care [MC], primary care case management [PCCM]);
type of CHIP program (M-CHIP, S-CHIP, or combination); family income; race; ethnicity; and
gender. We will assess the completeness of reporting across these various dimensions (that is, the
extent of missing/unknown data) and will present trends by each of these to the extent the data
permit. At a minimum, we will present SEDS data for FFY 2006 through FFY 2009; however, more
recent SEDS data may also be used to highlight trends after CHIPRA’s implementation.
3. Accessing the Data

To access both SEDS and CARTS data electronically, Mathematica employees who need to use these data have completed an “Application for Access to CMS Computer Systems” form. The forms
have been sent to Jeffrey S. Silverman, the CMS contact person for CARTS and SEDS, who will
review the applications and grant access. Once granted access, our team members can enter the
electronic systems and extract the data for specific measures and populations.
To complement CARTS and SEDS data, Mathematica will obtain information from other
secondary resources. State plan data are one available option. The CMS website contains the currently
approved CHIP state plans, all state plan amendments, and press releases for all states. Other
resources include reports by the Kaiser Family Foundation, the Georgetown University Health
Policy Institute’s Center for Children and Families, and Health Management Associates, all of which
publish CHIP enrollment figures and policies by state.
4. Analyzing the Data

We will draw on our experience in analyzing CARTS and SEDS data under Mathematica’s
CMS-sponsored evaluation of CHIP. Under the CMS evaluation, we abstracted data from CHIP
annual reports to develop a national profile of the program and document how it evolved over its
first decade. Because the reports are not maintained in a database per se, we will create a state-level
database and populate it with information from CARTS. This will involve targeted data entry of
selected state program characteristics or quantitative indicators (such as the core performance
measures) and the “cutting and pasting” of narrative, open-ended items (such as state descriptions of
their challenges and accomplishments or program operations). A common challenge is ensuring the
comparability and completeness of the data across states and over time. To accomplish this, we will
make a significant effort to assess data quality from CARTS, particularly by comparing state reports
over time and identifying any unusual patterns not explained in them. If necessary, we will contact
CMS to review outliers or anomalies to ensure the integrity of the data for the report to Congress.
Given the limited time available to prepare this report, we recognize the need to be selective in our
use of CARTS and SEDS and so will focus on items for which reporting is comparable and
complete across states and over time, excluding items for which the reporting lacks comparability
(such as the state-specific performance measures in CARTS).
Our focus will be on characterizing the evolution of state program features and operations over
the four- to five-year period, particularly following CHIP reauthorization and the economic
downturn. We will focus on changes in eligibility criteria, eligibility determination and
redetermination policies and procedures, benefits and cost sharing, and coordination with employer-sponsored insurance coverage. We will create a series of state-level tables highlighting the features of


CHIP programs over time, with a particular emphasis on changes that occurred between FFY 2006
and 2009. We also will abstract the most recent data on the four core performance measures. 35 As
in the past, we will benchmark state CHIP performance against Medicaid and commercial plans for
the four core measures. We also plan to synthesize state reports on major challenges and
accomplishments, although we recognize that the specificity and quality of the information may vary
from state to state. Nevertheless, such information will provide a state perspective on the challenges
and opportunities facing the CHIP program as it enters its second decade.
To complement the operational perspective contained in the CHIP annual reports, we will track
CHIP enrollment trends using SEDS. We will refer to information from state annual reports in
CARTS to explain anomalies in the trends (such as large increases or decreases over time). One of
the factors complicating the use of SEDS is that state reporting may vary over time. For example,
some states do not report data in every time period or for their entire CHIP population; moreover,
some states have changed their specifications (such as how they count children enrolled in both M-CHIP and S-CHIP during a quarter). Using our knowledge of SEDS data (coupled with the cross-checking we routinely conduct against Medicaid Statistical Information System [MSIS] data), we will
develop a time-series of CHIP enrollment that is as consistent as possible, focusing on quarterly ever
enrolled and annual ever enrolled. We will also use SEDS data to classify the dominant delivery
system in the state and examine how it has evolved over time (FFS, MC, PCCM, or mixed), using
the approach developed for the CMS-sponsored CHIP evaluation. Finally, we will examine
variations in continuity of enrollment across states based on the average number of months of
enrollment and the percentage of enrollees who were disenrolled during the quarter or year.
B. Analysis of CHIP and Medicaid Enrollment and Eligibility Data
Between December 2007 and December 2009, the number of children covered by CHIP and
Medicaid grew by nearly 7 million nationwide (Kaiser Commission on Medicaid and the Uninsured
2010; Smith et al. 2010). Some of this increase is almost certainly tied to a decline in families’ access
to affordable coverage for their children, as unemployment has grown and the cost of employer-based coverage for working families has continued to rise. At the same time, however, many states
have also been taking aggressive steps in recent years to facilitate enrollment and retention of
children eligible for CHIP and Medicaid—from expansions in income eligibility to the adoption of
simplified enrollment procedures and improvements in renewal procedures to speed the process and
reduce the potential burden on eligible families (Ross et al. 2009). Such changes have been fueled
not only by states’ recognition of the growing importance of CHIP and Medicaid as a source of
affordable coverage but by the new flexibility and financial incentives offered by the federal
CHIPRA legislation to adopt these changes.
In the 10 states that are the focus of the CHIP survey, we will use CHIP administrative data to
analyze in detail the states’ recent progress in enrolling and retaining children in the program. In
addition, in states where we can link these data with similar data from the state’s Medicaid program,
we will extend the analysis to examine trends in the transition of children between the two programs
35 The FFY 2010 CARTS report template introduces substantial changes in the performance measures states must report. In particular, the asthma medications measure will no longer be available and is being replaced with other asthma measures. Should the FFY 2010 CARTS data be available for the 10 study states, we will assess the quality and completeness of these new measures before using them. This may be a non-issue, given the delay in data submission for many states.


and the retention of children in overall public coverage (that is, in the two programs combined).
Through these analyses, we will also examine how recent gains in new enrollment and/or retention
may be linked with specific policy or procedural changes in the states, such as the adoption of
presumptive eligibility, express lane eligibility, or administrative renewal policies. Our findings will in
turn inform HHS and individual states of the policies and procedures that appear most able to
expand coverage of children in public programs—a particularly important issue in light of the
pending major expansion in public coverage through the Affordable Care Act.
1. Research Questions
More specifically, the main research questions we will address through this analysis include:
• Enrollment Trends. What are the recent trends in overall CHIP enrollment, Medicaid
enrollment, and enrollment in public coverage across the 10 study states? How do these
enrollment trends differ across states? To what extent are changes in these trends a
function of changes in new enrollment? To what extent are they a function of changes in
disenrollment/retention?
• Program Churning and Transition Trends. How do states differ in program
churning and transition? To what extent has program churning declined over time in the
study states? To what extent has program transition increased? How do any
improvements observed in program churning or transitions translate into improvements
in retention? For which programs and in which states are these gains most evident?
• Role of State Policy and Procedural Changes. To what extent have outreach or
policies/procedures designed to simplify program application and enrollment been
responsible for any gains in new and overall enrollment across the states? To what extent
are policies/procedures designed to simplify program renewal or otherwise extend
retention responsible for any gains in retention across the states? On a related topic, to
what extent are they responsible for any declines in program churning or any gains in
program transition?

2. Data

Data for this analysis will be acquired from each of the 10 CHIP survey states for a period of up
to five years, as available—from approximately July 2007 to June 2012. (The latter date is estimated
based on the anticipated timing of the request in summer 2011). These data typically reside in one of
two administrative sources. The first is the CHIP management information system (MIS)—the same
system from which we expect to acquire the sample frame for the CHIP survey in each of the 10
states. The types of information we expect to request from this system include the timing of CHIP
coverage for each child enrolled in the program over the five-year period; the child’s date of birth,
county and zip code of residence, and eligibility category (for defining co-payment) for each spell of
coverage; and a unique identifier that can be used for linking the child across multiple CHIP
coverage spells and between CHIP and Medicaid coverage spells (as relevant). The second source of
information is the Medicaid MIS—the same system from which we expect to acquire the sample
frame for the Medicaid survey in each of the 3 states where we will be surveying Medicaid enrollees.
The information that will be requested from this system is equivalent to that expected to be
requested from the CHIP MIS, except that it will pertain to the child’s period(s) of coverage in
Medicaid, as opposed to CHIP.


In most states, we expect the process of acquiring these data to be straightforward, in large part
because it centers on the same systems (and the same data elements) that will support the earlier
construction of the CHIP and Medicaid survey sample frames. In addition, in several of the states,
we expect that the request may need to center on only a single system, either because the state has
an M-CHIP component only or because it has taken the step of integrating its S-CHIP and Medicaid
enrollment data into a single MIS. In any states where neither of these conditions is met, however,
the process of acquiring the state’s Medicaid data could present a challenge. Specifically, in states
that (i) have only an S-CHIP component, (ii) maintain their S-CHIP enrollment data in a separate
MIS, and (iii) are not selected for the Medicaid survey, the request for Medicaid data will be new and
may focus on a relatively antiquated Medicaid MIS (given that it has not been integrated with CHIP).
We still anticipate ultimately acquiring all of the requested data from these states. Nevertheless, given
the relative challenge of the process, we must acknowledge the possibility that, in one or two S-CHIP-only states, our analysis might focus only on CHIP.
One important question that may arise regarding our data acquisition is why it excludes data
elements tied specifically to the application and renewal process, such as the number of denied
applications or renewals or the reasons children are disenrolled. We recognize that these data
elements are potentially quite useful for our analysis; for example, they could be used to determine
which children disenrolling from CHIP may remain eligible for public coverage—a potentially useful
indicator of states’ progress with retention. Nevertheless, we do not plan to request these data for
two reasons. First, because they often reside in a separate eligibility data system, these data elements
can be time consuming and costly to acquire. Second, because they are less critical to states for
managing their programs—compared with, say, whether a child has coverage and their basis of
eligibility—the reliability of these data elements could be suspect and, even to be considered for the
analysis, substantial resources would need to be invested to assess their quality and usefulness.
3. Focal Measures

Drawing on the requested data elements from the states’ MIS files, we will construct a series of
measures spanning the course of children’s coverage—from their program entry and retention to
their eventual program disenrollment and possible return (either through transfer between programs
or churning back into the same program following a gap in coverage). All measures will be
constructed on a monthly basis over the five-year period, creating a substantial time series from
which to investigate trends and their links to state policies.
Three basic measures allow us to monitor the number of children covered in a given month, as well
as the number entering and exiting coverage. They are:
• Total Enrollment. The total number of children enrolled in coverage in a given month
• Total New Enrollment. The total number of children who have newly enrolled in (i.e.,
entered) coverage in a given month
• Total Disenrollment. The total number of children who have newly disenrolled (i.e.,
left) coverage in a given month
To understand the coverage experiences of children prior to their becoming newly enrolled in a program, we will use a set of four basic enrollment source measures (an illustrative coding sketch follows this list); they are:


• Churners. The proportion (or count) of new enrollees in a particular program who have
recently disenrolled from that same program. A downward trend in this measure reflects
a positive retention outcome.
• Seamless Transfers. The proportion (or count) of new enrollees in one program (i.e.,
S-CHIP or Medicaid) who transfer, without a gap in coverage, from the other program.
An upward trend in this measure reflects a positive retention outcome.
• Non-Seamless Transfers. The proportion (or count) of new enrollees in one program
(S-CHIP or Medicaid) who transfer, with a short gap in coverage, from the other
program. Absent a decline in seamless transfers, an upward trend in this measure reflects
a positive retention outcome.
• New Entries. The proportion (or count) of new enrollees who show no prior public
coverage for at least a year. (In other words, they are truly “new” to public coverage). An
upward trend in this measure reflects a positive enrollment outcome.
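As an illustration of how these mutually exclusive categories might be assigned from linked spell records, the sketch below classifies each new enrollment by looking back at the child's most recent prior spell. The field names and gap thresholds (a one-month grace period for seamless transfers, twelve months for new entries) are our assumptions for illustration, not specifications drawn from the state MIS files:

    from datetime import date

    def months_between(earlier, later):
        """Whole calendar months from the end of one spell to the start of the next."""
        return (later.year - earlier.year) * 12 + (later.month - earlier.month)

    def classify_new_enrollee(program, start, prior_spells,
                              seamless_max_gap=1, new_entry_min_gap=12):
        """Assign an enrollment-source category to a new spell in `program`
        beginning at `start`. `prior_spells` holds (program, end_date) pairs
        for the child's earlier CHIP/Medicaid spells; thresholds are illustrative."""
        if not prior_spells:
            return "new entry"
        last_program, last_end = max(prior_spells, key=lambda spell: spell[1])
        gap = months_between(last_end, start)
        if gap >= new_entry_min_gap:
            return "new entry"              # no public coverage for at least a year
        if last_program == program:
            return "churner"                # returned to the same program
        if gap <= seamless_max_gap:
            return "seamless transfer"      # moved between programs without a gap
        return "non-seamless transfer"      # moved between programs after a short gap

    # A child re-entering S-CHIP three months after leaving Medicaid:
    print(classify_new_enrollee("S-CHIP", date(2010, 6, 1),
                                [("Medicaid", date(2010, 3, 31))]))
    # -> non-seamless transfer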
A similar set of post-disenrollment measures will help us to understand children’s coverage
experiences after leaving a program; they include:
• Churners. The proportion (or count) of disenrollees from a particular program who
return to that same program within a short time. A downward trend in this measure
reflects a positive retention outcome.
• Seamless Transfers. The proportion (or count) of disenrollees from one program (i.e.,
S-CHIP or Medicaid) who transfer, without a gap in coverage, to the other program. An
upward trend in this measure reflects a positive retention outcome.
• Non-Seamless Transfers. The proportion (or count) of disenrollees from one program
(S-CHIP or Medicaid) who transfer, with a short gap in coverage, to the other program.
Absent a decline in seamless transfers, an upward trend in this measure reflects a positive
retention outcome.
Finally, a series of duration measures will provide an effective and relatively straightforward way to monitor the retention of children in public coverage (a computational sketch follows this list). While the specific intervals listed below are somewhat arbitrary, they have been chosen to reflect the retention outcomes of children before, during, and after a state's annual renewal process.
• Retention After 6 Months. The proportion of new enrollees who remain in public
coverage for at least 6 months. An upward trend in this measure reflects a positive
retention outcome.
• Retention After 12 Months. The proportion of new enrollees who remain in public
coverage for at least 12 months. An upward trend in this measure reflects a positive
retention outcome.
• Retention After 18 Months. The proportion of new enrollees who remain in public
coverage for at least 18 months. An upward trend in this measure reflects a positive
retention outcome.
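The duration measures can be computed from the same spell file. In the sketch below (again illustrative; the month indexes and the treatment of seamless transfers as continuous coverage are our assumptions), a new enrollee counts as retained at a given horizon only if public coverage continues through every intervening month:

    def retained_at(entry_month, covered_months, horizon):
        """True if the child has CHIP or Medicaid coverage (either program,
        so seamless transfers count as continuous) in each of the `horizon`
        months starting from the month of new enrollment."""
        return all(m in covered_months
                   for m in range(entry_month, entry_month + horizon))

    # Example cohort of two new enrollees, indexed from their entry month.
    cohort = [(0, set(range(0, 14))),  # covered months 0-13: retained at 6 and 12
              (0, {0, 1, 2, 3})]       # disenrolled after 4 months
    for horizon in (6, 12, 18):
        rate = sum(retained_at(e, c, horizon) for e, c in cohort) / len(cohort)
        print(f"Retention after {horizon} months: {rate:.0%}")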
To the extent that the practice is beneficial, we can limit any of these measures to a particular
demographic or eligibility group of interest. Such a delimitation can be particularly useful when

examining links between these measures and specific policy changes, as the measure can become
more sensitive to the change. For example, to examine the role of a major outreach effort in a
particular part of a state, our basic count measure of new enrollment could be divided into separate
counts for the region(s) that the outreach did and did not target. Then, by comparing trends for
these two measures before, during, and after the period of the outreach campaign, we could gain a
reliable gauge of how the campaign may have translated into additional enrollment.
4. Analytic Approach

For each of the 10 states, a preliminary descriptive analysis will assess trends for the series of
measures above. Through this analysis, we will gain a full understanding of states’ progress in
enrolling and retaining children in CHIP, Medicaid, and public coverage—addressing the first two of
the three overarching research questions for this analysis. In addition, we will be able to contrast
progress among states, providing a preliminary understanding of how differences in states’ economic
conditions, program models, or other major program features may be associated with variations in
coverage. Finally, by identifying any major shifts or “spikes” in our measures over time, the
descriptive analysis will provide a starting point for understanding the role that specific policy or
procedural changes may be having on enrollment and retention across the states.
Building on the findings from the descriptive analysis, a more focused multivariate analysis will
examine more rigorously how specific policy and procedural changes may be affecting coverage
trends—addressing the third overarching research question. To conduct this analysis, we first will
pool our data across the 10 states, creating a panel (cross-time, cross-state) data file of the various
enrollment and retention measures described above. Next, drawing on information from the site
visits and other descriptive/qualitative sources, we will add to the file a series of explanatory
variable(s) for when, and in which states, major policy or procedural changes have been adopted.
(Examples include the adoption of express lane eligibility, centralization of the renewal process, and
elimination of income verification.) Using basic multivariate models, we then will regress each
measure on the relevant policy variable(s) of interest, adding time- and state-fixed effects to control
for the influence of any secular trends and state-specific factors that have a mostly constant effect
on outcomes over time. (The model will also include a control variable for the state unemployment
rate and possibly other economic indicators to account for possible fluctuations in eligibility due to
economic conditions.) A general specification of this model is as follows:
Outcome(s,t) = Policy(s,t)*Β1 + S(s)*Β2 + T(t)*Β3 + [Characteristics(s,t)]*Β4 + ε(s,t),
where:
Outcome (s,t) is the outcome of interest, measured for a given state s in time period t (e.g., in a
given month). The regression model that we will use to estimate this equation will depend on the
properties of the outcome variable. For example, for a simple continuous variable (such as total new
enrollment), the model can be estimated through linear or log-linear regression. 36

36 For proportional measures—such as the fraction of new enrollees that retain coverage for a specific period or the fraction of disenrollees that churn—the model can be estimated at the individual level (rather than this aggregate level) using logistic regression. This approach may in fact be preferred because the model can include control variables for various individual-level characteristics available from the administrative data (such as the child's age and eligibility group and whether the family's residence is rural or urban). With such control variables, the model can account for changes in the composition of the enrollee/disenrollee population over time that might otherwise introduce bias in the estimated policy effects.

Policy(s,t) is a vector of (one or more) binary indicator variables equal to one if the policy was
in place in state s in period t, and zero otherwise.
S(s) is a dummy variable that captures fixed differences across states (state fixed effects).
T(t) is a dummy variable that captures time effects common to all states (time fixed effects).
Characteristics(s,t) is a vector of variables measuring the characteristics of the state s in time
period t, such as the unemployment rate.
Β1, Β2, Β3, and Β4 are vectors of coefficients estimated for the four sets of covariates.
ε(s,t) is a random error term.
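As one concrete way to estimate this specification, the state-month panel could be analyzed with ordinary least squares, entering the state and time fixed effects as dummy variables. The sketch below is illustrative only; the file name, the two policy indicators, and the outcome are hypothetical stand-ins, and standard errors are clustered by state to allow for within-state serial correlation:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel file: one row per state-month, with the outcome,
    # binary policy indicators, and the state unemployment rate.
    panel = pd.read_csv("state_month_panel.csv")

    # C(state) and C(month) expand into state and time fixed effects; the
    # policy coefficients correspond to the Β1 vector in the model above.
    model = smf.ols("new_enrollment ~ express_lane + admin_renewal"
                    " + unemployment_rate + C(state) + C(month)", data=panel)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})
    print(result.params[["express_lane", "admin_renewal"]])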
For each policy variable, the corresponding coefficient from the vector Β1 provides an estimate
of its effect on the measure of interest. To the extent that this estimate is statistically significant, it
indicates that the policy has had a significant impact on the measure. However, before we can draw
this causal conclusion with any confidence, we must rigorously assess the robustness of the estimate
as to the risk that other, unobserved factors may be influencing the outcome trends and possibly
biasing the coefficients associated with the policy variable. (Examples of possible unobservables
include major fluctuations in the numbers of children eligible for public programs that go
unaccounted for and unobserved differences between the families eligible for or enrolled in
programs over time.) Indeed, adding to this risk of bias—which is evident for any study that exploits
cross-time, cross-state variation in policies to estimate their effects—is the limited degrees of
freedom afforded by a 10-state sample, which may preclude inclusion of all relevant policy variables
in a given model.
The specific robustness tests to be conducted will depend on the nature of the measure and the
possible sources of bias in the policy estimate. For example, to address the risk associated with the
simultaneous effect of different policies (which, as noted, may arise because we cannot include all of
them in the model simultaneously), we will re-estimate the model for a given outcome, including
variables one at a time and in different combinations. To the extent that the original estimate is
significant for a given policy and the robustness test(s) shows consistency in the estimate’s size and,
ideally, significance, we can more credibly interpret the estimate as causal. Alternatively, if the tests
do not show such consistency, the credibility of this interpretation is reduced. Instead, in this latter
scenario, we will interpret the estimate more cautiously, characterizing it as evidence of an
association rather than an impact.
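The policy-combination robustness check described above lends itself to a simple loop. Continuing the illustrative variables from the previous sketch, the model is re-estimated with the policy indicators entered one at a time and in every combination, and the stability of each coefficient is tracked across specifications:

    from itertools import combinations

    policies = ["express_lane", "admin_renewal", "no_income_verification"]
    for k in range(1, len(policies) + 1):
        for subset in combinations(policies, k):
            formula = ("new_enrollment ~ " + " + ".join(subset)
                       + " + unemployment_rate + C(state) + C(month)")
            fit = smf.ols(formula, data=panel).fit(
                cov_type="cluster", cov_kwds={"groups": panel["state"]})
            for p in subset:  # a coefficient stable in size across subsets is
                              # more credibly interpreted as causal
                print(subset, p, round(fit.params[p], 3), round(fit.pvalues[p], 3))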



IX. NSCH/SLAITS
The 2011 National Survey of Children’s Health (NSCH), a module of the State and Local Area
Integrated Telephone Survey (SLAITS), is a national survey of households with children under 18
years of age. One section of the NSCH targets uninsured children in households below 400 percent
of the Federal Poverty Level (FPL). Questions in this section focus on awareness and perceptions of
Medicaid and CHIP, children’s prior enrollment in both public and private coverage, and families’
potential access to employer-sponsored insurance. Data collection for the NSCH, expected to begin
in January 2011, is slated to identify thousands of uninsured children nationally. 37 Together with the
information collected about these children elsewhere in the NSCH, the survey provides information
on the reasons eligible children continue to be unenrolled in Medicaid/CHIP, differences in these
reasons across subgroups, the characteristics of uninsured children, whether reasons for
nonparticipation may have changed over time, and the extent to which uninsured children may be
able to enroll in coverage.
A. Design/Content of the NSCH and Uninsured Component
Design of NSCH. Like its predecessors in 2003 and 2007, the 2011 NSCH is sponsored by the
Department of Health and Human Services’ Maternal and Child Health Bureau. It is conducted by
the Centers for Disease Control and Prevention’s (CDC) National Center for Health Statistics
(NCHS) as a module of the SLAITS. SLAITS surveys are random digit dial (RDD) telephone
surveys conducted using the sampling frame of the CDC’s National Immunization Survey.
Interviews are conducted in each of the 50 states and the District of Columbia and are designed to
produce both national and state-specific estimates. To be included in the NSCH, households are
screened to identify those having at least one child under age 18; detailed interviews are conducted
for one randomly selected child in the household. Interviews are conducted in English, Spanish,
Mandarin, Cantonese, Vietnamese, or Korean with the adult (usually a parent) most knowledgeable
about the health and health care of the sampled child (Blumberg et al. 2009).
Identification of Uninsured Children. All children in the NSCH are screened for eligibility
for the detailed uninsurance section of the survey, defined as being uninsured and living in a
household below 400 percent of the FPL. Uninsurance is defined as not having “any kind of health
care coverage, including health insurance, prepaid plans such as HMOs, or government plans such
as Medicaid” at the time of the survey. Income below 400 percent of the FPL is calculated by
comparing the household’s total income in the past calendar year (ascertained through a single
question) to the federal poverty guidelines for that household size and the appropriate year.
Respondents who do not answer the income question are asked a series of questions to determine
whether their income falls within certain income ranges that represent multiples of the poverty level
for the specified household size. For the cases that do not provide a response, NCHS will likely
conduct an income imputation, and the analysis of this section will only include those cases imputed
to have incomes below 400 percent of the FPL.
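The screening computation itself is straightforward. In the sketch below, the default guideline values are the 2009 HHS poverty guidelines for the 48 contiguous states ($10,830 for a one-person household plus $3,740 per additional person), used purely for illustration; the NSCH applies the guideline for the appropriate year and household size:

    def percent_of_fpl(household_income, household_size,
                       base=10_830, per_person=3_740):
        """Household income as a percentage of the federal poverty guideline.
        Defaults are the 2009 guidelines for the 48 contiguous states and
        serve only as an example."""
        guideline = base + per_person * (household_size - 1)
        return 100 * household_income / guideline

    # A household of four reporting $40,000 in income:
    pct = percent_of_fpl(40_000, 4)  # guideline = $22,050, so about 181% of FPL
    print(f"{pct:.0f}% of FPL; screened into uninsurance section: {pct < 400}")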

37 The prior round of the NSCH, in 2007, identified approximately 6,800 uninsured children of all income groups
(Centers for Disease Control 2010).


Notably, the eligible income range of below 400 percent of the FPL in the 2011 NSCH is
higher than the range of below 200 percent of the FPL for inclusion in the Low-Income Uninsured
Supplement to the 2001 SLAITS Survey of Children with Special Health Care Needs (SHCN) used
in the prior evaluation (Kenney, Haley, and Tebay 2004). There are two main reasons for the
emphasis on higher-income uninsured children in the new SLAITS analysis.
• First, many states have expanded CHIP eligibility to higher-income children. Since the
enactment of CHIPRA in early 2009, 15 states have expanded eligibility to higher-income children. As of January 2011, in all but four states, children in families with income up to 200 percent of the FPL are covered under either Medicaid or CHIP. In half of states (24 states and DC), children in families with income of 250 percent of the FPL or higher are eligible for Medicaid or CHIP. 38 Yet 21 percent of uninsured children
are in families with incomes between 200 and 400 percent of the FPL. 39 Even in 2008,
before the expansions of CHIP to higher-income children, a significant share of eligible
but uninsured children were in families with incomes above 200 percent of the FPL. 40
• Second, new provisions in the ACA will provide coverage subsidies for families with
incomes between 138 and 400 percent of the FPL, and new questions in this survey
dealing with offers of employer coverage in the family, the extent to which employers
contribute to premiums, and firm size can help to identify which children likely will be
eligible for this subsidized coverage. To complement the analysis of reasons uninsured
children do not participate in Medicaid/CHIP, we plan to examine how many of these
children may become eligible for subsidized coverage through the new health insurance
exchanges.
Topics. Table IX.1 provides the topics to be covered in the uninsurance section of the survey.
Questions about Medicaid and CHIP will use state-specific program names. (CHIP questions will
not be asked in states whose two programs use the same name; analysis of awareness and
perceptions of CHIP will be limited to children in states with separate CHIP programs.) Perceptions
of Medicaid and CHIP will be obtained for those respondents who are familiar with Medicaid
and/or CHIP.

38 “Health Policy Brief: Enrolling More Kids in Medicaid and CHIP,” Health Affairs, January 27, 2011.

39 “Health Policy Brief: Enrolling More Kids in Medicaid and CHIP,” Health Affairs, January 27, 2011.

40 Genevieve M. Kenney, Victoria Lynch, Allison Cook, and Samantha Phong, “Who And Where Are The Children Yet To Enroll In Medicaid And The Children’s Health Insurance Program?” Health Affairs, vol. 29, October 2010. In 2008, of the 4.7 million children eligible for Medicaid or CHIP but not enrolled, “3 million had family incomes below 133 percent of the federal poverty level, 1.2 million had family incomes of 133–200 percent of poverty, and 500,000 had incomes above 200 percent of poverty.”


Table IX.1. Topics Covered in Uninsured Section of the 2011 National Survey of Children’s Health
Medicaid/CHIP Awareness, Perceptions, and Experiences
− Awareness of Medicaid
− Awareness of CHIP
− Whether respondent knows where to go to get more information about Medicaid/CHIP
− Whether respondent knows how to enroll child in Medicaid/CHIP
− How difficult respondent thinks it is to enroll or re-enroll in Medicaid/CHIP
− Whether respondent believes child is eligible for Medicaid/CHIP and, if not, why not
− Whether respondent is interested in enrolling child in Medicaid/CHIP and, if not, why not
− Whether ever enrolled in Medicaid
− Whether ever enrolled in CHIP
Child’s Insurance Coverage History
− Main reason child is uninsured
− Length of time uninsured
− Whether ever had employer-sponsored coverage
− Whether ever had private nongroup coverage
− If ever enrolled in Medicaid, when and why enrollment ended
− If never enrolled in Medicaid, whether ever applied for Medicaid and, if so, when and why unable to
enroll
− If ever enrolled in CHIP, when and why enrollment ended
− If never enrolled in CHIP, whether ever applied for CHIP and, if so, when and why unable to enroll
Availability of Employer-Sponsored Insurance (ESI) for Child (asked about both mother and father if
both are present in household; if respondent is not parent, questions are asked about the respondent)
− Whether parent has insurance coverage and, if so, whether coverage is provided through employer
− If parent does not have ESI, whether parent is eligible
− If parent has or is eligible for ESI: whether child is eligible (and, if so, why child is not covered);
whether employer pays for all/some/none of the cost for family coverage; and size of firm

For all sampled children, the NSCH collects information on a range of characteristics, including
health insurance coverage (classified as uninsured, Medicaid/CHIP, or other at the time of the
interview); health status and access (such as access to care, utilization, medical home, identification
of children with SHCN, presence of common conditions, child well-being, and parental health);
demographic and socioeconomic characteristics (such as the gender and age of the child,
race/ethnicity/interview language, and educational attainment of the respondent); household
characteristics (such as income, household size, and participation in other public programs, such as
cash assistance and food stamps); and geographic characteristics (such as region and residential
location) that will be used to conduct subgroup analysis.
Weights. NCHS plans to develop child-specific sample weights for each record. These will take
into account the probability of selection of households and children into the sample. They will
include adjustments for unit nonresponse and noncoverage of nontelephone households. At the
final stage of weighting, a post-stratification adjustment will be made to “known” population totals,
which may include race/ethnicity, age, gender, household income, mother’s education, and number
of children in the household.
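Post-stratification of this kind scales each record's weight so that weighted totals match the known population counts within each adjustment cell. A minimal sketch follows; the cell labels and totals are invented solely to show the mechanics:

    import pandas as pd

    # Hypothetical respondent file with base weights and one adjustment cell.
    df = pd.DataFrame({"cell": ["A", "A", "B", "B", "B"],
                       "weight": [100.0, 150.0, 80.0, 120.0, 100.0]})
    pop_totals = {"A": 500.0, "B": 250.0}  # "known" population totals (invented)

    # Scale weights within each cell so they sum to the known total.
    cell_sums = df.groupby("cell")["weight"].transform("sum")
    df["ps_weight"] = df["weight"] * df["cell"].map(pop_totals) / cell_sums
    print(df.groupby("cell")["ps_weight"].sum())  # now matches pop_totals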


Imputations. Imputations will be made to address item nonresponse for all variables used to
construct weights. 41 Additional imputations may be made for analytic purposes. We will examine
levels and patterns of item nonresponse to assess whether variables are reliable. If analytic variables,
such as income, are not reliable, and imputed versions are not available, we may not be able to use
them for detailed analysis.
Precision. Because of the complex design of the SLAITS, the variances of estimates will be
higher than they would be with a simple random sample. We will use software developed to produce
correct variances based on stratum identifiers and primary sampling unit (PSU) codes appended to
the data files.
B. Analysis
Description of Uninsured Children. The breadth of information collected in the NSCH will
allow for a description of the characteristics of uninsured children below 400 percent of the FPL.
For example, we will examine health expenses and barriers to care, usual source of care, utilization
of care, delayed/unmet needs, and communication with providers as well as individual and family
characteristics, such as age, race/ethnicity/language, parents’ health status, and parents’ educational
attainment and employment. We will also contrast uninsured children in this income group with (1)
children with Medicaid/CHIP and (2) children with other types of coverage.
Reasons Potentially Eligible Children Do Not Participate in Medicaid and CHIP. The
main goal of this analysis is to understand why some uninsured children eligible for Medicaid and
CHIP do not participate in these programs. Following the analysis conducted in the prior evaluation,
the tabulations will distinguish three basic reasons for nonparticipation: lack of knowledge, lack of
interest, and difficulty in enrolling. These tabulations will build on the indicators included in the
2001 survey and will also utilize two new questions (drawn from the 2007 Kaiser Survey of
Children’s Health Coverage) to identify other knowledge gaps that may impede enrollment among
eligible children (Kaiser Commission on Medicaid and the Uninsured 2009). 42 A summary measure
of potential barriers to enrollment in Medicaid/CHIP will classify children into one of five mutually
exclusive categories based on the responses their parents provide (an illustrative coding sketch follows this list):
1. Lack of knowledge about the programs only – including those whose parents have
not heard of either program, as well as those who have heard of at least one program,
would enroll the child, and see the application process as easy, but are confused about
eligibility or do not know how to get more information or enroll
2. Enrollment not considered easy only – including those whose parents have heard of
at least one program, would enroll the child, are not confused about eligibility, and know
how to get more information or enroll, but do not see the application process as easy

41 Standard hot deck imputation procedures will be used for all variables except income and household size. These will be imputed using multiple imputation.

42 The two questions from the Kaiser Survey are: (1) If you wanted to get more information about [PROGRAM], do you know where to go to get that information? and (2) If you wanted to enroll [SC] in [PROGRAM], do you know how to do that?


3. Both lack of knowledge and enrollment not considered easy – including those
whose parents have not heard of either program, as well as those who have heard of at
least one program, would enroll the child but are confused about eligibility or do not
know how to get more information or enroll, as well as those whose parents do not see
the application process as easy
4. Lack of interest – including those whose parents have heard of at least one program
but said they would not enroll the child or do not know whether they would want to
enroll the child
5. No reported reason – including those whose parents have heard of at least one
program, would enroll the child, are not confused about eligibility or know how to get
more information or enroll, and see the application process as easy
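To illustrate how the five categories might be coded, the sketch below maps boolean stand-ins for the survey items to a single category per child. The variable names are our shorthand, the handling of parents who have not heard of either program follows the category descriptions above only loosely, and production coding would follow the exact skip logic of the instrument:

    def barrier_category(heard_of_program, would_enroll,
                         confused_about_eligibility, knows_how, sees_easy):
        """Assign one of five mutually exclusive barrier categories from
        parent responses; `would_enroll` is False for both "no" and
        "don't know" answers. Illustrative only."""
        knowledge_gap = (not heard_of_program
                         or confused_about_eligibility or not knows_how)
        if heard_of_program and not would_enroll:
            return "lack of interest"
        if knowledge_gap and sees_easy:
            return "lack of knowledge only"
        if knowledge_gap and not sees_easy:
            return "both lack of knowledge and enrollment not considered easy"
        if not sees_easy:
            return "enrollment not considered easy only"
        return "no reported reason"

    # A parent who has heard of CHIP, would enroll, and finds enrollment easy,
    # but is confused about eligibility:
    print(barrier_category(True, True, True, True, True))
    # -> lack of knowledge only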
Additional tabulations will explore reasons children lack coverage, reasons for lack of interest in
enrolling, past history with public coverage, and reasons for nonenrollment among those who had
tried to enroll but did not succeed. We will also use the questions on insurance coverage history and
perceptions of re-enrollment to examine issues related to retention. For example, we can identify
which children had been enrolled in Medicaid or CHIP previously and examine how long ago they
were enrolled, why they are no longer enrolled, and their parents’ perceptions of re-enrollment
processes.
Subgroup Differences. Reasons for nonparticipation will also be examined based on a variety
of characteristics, such as income, race/ethnicity/language, experience with private coverage,
experience with public coverage, parents’ employment and insurance status, and other demographic
and socioeconomic characteristics. Differences will illuminate how perceptions of Medicaid and
CHIP differ between segments of the uninsured and can also help to explain how these segments
may react differently to implementation of the Affordable Care Act (ACA) as more low-income
children and parents become eligible for public coverage.
Changes in Awareness and Perceptions over Time. The prior evaluation, which analyzed
2001 data, found that, while awareness of Medicaid coverage was high among low-income families
with uninsured children, many had not heard of the separate CHIP program in their state, especially
if that program was relatively new. In addition, the majority of families said they would enroll their
child if told he or she was eligible for coverage, but fewer than half believed their child was eligible
(Kenney, Haley, and Tebay 2004).
Over the last decade, coverage through Medicaid and CHIP has grown substantially, which
makes it likely that more families are aware of the programs at this point. For the survey questions
included in both the 2001 Low-Income Uninsured Supplement and the uninsurance section of the
2011 NSCH (such as awareness of Medicaid and CHIP, how difficult the respondent thinks it is to
enroll, whether the respondent believes the child is eligible, and whether the respondent is interested
in enrolling the child), and for families below 200 percent of the FPL, we will track changes in the
responses and composition of the low-income uninsured families over time.
Potential Eligibility for Subsidies in the ACA. New provisions in the ACA will provide
coverage subsidies for families with incomes between 138 and 400 percent of the FPL, and new
questions in this survey dealing with offers of employer coverage in the family, the extent to which
employers contribute to premiums, and firm size can help to identify which children likely will be
eligible for this subsidized coverage. To our knowledge, this survey will offer the only nationally
representative observation of these indicators among uninsured children. To complement the

analysis of reasons uninsured children do not participate in Medicaid/CHIP, we plan to examine
how many of these children may become eligible for subsidized coverage through the new health
insurance exchanges.
Consequences of Nonenrollment. We will analyze differences in access to care between a
variety of subgroups, including (1) eligible but unenrolled children, (2) children just above the
income eligibility threshold, (3) children enrolled in Medicaid/CHIP, and (4) children who are
potentially eligible for subsidies under the ACA. Both cross-tabs and regression analyses will be used
to estimate differences between these groups on a set of access measures, controlling for other
individual characteristics. Access indicators available on the survey include health expenses and
barriers to care, usual source of care, utilization of a variety of types of health services,
delayed/unmet care, and communication with providers.
C. Challenges and Potential Limitations
Survey Differences over Time. The prior evaluation included analysis of the Low-Income
Uninsured Supplement, part of the 2001 SLAITS National Survey of Children with SHCN. Much of
the content of that survey will be repeated in the 2011 NSCH; however, attempts to compare the
results from the two surveys directly would be biased because the NSCH uninsurance questions will
include families of higher incomes and because of other differences in the surveys. Other questions
are new to this instrument for 2011 and cannot be used for comparisons over time.
Lack of Precision in Measuring Poverty Status. Poverty level is an important variable in this
data source because it is used to screen respondents into the section on uninsurance and categorize
children by poverty level during analysis. However, measurement of income in the NSCH represents
a potential source of error because (1) income is measured through a single question, which is less
precise than asking about multiple sources of income; (2) income is collected as a total amount for
the entire household, rather than just the child’s family unit; and (3) income is collected for the
previous calendar year, so respondents in different phases of data collection will be asked about
different calendar years, thus possibly causing more misclassification of households in some phases
of the data collection period than others.
Random Digit Dial (RDD) Mode. SLAITS is an RDD telephone survey, a less expensive and
faster method than in-person data collection. While the RDD method has significant cost
advantages, it also has several drawbacks: (1) lower response rates, (2) exclusion of nontelephone
households (which will be addressed to some extent by adjustments to the weights based on
telephone service interruptions), and (3) exclusion of some linguistically isolated households (despite
being conducted in six languages).
Selection into Public Coverage. Comparisons of access and utilization between uninsured
children and those enrolled in Medicaid/CHIP may be biased because of selection issues. If the
reasons that families obtain coverage for their children are related to the reasons for better or worse
access (for instance, if sicker children are both more likely to be insured and have a usual source of
care), then the relationship between insurance status and access will be biased. Similarly, findings
related to geographic differences among uninsured children may be biased. In states with high levels
of participation in Medicaid/CHIP (because of success in enrolling and retaining eligible children),
the remaining uninsured may have more antipathy toward the programs or differ in other
unmeasured ways from those in states with lower levels of participation, thus encompassing a more
varied population of uninsured. We will attempt to control for these differences by using
multivariate analysis that takes into account health status, health conditions, participation in other

public programs, and other indicators of families’ preferences, but unmeasured differences may
remain.
Plans to Assess the Reliability of Estimates. To examine the sources of potential bias
outlined above, we will undertake extensive external and internal validation of the estimates. For
external validation, we will conduct benchmarking, using the March 2011 Current Population Survey
and the 2011 American Community Survey, to examine how weighted distributions compare across
these surveys. Planned comparisons include distributions of health insurance status, income, age,
race/ethnicity, and other demographic characteristics. We will also conduct internal validation of
consistency of responses across questions and within and across households.


X. REPORTING
As our experience conducting the first CHIP evaluation demonstrated, information about the
design and implementation of CHIP and its effects on children’s health and coverage status at the
state and national level is eagerly awaited. In the new policy environment since health reform was
enacted, the thirst for timely and updated information is even greater, as policymakers consider the
role CHIP might play in the context of an individual mandate, universal enrollment in Medicaid for
the lowest-income Americans, and state-based exchanges for purchasing health insurance. To
support these information needs, we will deliver timely, accurate, and policy-sensitive findings to the
target audiences at critical junctures during and at the end of the project.
In our proposal we described plans to present our results in two reports to Congress as well as
in a series of task-specific reports. Because this is a Congressionally mandated evaluation, our first
priority is to present the two reports to Congress, in 2011 and 2013. We recognize the central
importance of these integrated reports and that they must draw on information from various task-specific analyses. For the task-specific reports, however, we introduce a potential alternative that
would produce a set of topic-oriented reports that integrate findings across evaluation components.
We describe this concept further in Section C below, but the next step will be to discuss with ASPE
the relative benefits and drawbacks of the alternative and take a closer look at the feasibility with
respect to both the schedule and budget. We also recommend preparing a standalone executive
summary that provides a cross-cutting synthesis of evaluation findings at the end of the project.
All of the reports (and any other analytic products) will be reviewed by a senior member of the
Mathematica staff and professionally edited before submission to ASPE. All reports will be
submitted as a draft and revised based on ASPE feedback before a final version is submitted. Final
reports will be made 508 compliant for posting on ASPE’s website.
The remainder of this chapter highlights our approach to four types of reports: (1) reports to
Congress in 2011 and 2013, (2) case study reports, (3) other reports, and (4) a standalone executive
summary.
A. Reports to Congress
The Mathematica-Urban team has consulted with ASPE and members of the federal
workgroup for the CHIP evaluation to develop a plan for the 2011 report to Congress, which
presents a challenge because of the limited time available to collect and analyze primary data
(especially given the need for OMB clearance) and to gather state administrative data (especially
given data security provisions). As shown in Table X.1, our plan for the 2011 report is to focus on
information submitted by states through CARTS and SEDS from FFY 2006 through FFY 2009 (or
FFY 2010 if possible) to assess how the CHIP program has evolved following the implementation
of CHIPRA. Our analysis will include information from all 50 states to provide a national profile,
with more in-depth analysis of the 10 study states in order to set the stage for the CHIP evaluation.
We also will describe the design and scope of the evaluation to preview the contents of the 2013
report to Congress.


Table X.1. Data Sources for the 2011 and 2013 CHIP Reports to Congress

Data Source                                              2011 Report   2013 Report
Secondary data from CARTS and SEDS                            X             X
Individual case study reports                                               X
Cross-cutting case study synthesis report                                   X
Analysis of enrollment data                                                 X
Analysis of survey of CHIP enrollees and disenrollees                       X
Analysis of survey of program administrators                                X
Analysis of NSCH/SLAITS survey                                              X

The 2013 report to Congress will synthesize findings from all of the evaluation data sources—
both quantitative and qualitative—to tell a comprehensive story of the implementation and impacts
of CHIP. The report will focus on the 10 selected study states, although the survey of program
administrators and NSCH/SLAITS analysis will place the 10 states in a national context.
While Congress requires detailed reports, we also appreciate that members of Congress and
their staffs may have limited time to review and digest the depth of the findings presented.
Accordingly, we will prepare a concise executive summary for both reports to Congress, drawing on
the expertise of Mathematica’s editorial and design staff to lay out each chapter for maximum
readability. Up-front chapter summaries, easy to understand graphics, and call-out boxes, for
example, would improve the accessibility of these documents.
B. Case Study Reports
The 10 state case studies and accompanying cross-cutting synthesis report are central to
understanding how CHIP is being implemented, and how states are envisioning the role of CHIP in
the context of health care reform. Not only do the case studies provide an opportunity for extensive
stakeholder input into the evaluation, they also gather insights and perspectives from families of
CHIP enrollees to provide a richer understanding of how they view the program. We will use a
standardized template to summarize case study findings in each state, integrating findings from
stakeholder interviews and focus groups to describe program features, strengths, and challenges. The
cross-cutting report will summarize lessons learned and promising practices across the 10 states,
with an emphasis on outreach, enrollment, retention, and service delivery. The report will also
synthesize state perspectives on how CHIP has evolved and how CHIP is likely to be incorporated
in health insurance exchanges.
C. Other Reports
In addition to the individual and cross-cutting case study reports, the original plan calls for
individual reports for several other evaluation components: the analysis of enrollment data, the 10-state survey of enrollees and disenrollees, the survey of state program administrators, and the
analysis of NSCH/SLAITS data. Table X.2 lists these source-specific deliverables and the schedule
for completion of draft and final reports.
The source-specific reports are valuable in documenting in one place all aspects of the data
collection and analysis undertaken as well as serving as a reference that can be consulted when
questions are raised about evaluation findings that require more detailed information than would be
appropriate in a report to Congress or journal publication.


Table X.2. Other Source-Specific Reports

                                                              Scheduled Completion 43
Data Source                                                   Draft          Final
Report on Analysis of State Program Reports                   July 2011      August 2011
Report on Analysis of State Program Administrator Survey      August 2012    January 2013
Report on Analysis of NSCH/SLAITS                             March 2013     May 2013
Report on Analysis of Enrollment Data                         March 2013     May 2013
Report on Analysis of Survey of Enrollees and Disenrollees
(CHIP and Medicaid)                                           March 2013     May 2013

An important limitation of these source-specific reports, however, is that they present findings on a given research question from only one data source and, by detailing that source extensively, may be less well suited for a policy audience interested in key findings on specific topics or questions. This limitation is particularly true of a source-specific report on the 10-state survey, which has the potential to be quite lengthy
given the extensive number of topics and questions this survey can inform.
In the next few months we would like to discuss with ASPE the feasibility of an alternative
approach that would reconfigure some of the source-specific reports to support a set of standalone
topical publications that would bring together relevant information from multiple components.
Under the alternative plan, we would identify which of the source-specific reports could be
reconfigured into topic-oriented reports, identify the topics of greatest interest, and develop a plan
for producing these reports within the current schedule and budget. Although priorities are likely to
change over the next year or two, the following list illustrates the kinds of topics that could be
addressed through this alternative approach.
• Changes in State Program Design and Operations Following Implementation of
CHIPRA
• CHIP Outreach, Enrollment, and Retention: Trends and Best Practices
• Coverage and Care: How CHIP Serves Children in Ten States (a broad overview of
coverage and care in the CHIP program in the 10 study states).
• Impact of CHIP Coverage on Health Care Access, Use, Satisfaction, and Financial
Burden
• CHIP Enrollees’ Experience Getting Health Care: How Does It Measure Up? (a closer
look at CHIP enrollees’ experiences, benchmarked against other national data sources).
• Access to Dental Care in CHIP
• State Policies, Practices, and Outcomes in Coordinating Between CHIP and Other
Coverage


• Variation in Public Program Experiences Among Low-Income Children in Three States
• Looking Ahead to 2015: The Role of CHIP in Health Care Reform
Resources likely will not permit us to produce reports on all of these topics. Therefore, should
ASPE express support for this general approach, we would next prepare a memorandum detailing
the number and types of reports we expect to be feasible within the existing schedule and budget.
In preparing this memorandum, we would appreciate any initial feedback from ASPE on topics of
particular interest to HHS, including topics not reflected in the illustrative list above.
D. Standalone Executive Summary
At the end of the project, we will prepare an executive summary that synthesizes results across
all the study components, highlighting key themes and evidence related to program performance and
progress. We envision that this report will be approximately 25 pages in an easy-to-read layout with
minimal technical detail. Brief text will introduce the key themes and lessons learned, accompanied
by graphics and simple tables that provide supporting evidence. The executive summary will address
the major domains of the evaluation: (1) program design features and their influence on outcomes,
(2) enrollment and retention trends and dynamics, (3) access and utilization experiences and impacts,
(4) relationship with Medicaid and private coverage, (5) findings related to uninsured children, and
(6) implications for health reform.


REFERENCES
Aiken, Kimberly D., Gary Freed, and Matthew Davis. “When Insurance Status Is Not Static:
Insurance Transitions of Low-Income Children and Implications for Health and Health Care.”
Academic Pediatrics, vol. 4, no. 3, May 2004, pp. 237–243.
American Association for Public Opinion Research. “Final Dispositions of Case Codes and
Outcome Rates for Surveys.” Revised 2008. Available at
[http://www.aapor.org/uploads/Standard_Definitions_07_08_Final.pdf]. Accessed
December 29, 2010.
Blumberg, L. J., L. Dubay, and S. A. Norton. “Did the Medicaid Expansions for Children Displace
Private Insurance? An Analysis Using the SIPP.” Journal of Health Economics, vol. 19, no. 1,
2000, pp. 33–60.
Blumberg, S. J., E. B. Foster, A. M. Frasier, J. Satorius, B. J. Skalland, K. L. Nysse-Carris, H. M.
Morrison, S. R. Chowdhury, and K. S. O’Connor. “Design and Operation of the National
Survey of Children’s Health, 2007.” Vital and Health Statistics: Series 1—Program and
Collection Procedures. Hyattsville, MD: National Center for Health Statistics, 2009.
Campbell, D. T., and J. C. Stanley. Experimental and Quasi-Experimental Designs for Research on
Teaching. Chicago, IL: Rand McNally, 1963.
Centers for Disease Control and Prevention. “2007 NSCH Public Use File Variables.” Available at
[http://ftp.cdc.gov/pub/Health_Statistics/NCHS/slaits/nsch07/4_List_of_Variables_and_Frequency_Counts/2007_NSCH_Formatted_Frequencies.pdf].
Accessed December 23, 2010.
Centers for Medicare & Medicaid Services. “Children’s Health Insurance Program Statistical
Enrollment Data System: Instructions for Data Entry.” Baltimore, MD: CMS, July 2009.
Centers for Medicare & Medicaid Services. “CMS FY 2008 CHIP Annual Enrollment Report.”
Available at
[https://www.cms.gov/NationalCHIPPolicy/downloads/FY2008StateTotalTable012309FINAL.pdf].
Accessed December 23, 2010.
Centers for Medicare & Medicaid Services. “CHIP Annual Report Template—FFY 2009.”
Baltimore, MD: CMS, October 2008.
Chen, J., M. Kresnow, T. R. Simon, and A. Dellinger. “Injury-Prevention Counseling and Behavior
Among U.S. Children: Results from the Second Injury Control and Risk Survey.” Pediatrics,
vol. 119, 2007, pp. e958–e965.
Claxton, G., B. DiJulio, H. Whitmore, J. D. Pickreign, M. McHugh, A. Osei-Anto, and B. Finder.
“Health Benefits in 2010: Premiums Rise Modestly, Workers Pay More Toward Coverage.”
Health Affairs, vol. 29, no. 10, 2010, pp. 1942–1950.


Cohen Ross, D., M. Jarlenski, S. Artiga, and C. Marks. “A Foundation for Health Reform: Findings
of a 50-State Survey of Eligibility Rules, Enrollment and Renewal Procedures, and Cost-Sharing
Practices in Medicaid and CHIP for Children and Parents During 2009.” Menlo Park, CA:
Kaiser Commission on Medicaid and the Uninsured, December 2009.
Cohen Ross, D. “New Citizenship Documentation Option for Medicaid and CHIP Is Up and
Running.” April 20, 2010. Available at
[http://www.cbpp.org/cms/index.cfm?fa=view&id=3159]. Accessed November 23, 2010.
Cohen, J. Statistical Power Analysis for the Behavioral Sciences. Second edition. New York:
Academic Press, 1988.
Committee on Infectious Diseases. “Recommendations for Prevention and Control of Influenza in
Children, 2010–2011.” Pediatrics, vol. 126, 2010, pp. 816–826.
Congressional Budget Office. “The State Children’s Health Insurance Program.” Washington, DC:
Congressional Budget Office, May 2007.
Cox, D. R. “Regression Models and Life-Tables (with Discussion).” Journal of the Royal Statistical
Society: Series B, vol. 34, 1972, pp. 187–220.
Currie, J., and D. Thomas. “Medical Care for Children: Public Insurance, Private Insurance, and
Racial Differences in Utilization.” Journal of Human Resources, vol. 30, no. 1, 1995, pp. 135–
163.
Damiano, P. C., J. C. Willard, E. T. Momany, and M. C. Tyler. “Impact on Access and Health Status:
Second Evaluation Report to the hawk-i Clinical Advisory Committee.” Iowa City, IA: Health
Policy Research Program, Public Policy Center, University of Iowa, 2002.
Davidoff, A. J. “Insurance for Children with Special Health Care Needs: Patterns of Coverage and
Burden on Families to Provide Adequate Insurance.” Pediatrics, vol. 114, 2004, pp. 394–403.
Davidoff, A. J., A. B. Garrett, D. M. Makuc, and M. Schirmer. “Medicaid-Eligible Children Who
Don’t Enroll: Health Status, Access to Care, and Implications for Medicaid Enrollment.”
Inquiry, vol. 37, no. 2, 2000, pp. 203–218.
Davidoff, A., G. Kenney, and L. Dubay. “Effects of the State Children’s Health Insurance Program
Expansions on Children with Chronic Health Conditions.” Pediatrics, vol. 116, no. 1, 2005, pp.
e34–e42.
Decker, S. L. “Changes in Medicaid Physician Fees and Patterns of Ambulatory Care.” Inquiry, vol.
46, 2009, pp. 291–304.
DeNavas-Walt, C., B. D. Proctor, and J. Smith. “Income, Poverty, and Health Insurance Coverage in the
United States: 2008.” Current Population Reports, series P60-229. Washington, DC: U.S.
Census Bureau, 2009.
Dick, A., W. C. Brach, R. A. Allison, E. Shenkman, L. P. Shone, P. G. Szilagyi, J. D. Klein, and E.
M. Lewit. “SCHIP’s Impact in Three States: How Do the Most Vulnerable Children Fare?”
Health Affairs, vol. 23, no. 5, 2004, pp. 63–75.

Dolton, Peter J., and Wilbert van der Klaauw. “The Turnover of Teachers: A Competing Risks
Explanation.” Review of Economics and Statistics, vol. 81, no. 3, 1999, pp. 543–552.
Dubay, L., and G. M. Kenney. “Health Care Access and Use Among Low-Income Children: Who
Fares Best?” Health Affairs, vol. 20, no. 1, 2001, pp. 112–122.
Dubay, L., and G. M. Kenney. “The Impact of CHIP on Children’s Insurance Coverage: An
Analysis Using the National Survey of America’s Families.” Health Services Research, vol. 44,
no. 6, 2009, pp. 2040–2059.
Dubay, L., J. Guyer, C. Mann, and M. Odeh. “Medicaid at the Ten-Year Anniversary of SCHIP:
Looking Back and Moving Forward.” Health Affairs, vol. 26, no. 2, 2007, pp. 370–381.
Elandt-Johnson, R.C., and N. L. Johnson. Survival Models and Data Analysis. New York: John
Wiley & Sons, Inc., 1980.
Families USA. “Express Lane Eligibility: What Is It and How Does It Work?” October 2010.
Available at [http://www.familiesusa.org/assets/pdfs/Express-Lane-Eligibility.pdf]. Accessed
October 27, 2010.
First Focus. “Children in Health Reform: Comparing CHIP to the Exchange Plans.” Washington,
DC: First Focus, December 2009.
Flinn, Christopher J. “Econometric Analysis of CPS-Type Unemployment Data.” Journal of Human
Resources, vol. 21, no. 4, 1986, pp. 456–484.
Folsom, R. E., F. Potter, and S. R. Williams. “Notes on a Composite Size Measure for Self-Weighting
Samples in Multiple Domains.” Presented at the American Statistical Association Meeting,
Section on Survey Research Methods, 1987.
Georgetown Center for Children and Families. “The Children’s Health Insurance Program
Reauthorization Act of 2009.” Washington, DC: Georgetown Health Policy Institute,
March 2009.
Gruber, J., and K. Simon. “Crowd-Out 10 Years Later: Have Recent Public Insurance Expansions
Crowded Out Private Health Insurance?” Journal of Health Economics, vol. 27, no. 2, 2008,
pp. 201–217.
Haley, Jennifer M., and Genevieve Kenney. “Why Aren’t More Uninsured Children Enrolled in
Medicaid or SCHIP?” May 2001. Available at
[http://www.urban.org/publications/310217.html]. Accessed December 29, 2010.
Hansen, M. H., and W. N. Hurwitz. “The Problem of Nonresponse in Sample Surveys.” Journal of
the American Statistical Association, vol. 41, 1946, pp. 517–529.
Harlor, A. D. B., C. Bower, the Committee on Practice and Ambulatory Medicine, and the Section
on Otolaryngology Head and Neck Surgery. “Hearing Assessment in Infants and Children:
Recommendations Beyond Neonatal Screening.” Pediatrics, vol. 124, 2009, pp. 1252–1263.


Heberlein, M., T. Brooks, J. Guyer, S. Artiga, and J. Stephens. “Holding Steady, Looking Ahead:
Annual Findings of a 50-State Survey of Eligibility Rules, Enrollment and Renewal Procedures,
and Cost Sharing Practices in Medicaid and CHIP, 2010–2011.” Kaiser Commission on
Medicaid and the Uninsured, January 2011. Available at
[http://www.kff.org/medicaid/upload/8130.pdf].
Hill, I., B. Courtot, and J. Sullivan. “Ebbing and Flowing: Some Gains, Some Losses as SCHIP
Responds to Third Year of Budget Pressure.” Assessing the New Federalism Policy Brief A-68.
Washington, DC: the Urban Institute, 2005.
Howell, E. M., and C. Trenholm. “The Effect of New Insurance Coverage on the Health Status of
Low-Income Children in Santa Clara County.” Health Services Research, vol. 42, no. 2, 2007,
pp. 867–889.
Howell, E. M., D. Hughes, L. Palmer, G. Kenney, and A. Klein. “Final Report on the Evaluation of
the San Mateo County Children’s Health Initiative.” Washington, DC: the Urban Institute,
2008.
Hudson, J. L., and T. M. Selden. “Children’s Eligibility and Coverage: Recent Trends and a Look
Ahead.” Health Affairs, vol. 26, no. 5, 2007, pp. w618–w629.
Hudson, J. L., T. M. Selden, and J. S. Banthin. “The Impact of SCHIP on Insurance Coverage of
Children.” Inquiry, vol. 42, 2005, pp. 232–254.
Hughes, D., J. Angeles, and E. Stilling. “Crowd-Out in the Healthy Families Program: Does It
Exist?” San Francisco, CA: Institute for Health Policy Studies, University of California,
August 2002.
Kaiser Commission on Medicaid and the Uninsured. “Medicaid Enrollment: December 2009 Data
Snapshot.” September 2010. Available at [http://www.kff.org/medicaid/upload/8050-02.pdf].
Accessed December 30, 2010.
Kaiser Commission on Medicaid and the Uninsured. “Next Steps in Covering Uninsured Children:
Findings from the Kaiser Survey of Children’s Health Coverage.” Report #7844. Menlo Park,
CA: The Kaiser Family Foundation, January 2009.
Kaiser Family Foundation. “Monthly CHIP Enrollment.” Available at
[http://www.statehealthfacts.org/comparetable.jsp?ind=236&cat=4&sub=61&yr=1&typ=1].
Accessed October 27, 2010.
Kempe, A., B. L. Beaty, L. A. Crane, J. Stokstad, J. Barrow, S. Belman, and J. F. Steiner. “Changes in
Access, Utilization, and Quality of Care After Enrollment into a State Child Health Insurance
Plan.” Pediatrics, vol. 115, 2005, pp. 364–371.
Kenney, G. “The Impacts of the State Children’s Health Insurance Program on Children Who
Enroll: Findings from Ten States.” Health Services Research, vol. 42, no. 4, 2007, pp. 1520–
1543.
Kenney, G., A. Cook, and L. Dubay. “Progress Enrolling Children in Medicaid/CHIP: Who Is Left
and What Are the Prospects for Covering More Children?” Princeton, NJ: Robert Wood
Johnson Foundation and Washington, DC: the Urban Institute, 2009.

Kenney, G., and A. Cook. “Coverage Patterns Among SCHIP-Eligible Children and Their Parents.”
Washington, DC: the Urban Institute, February 2007.
Kenney, G., and D. Chang. “The State Children’s Health Insurance Program: Successes,
Shortcomings, and Challenges.” Health Affairs, vol. 23, no. 5, 2004, pp. 51–62.
Kenney, G., and J. Yee. “SCHIP at a Crossroads: Experiences to Date and Challenges Ahead.”
Health Affairs, vol. 26, no. 2, 2007, pp. 356–369.
Kenney, G., and S. Dorn. “Health Care Reform for Children with Public Coverage: How Can
Policymakers Maximize Gains and Prevent Harm?” Washington, DC: the Urban Institute,
2009.
Kenney, G., V. Lynch, A. Cook, and S. Phong. “Who and Where Are the Children Yet to Enroll in
Medicaid and the Children’s Health Insurance Program?” Health Affairs, vol. 29, no. 10, 2010,
pp. 1920–1929.
Kenney, Genevieve, Christopher Trenholm, Lisa Dubay, Myoung Kim, Lorenzo Moreno, Jamie
Rubenstein, Anna Sommers, Stephen Zuckerman, W. Black, F. Blavin, and G. Ko. “The
Experiences of SCHIP Enrollees and Disenrollees in 10 States: Findings from the
Congressionally Mandated SCHIP Evaluation.” Princeton NJ: Mathematica Policy Research
and Washington, DC: the Urban Institute, October 2005.
Kenney, Genevieve, James Marton, Joshua McFeeters, and Julia Costich. “Assessing Potential
Enrollment and Budgetary Effects of SCHIP Premiums: Findings from Arizona and
Kentucky.” Health Services Research, vol. 42, no. 6, part 2, 2007, pp. 2354–2372.
Kenney, Genevieve, Jennifer Haley, and Alexandra Tebay. “Awareness and Perceptions of Medicaid
and SCHIP Among Low-Income Families with Uninsured Children: Findings from 2001.”
Washington, DC: Mathematica Policy Research and the Urban Institute, 2004.
Kogan, M. D., P. W. Newacheck, S. J. Blumberg, R. M. Ghandour, G. K. Singh, B. B. Strickland,
and P. C. van Dyck. “Underinsurance Among Children in the United States.” New England
Journal of Medicine, vol. 363, 2010, pp. 841–851.
Limpa-Amara, S., A. Merrill, and M. Rosenbach. “SCHIP at 10: A Synthesis of the Evidence on
Substitution of SCHIP for Other Coverage.” Princeton, NJ: Mathematica Policy Research,
2007.
Link, M., and A. Mokdad. “Use of Prenotification Letters: An Assessment of Benefits, Costs, and
Data Quality.” Public Opinion Quarterly, vol. 69, 2005, pp. 572–587.
Lo Sasso, A. T., and T. C. Buchmueller. “The Effect of the State Children’s Health Insurance
Program on Health Insurance Coverage.” Journal of Health Economics, vol. 23, 2004, pp.
1059–1082.
Lynch, Victoria, Samantha Phong, Genevieve Kenney, and Juliana Macri. “Uninsured Children:
Who Are They and Where Do They Live? New National and State Estimates from the 2008
American Community Survey.” Washington, DC: the Urban Institute, August 2010.


McBroome, K., P. C. Damiano, and J. C. Willard. “Impact of the Iowa S-SCHIP Program on Access
to Dental Care for Adolescents.” Pediatric Dentistry, vol. 27, no. 1, 2005, pp. 47–53.
Monheit, A. C., and P. J. Cunningham. “Children Without Health Insurance.” Future of Children,
vol. 2, no. 2, 1992, pp. 154–170.
Moreno, L., and S. D. Hoag. “Covering the Uninsured Through TennCare: Does It Make a
Difference?” Health Affairs, vol. 20, no. 1, 2001, pp. 231–239.
Moreno, L., and W. Black. “Analysis of Length of SCHIP Enrollment and Time to Re-Enrollment.”
In The Experiences of SCHIP Enrollees and Disenrollees in 10 States: Findings from the
Congressionally Mandated SCHIP Evaluation: Appendixes, by C. Trenholm, G. Kenney, W.
van Kammen, L. Dubay, F. Potter, J. Rubenstein, M. Kim, A. Sommers, L. Moreno, S.
Zuckerman, B. Schiff, F. Blavin, W. Black, and G. Ko. Princeton NJ: Mathematica Policy
Research and Washington, DC: the Urban Institute, October 2005.
Newacheck, P. W., J. J. Stoddard, D. C. Hughes, and M. Pearl. “Health Insurance and Access to
Primary Care for Children.” New England Journal of Medicine, vol. 338, no. 8, 1998, pp. 513–
519.
Olson, Lynn M., Suk-fong S. Tang, and Paul W. Newacheck. “Children in the United States with
Discontinuous Health Insurance Coverage.” The New England Journal of Medicine, vol. 353,
no. 4, 2005, pp. 382–391.
Oswald, D. P., H. N. Bodurtha, C. H. Broadus, et al. “Defining Underinsurance Among Children
with Special Health Care Needs: A Virginia Sample.” Maternal and Child Health Journal, vol. 9,
2005, pp. S67–S74.
Perry, C. D., and G. M. Kenney. “Preventive Care for Children in Low-Income Families: How Well
Do Medicaid and State Children’s Health Insurance Programs Do?” Pediatrics, vol. 120, 2007,
pp. e1393–e1401.
Redline, Cleo, Julia Oliver, and Ron Fecso. “The Effect of Cover Letter Appeals and Visual Design
on Response Rates in a Government Mail Survey.” Arlington, VA: National Science
Foundation, 2004.
Rosenbach, M. L. “The Impact of Medicaid on Physician Use by Low-Income Children.” American
Journal of Public Health, vol. 79, no. 9, 1989, pp. 1220–1226.
Rosenbach, Margo, Carol Irvin, Angela Merrill, Shanna Shulman, John Czajka, Christopher
Trenholm, Susan Williams, So Sasigant Limpa-Amara, and Anna Katz. “National Evaluation of
the State Children’s Health Insurance Program: A Decade of Expanding Coverage and
Improving Access.” Cambridge, MA: Mathematica Policy Research, January 2007.
Shenkman, E., B. Vogel, J. Boyett, and R. Naff. “Enrollment and Disenrollment in a Title XXI
Program.” Health Care Financing Review, vol. 23, no. 3, 2002, pp. 47–63.
Shone, L. P., P. M. Lantz, A. W. Dick, M. E. Chernew, and P. G. Szilagyi. “Crowd-Out in the State
Children’s Health Insurance Program (SCHIP): Incidence, Enrollee Characteristics and
Experiences, and Potential Impact on New York’s SCHIP.” Health Services Research, vol. 43,
no. 1, part 2, 2008, pp. 419–434.

Simpson, L., G. Fairbrother, S. Hale, and C. Homer. “Reauthorizing SCHIP: Opportunities for
Promoting Effective Health Coverage and High-Quality Care for Children and Adolescents.”
New York: The Commonwealth Fund, 2007.
Singleton, R. A., B. C. Straits, and M. M. Straits. Approaches to Social Research. New York: Oxford
University Press, 1993.
Smith, V., D. Roberts, D. Rousseau, and T. Schwartz. “CHIP Enrollment, June 2009: An Update on
Current Enrollment and Policy Directions.” April 2010. Available at
[http://www.kff.org/medicaid/upload/7642-04.pdf]. Accessed December 30, 2010.
Smith, Vernon, David Rousseau, Caryn Marks, and Robin Rudowitz. “SCHIP Enrollment in June
2007: An Update on Current Enrollment and SCHIP Policy Directions.” Washington, DC:
Henry J. Kaiser Family Foundation, January 2008.
Sommers, A., S. Zuckerman, and L. Dubay. “The Experiences of SCHIP Enrollees and Disenrollees
in 10 States: Findings from the Congressionally Mandated SCHIP Evaluation.” Princeton, NJ:
Mathematica Policy Research and Washington, DC: the Urban Institute, October 31, 2005.
Sommers, A., S. Zuckerman, L. Dubay, and G. Kenney. “Substitution of SCHIP for Private
Coverage: Results from a 2002 Evaluation in Ten States.” Health Affairs, vol. 26, no. 2, 2007,
pp. 529–537.
Sommers, B. D. “The Impact of Program Structure on Children’s Disenrollment from Medicaid and
SCHIP.” Health Affairs, vol. 24, no. 6, 2005, pp. 1611–1618.
Sommers, B. D. “Why Millions of Children Eligible for Medicaid and SCHIP Are Uninsured: Poor
Retention Versus Poor Take-Up.” Health Affairs, vol. 26, no. 5, 2007, pp. w560–
w567.
Stoddard, J. J., R. F. St. Peter, and P. W. Newacheck. “Health Insurance Status and Ambulatory Care
for Children.” New England Journal of Medicine, vol. 330, no. 20, 1994, pp. 1421–1425.
Szilagyi, P. G., A. W. Dick, J. D. Klein, L. P. Shone, J. Zwanziger, A. Bajorska, and H. L. Yoos.
“Improved Asthma Care After Enrollment in the State Children’s Health Insurance Program in
New York.” Pediatrics, vol. 117, 2006, pp. 486–496.
Szilagyi, P. G., A. W. Dick, J. D. Klein, L. P. Shone, J. Zwanziger, and T. McInerny. “Improved
Access and Quality of Care After Enrollment in the New York State Children’s Health
Insurance Program (SCHIP).” Pediatrics, vol. 113, no. 5, 2004, pp. e395–e404.
Trenholm, C., G. Kenney, L. Dubay, M. Kim, L. Moreno, J. Rubenstein, A. Sommers, S.
Zuckerman, W. Black, F. Blavin, and G. Ko. “The Experiences of SCHIP Enrollees and
Disenrollees in 10 States: Findings from the Congressionally Mandated SCHIP Evaluation:
Appendixes.” Princeton, NJ: Mathematica Policy Research and the Urban Institute, October
2005.
Trenholm, Christopher A., James G. Mabli, and Ander Wilson. “SCHIP Children: How Long Do
They Stay and Where Do They Go?” Princeton, NJ: Robert Wood Johnson Foundation,
January 2009.
U.S. Census Bureau. “American Fact Finder.” Geographic data. Available at
[http://www.census.gov/]. Accessed October 27, 2010.

U.S. Department of Health and Human Services. “Children’s Health Insurance Program
Reauthorization Act: One Year Later, Connecting Kids to Coverage.” Washington, DC: HHS,
2010.
U.S. Department of Health and Human Services. “HHS News Release: States Get Bonuses for
Boosting Enrollment in Children’s Health Coverage.” December 17, 2009. Available at
[http://www.hhs.gov/news/press/2009pres/12/20091217a.html]. Accessed November 23,
2010.
U.S. Government Accountability Office. “CMS Should Improve Efforts to Assess Whether SCHIP
Is Substituting for Private Insurance.” GAO-09-252. Washington, DC: GAO, February 2009.
U.S. Government Accountability Office. “State Children’s Health Insurance Program.” Report to
the chairman, Committee on Finance, U.S. Senate, February 2009.
U.S. Preventive Services Task Force. “Screening for Obesity in Children and Adolescents: U.S.
Preventive Services Task Force Recommendation Statement.” Pediatrics, vol. 125, 2010, pp.
361–367.
U.S. Preventive Services Task Force. “Screening for Visual Impairment in Children Younger Than
Age 5 Years.” Rockville, MD: Agency for Healthcare Research and Quality, May 2004.
Urban Institute and Kaiser Commission on Medicaid and the Uninsured. “Population Distribution
of Children by Race/Ethnicity, States (2008–2009), U.S. (2009).” Available at
[http://www.statehealthfacts.kff.org/comparetable.jsp?ind=7&cat=1&sub=1&yr=199&typ=1].
Accessed November 23, 2010.
Van den Berg, G. J., A. G. C. van Lomwel, and J. C. van Ours. “Nonparametric Estimation of a
Dependent Competing Risks Model for Unemployment Durations.” Empirical Economics,
vol. 34, 2008, pp. 477–491.
Vistnes, J., A. Zawacki, K. Simon, and A. Taylor. “Declines in Employer-Sponsored Coverage
Between 2000–2008: Offers, Take-Up, Premium Contributions, and Dependent Options.”
September 2010. Available at
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1688322]. Accessed December 30,
2010.
Ward, A. “The Concept of Underinsurance: a General Typology.” Journal of Medicine and
Philosophy, vol. 31, no. 5, 2006, pp. 499–531.
Wolbers, Marcel, Michael T. Koller, Jacqueline C. Witteman, and Ewout W. Steyerberg. “Prognostic
Models with Competing Risks: Methods and Application to Coronary Risk Prediction.”
Epidemiology, vol. 20, no. 4, 2009, pp. 555–561.


Wooldridge, Judith, Genevieve Kenney, Christopher A. Trenholm, Lisa C. Dubay, Ian T. Hill,
Myoung Kim, Lorenzo Moreno, Anna S. Sommers, and Stephen Zuckerman. “Congressionally
Mandated Evaluation of the State Children’s Health Insurance Program.” Princeton, NJ:
Mathematica Policy Research and Washington, DC: the Urban Institute, October 26, 2005.


APPENDIX A
ENABLING LEGISLATION FOR THE ORIGINAL AND CURRENT EVALUATION

APPENDIX B
STATE SELECTION MEMO

www.mathematica-mpr.com

Improving public well-being by conducting high-quality, objective research and surveys
Princeton, NJ ■ Ann Arbor, MI ■ Cambridge, MA ■ Chicago, IL ■ Oakland, CA ■ Washington, DC
Mathematica® is a registered trademark of Mathematica Policy Research

