OMB: 0990-0384


Children’s Health Insurance Program Reauthorization Act (CHIPRA) 10-State Evaluation


Supporting Statement Part B:

Data Collection Procedures

and Statistical Methods

Final

October 18, 2011






CONTENTS

BACKGROUND

B. Supporting Statement

1. Respondent Universe and Sampling Methods

2. Procedures for the Collection of Information

3. Methods to Maximize Response Rates and Deal with Nonresponse

4. Tests of Procedures or Methods to be Undertaken

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data



ATTACHMENT A: CHIPRA 10-STATE EVALUATION: EVALUATION DESIGN REPORT

ATTACHMENT B1: CHIP SURVEY OF ENROLLEES AND DISENROLLEES

ATTACHMENT B2: CHIP DATA ELEMENTS AND QUESTION SOURCES

ATTACHMENT C1: CASE STUDIES – STATE OFFICIALS

ATTACHMENT C2: CASE STUDIES – COMMUNITY ENROLLMENT AGENCIES

ATTACHMENT C3: CASE STUDIES – HEALTH CARE PROVIDERS

ATTACHMENT C4: CASE STUDIES – MANAGED CARE AND HEALTH PLANS

ATTACHMENT D1: FOCUS GROUP MODERATOR GUIDE FOR PARENTS WITH EMPLOYER INSURANCE

ATTACHMENT D2: FOCUS GROUP MODERATOR GUIDE FOR PARENTS OF CHIP ENROLLEES

ATTACHMENT D3: FOCUS GROUP MODERATOR GUIDE FOR PARENTS OF CHIP DISENROLLEES

ATTACHMENT D4: FOCUS GROUP MODERATOR GUIDE FOR PARENTS OF ELIGIBLE BUT UNINSURED CHILDREN

ATTACHMENTS E-G: OMITTED FROM PART B

ATTACHMENT H: RESPONDENT MATERIALS

ATTACHMENT I1: FOCUS GROUP CONSENT FORM FOR PARENTS OF CHIP ENROLLEES

ATTACHMENT I2: FOCUS GROUP CONSENT FORM FOR PARENTS OF CHIP DISENROLLEES

ATTACHMENT I3: FOCUS GROUP CONSENT FORM FOR PARENTS OF ELIGIBLE BUT UNINSURED CHILDREN

ATTACHMENT I4: FOCUS GROUP CONSENT FORM FOR PARENTS OF CHILDREN COVERED BY EMPLOYER INSURANCE

ATTACHMENT J: PRETEST MEMO

BACKGROUND

The Children’s Health Insurance Program Reauthorization Act (CHIPRA) 10-State Evaluation will provide the federal government with new and detailed insights into how the Children’s Health Insurance Program (CHIP) has evolved since its early years, what impacts on children’s coverage and access to care have occurred, and what new issues have arisen as a result of policy changes related to CHIPRA and the Patient Protection and Affordable Care Act (ACA) of 2010 (PL 111-148). The evaluation will address numerous key questions regarding the structure and impact of CHIP and Medicaid programs for children, including (1) to what extent CHIP has reduced uninsurance among children, and how this has been affected by expansions of the program to cover more children with family incomes above 200 percent of the federal poverty level; (2) how enrollment and disenrollment trends have changed over time in CHIP, and what economic and policy factors appear to be driving those trends (such as reductions in access to employer coverage as a result of the economic downturn); and (3) what outreach, enrollment, and retention policies are most successful at increasing enrollment and retention in Medicaid and CHIP, particularly for children of racial and ethnic minorities and children with special health care needs. To answer these and other questions, the Assistant Secretary for Planning and Evaluation (ASPE) will draw on three new primary data collection efforts: a survey of selected CHIP enrollees and disenrollees in 10 states (and Medicaid enrollees and disenrollees in 3 of these states), qualitative case studies in the 10 states, and a survey of State Program Administrators in all 50 States and the District of Columbia. ASPE seeks a three-year clearance for the first two information collections at this time. Each collection will take place once.

Survey of enrollees and disenrollees. The parent or primary caregiver of CHIP/Medicaid-eligible children will be interviewed for this study. Children will be selected from all eligible children in the 10 states’ CHIP and Medicaid administrative files. Three groups of children will be eligible for the study: new CHIP/Medicaid enrollees (child enrolled in CHIP/Medicaid at least two months and less than three months at the time of sample selection), established CHIP/Medicaid enrollees (child enrolled in CHIP/Medicaid 12 or more months at the time of sample selection), and recent CHIP/Medicaid disenrollees (child disenrolled from CHIP/Medicaid at least two months but less than three months at the time of sample selection). The sample will be divided into two parts: a multi-stage, clustered sample that will be interviewed by telephone (using computer-assisted telephone interviewing, or CATI) with face-to-face follow-up of nonresponding households and households that cannot be located through the central office; and a stratified, unclustered random sample that will be interviewed by telephone only. While the clustered design is more costly than the unclustered design, it results in higher response rates and improved population coverage. Without this design, children in non-telephone households (often subgroups such as Hispanics, Native Americans, and African Americans) would not be represented in the study. The survey will collect data on application and enrollment; access, use, content of care, and satisfaction; program retention, renewal, and disenrollment; health insurance coverage; and child and family characteristics, including child health.

Case studies. The qualitative case studies in the 10 states will include site visit interviews with CHIP and Medicaid administrators and other public and child health stakeholders (key informant interviews). In addition, researchers will conduct focus groups in the 10 states; participants will include parents of (1) CHIP enrollees, (2) CHIP disenrollees, (3) CHIP-eligible but uninsured children, and (4) children covered by employer-sponsored insurance. The case studies will characterize program implementation and impacts, implications of the Affordable Care Act, and trends in enrollment, retention, access, and utilization.

Attachment A is the Final Design Report submitted to ASPE by the contractors on April 21, 2011. As per ASPE’s agreement with OMB (based on the December 9, 2010 OMB Guidance), the pages referenced below may be found in the Design Report. Because the Design Report was written without reference to the OMB questions, there is some page overlap.

B. Supporting Statement

1. Respondent Universe and Sampling Methods

a. The Survey of Enrollees and Disenrollees

The design for the Survey of Enrollees and Disenrollees (the Survey) follows that used in the prior CHIP evaluation that was conducted in 2001 – 2003. First, ten states were selected in a highly structured manner and recruited to participate in the 10-state evaluation and, second, from within each state’s CHIP enrollment lists, children were selected and interviews conducted with their parents or guardians.

State Selection. The design calls for first selecting and recruiting 10 states that meet three stages of selection criteria. For Stage 1, the states had to meet three federal authorizing legislation requirements: use diverse approaches to providing child health assistance, represent various geographic areas (including a mixture of rural and urban areas), and contain a significant portion of uninsured children. For Stage 2, ASPE included additional policy-relevant selection criteria to ensure the states would represent diverse program features. Stage 3 specified that the state’s data system(s) could support the evaluation’s needs for building a sample frame, and that the state was willing to participate. The ten states selected and agreeing to participate are: Alabama, California, Florida, Louisiana, Michigan, New York, Ohio, Texas, Utah, and Virginia. (Wisconsin was selected initially but was the one state that did not consent to participate; it was replaced by Michigan.) The 10 states altogether include roughly 57 percent of children enrolled in CHIP as of June 2009 (Kaiser Family Foundation 2010), and they include an estimated 2.8 million children eligible for but not enrolled in CHIP or Medicaid. Half of the states selected are among the 10 largest CHIP programs in the nation.

Each of the ten states has agreed to participate and provide the data needed to construct the sampling frame for the CHIP survey. Three states (California, Florida, and Texas) will also provide complementary data for a survey of the Medicaid population. Each state is signing a memorandum of understanding that outlines the terms of participation and a more detailed data sharing agreement governing the use of administrative data for survey sampling and analysis.

  • Further detail on state selection and the process of securing state participation is in the design report for the project, Attachment A, pages 17 – 25.

  • A supplemental pdf document titled “CHIPRA-10 State Selection_For OMB” contains two memos that describe the criteria and process employed for selecting CHIP and Medicaid survey States.

Survey Sample. The design calls for conducting a survey of the parents or guardians of children currently or previously covered by CHIP in these 10 states (and by Medicaid in CA, FL, and TX). The respondent universe will be the parents or guardians of children enrolled in CHIP or Medicaid during a designated time period. Generally, this is a population of children with family incomes under 200 percent of the federal poverty level (FPL). The survey design calls for selecting children in each state into three sample domains based on these definitions:

  • New enrollees include children who have been enrolled in CHIP for at least 60 days (two months), but less than 90 days (three months), at the time of sampling. In addition, the children must have been disenrolled for more than one month prior to their current new enrollment.

  • Established enrollees include children who have been enrolled for twelve months or more at the time of sampling.

  • Recent disenrollees include children who have been disenrolled from CHIP for at least 60 days (two months), but less than 90 days (three months), at the time of sampling. In addition, the children must have been previously enrolled for at least two months prior to their current disenrollment.

In addition, to be part of the target population, an individual child must be age 18 years or younger, in the case of the two enrollee domains, or age 19 or younger in the recent disenrollee domain. (Including 19-year-olds in the sample allows us to capture children who lost eligibility because of age restrictions.) ASPE also requires that the individual live in the selected state at the time of sampling. The CHIP survey information collection will provide a detailed description of the characteristics of these children, their movement in and out of the programs, and their experiences accessing and using health care.

The definition for each of the three sample domains is the same as the one used in the prior evaluation and, in each case, reflects a balance of sometimes competing considerations. For example, the new enrollee definition balances the need for a period sufficiently long to reflect a true period of coverage and a period sufficiently short for the respondent to successfully recall their experience prior to enrolling. In addition, by including new enrollees who return to CHIP after some kind of gap in coverage, we appropriately reflect the cross-section of children who enter CHIP, some of whom will have had prior public coverage experience and some of whom will have not. Likewise, the disenrollee definition balances having a period of disenrollment sufficiently long for a respondent to describe their coverage status after leaving CHIP and sufficiently short to successfully locate and interview a sizeable fraction of a domain that may be highly mobile. In addition, by including children who had coverage for as little as two months, the disenrollee definition reflects the fact that some children leave CHIP after quite short periods of coverage, though the vast majority remain enrolled for some time. As with the prior evaluation, exceptions may need to be applied to these definitions. For example, in the prior evaluation, the adoption of presumptive eligibility in New York led us to extend the definition of new enrollment from two to three months for cases sampled with a presumptive eligibility indicator.
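
To make these domain definitions concrete, the sketch below shows one way the classification rules might be expressed in code. It is illustrative only: the function name and data layout are assumptions, the day thresholds follow the definitions above, and state-specific exceptions (such as the presumptive eligibility adjustment just described) are not handled.

    from datetime import date

    def classify_domain(spans, sampling_date, age):
        """Illustrative sketch of the sample-domain rules described above.

        spans: enrollment spans as (start, end) date pairs, most recent last;
               end is None for an open (current) enrollment.
        Returns 'new', 'established', 'recent_disenrollee', or None.
        """
        if not spans:
            return None
        start, end = spans[-1]
        currently_enrolled = end is None or end >= sampling_date

        if currently_enrolled and age <= 18:
            days_enrolled = (sampling_date - start).days
            # Established enrollee: enrolled 12 or more months at sampling.
            if days_enrolled >= 365:
                return "established"
            # New enrollee: enrolled at least 60 but fewer than 90 days, with
            # any prior spell ending more than one month before this one.
            if 60 <= days_enrolled < 90:
                prior_end = spans[-2][1] if len(spans) > 1 else None
                if prior_end is None or (start - prior_end).days > 30:
                    return "new"
        elif not currently_enrolled and age <= 19:
            days_out = (sampling_date - end).days
            spell_length = (end - start).days
            # Recent disenrollee: out 60-89 days, previously enrolled >= 2 months.
            if 60 <= days_out < 90 and spell_length >= 60:
                return "recent_disenrollee"
        return None

    # e.g., a child enrolled 70 days at sampling with no prior coverage:
    print(classify_domain([(date(2011, 8, 1), None)], date(2011, 10, 10), age=7))  # 'new'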

One complication with the sample design that arose in the prior evaluation is that a sizeable fraction of interviews were completed with respondents who reported start and end dates of coverage that were not at all close to what was shown in the administrative records. For two of the sample domains, new enrollees and disenrollees, such confusion greatly limits the value of the information provided. For example, when respondents for new enrollees fail to identify the child’s new enrollment, they are unable to provide meaningful information about their pre-CHIP coverage or pre-CHIP experiences accessing and utilizing care. Likewise, when respondents for disenrollees fail to identify the child’s disenrollment, they are unable to provide meaningful information on the factors that contributed to their disenrollment or their coverage or other experiences since they left the program. For these reasons, we plan to administer a shortened version of the survey for new enrollees and recent disenrollees when the interviewer determines that the period of reported coverage differs substantially from what the administrative records show. For established enrollees, this disconnect is less problematic, as the data reported on their recent access, use, and other health care experiences still reflect their true period of coverage no matter what their self-reported coverage status. Indeed, dropping cases that show a disconnect between self-reported and actual (administrative) coverage periods would risk biasing the findings for established enrollees, as the sample ultimately interviewed in this domain would not accurately reflect the outcomes of the population that was actually covered by the program.

The disconnect between the self-reported CHIP coverage period and the period shown on the administrative files arose most often for two groups in the prior evaluation. The first is new CHIP enrollees who had either experienced a short gap in CHIP coverage or transferred from Medicaid, who often reported a period of CHIP coverage far longer than indicated by the administrative records (presumably because they never recognized the transition to CHIP). The second is CHIP disenrollees who subsequently either returned to CHIP after a short gap in coverage or transferred to Medicaid, who often reported never having disenrolled (again presumably because they never recognized the transition from CHIP). To minimize the need to drop such cases at the time of the interview, we anticipate greatly reducing the proportion of these cases that are actually sampled for the study in each state (and perhaps omitting them altogether). However, before we can commit to this sampling approach and apply a specific decision rule for the cases to which it applies, we need to acquire each state’s data and be sure we can identify these cases successfully. (This is particularly true of the cases reflecting transitions to and from Medicaid, which will require linking data from two entirely different eligibility systems.)

We define enrollment for each sample domain based on when we expect the parent would consider the child enrolled—a date that might differ from that on which the state actually began paying for services. For example, some states retroactively enroll children as of the first day of the month in which the parent applied for CHIP, but they might not determine the child to be eligible until one or more months after the application was received. As a result, the date services began to be covered by the state might be a month or more earlier than the date the parent is notified of the child’s enrollment. In this instance, we would define enrollment from the date associated with the notification of coverage to the parent, as opposed to application date or the retroactive coverage date.

The sampling frame for a given domain is the population of enrollees and disenrollees in each state meeting the definition of the target population summarized above. This frame will be constructed for each state using data from its administrative files. Constructing the sample frame as quickly as possible will be essential for this survey, particularly with respect to the populations of new enrollees and recent disenrollees, for whom risks of recall bias and survey non-response increase with time.1 One key step in assuring timely frame construction is receiving accurate administrative data from the states on a regular basis. In our discussions with the program and technical staff in each state, we will request delivery of data within two weeks of the specified data extract cutoff date.

Using information about the children and families that is contained on the sample frame, we will have the option of oversampling particular groups within each sample domain.2 Examples of such information include the income level or eligibility classification of the household, the age of the child, and the child’s prior coverage, ideally in both CHIP and Medicaid. While oversampling can reduce precision for the full sample (due to design effects from weighting), it can have benefits that may outweigh these costs. First, it can yield a larger sample for relatively small but important subgroups, such as children in upper-income households that may be subject to relatively high co-payments. Second, its converse (undersampling) can result in fewer cases that offer marginal analytic value to the study, such as new CHIP enrollees with recent public coverage experience who (as noted above) will often not even be reported as having newly enrolled.

  • A supplemental document labeled “CHIPRA-10 Sampling Memo_For OMB” contains a memorandum that describes the sampling approach in further detail, including plans for oversampling of children in higher-income categories in two States.

Table B1, below, shows the expected sample sizes for CHIP and for Medicaid by sample domain and by clustered versus unclustered strata.

Table B1. Universe of Sample Members

Universe of Sample Members           Sample Numbers

CHIP                                         22,222
  Unclustered                                11,778
    Recent Enrollee                           3,612
    Established Enrollee                      4,554
    Recent Disenrollee                        3,612
  Clustered                                  10,444
    Recent Enrollee                           3,203
    Established Enrollee                      4,038
    Recent Disenrollee                        3,203
Medicaid                                      6,667
  Unclustered                                 3,534
    Recent Enrollee                           1,082
    Established Enrollee                      1,370
    Recent Disenrollee                        1,082
  Clustered                                   3,133
    Recent Enrollee                             960
    Established Enrollee                      1,213
    Recent Disenrollee                          960


Table B2, below, shows the response rates from the 2001 CHIP survey by state, sample domain, and clustered versus unclustered sample. The sample allocations varied widely across states and domains, and ASPE expects the same kind of variation in the 2011 sample allocations.



Table B2. State-Level SCHIP Counts and Response Rates from CHIP 2001

State  Sample       Domain                Full Sample  Eligible Sample  Complete Interviews  Design-Specific Weighted
                                          (Count)      (Count)          (Count)              Response Rate (Percent)

CA     Unclustered  Recent Enrollee       402          343              303                  88.9
       Unclustered  Established Enrollee  400          342              279                  82.7
       Unclustered  Recent Disenrollee    586          491              346                  73.4
       Clustered    Recent Enrollee       407          379              296                  75.9
       Clustered    Established Enrollee  393          364              282                  75.6
       Clustered    Recent Disenrollee    425          384              260                  64.8
CO     Unclustered  Recent Enrollee       455          394              328                  84.7
       Unclustered  Established Enrollee  461          384              318                  84.1
       Unclustered  Recent Disenrollee    445          344              265                  82.9
       Clustered    Recent Enrollee       452          452              316                  71.3
       Clustered    Established Enrollee  466          466              300                  66.9
       Clustered    Recent Disenrollee    466          466              319                  76.6
FL     Unclustered  Recent Enrollee       457          374              317                  86.0
       Unclustered  Established Enrollee  440          357              303                  85.2
       Unclustered  Recent Disenrollee    551          442              301                  72.3
       Clustered    Recent Enrollee       405          363              284                  77.0
       Clustered    Established Enrollee  418          374              292                  74.7
       Clustered    Recent Disenrollee    458          458              269                  63.9
IL     Unclustered  Recent Enrollee       525          413              291                  72.6
       Unclustered  Established Enrollee  527          432              305                  75.1
       Unclustered  Recent Disenrollee    505          389              251                  70.4
       Clustered    Recent Enrollee       447          447              283                  65.3
       Clustered    Established Enrollee  418          418              267                  67.5
       Clustered    Recent Disenrollee    504          504              280                  60.1
LA     Unclustered  Recent Enrollee       432          345              282                  83.7
       Unclustered  Established Enrollee  429          343              278                  83.9
       Unclustered  Recent Disenrollee    501          400              279                  76.8
       Clustered    Recent Enrollee       403          403              309                  78.7
       Clustered    Established Enrollee  399          399              398                  77.7
       Clustered    Recent Disenrollee    453          453              286                  72.3
MO     Unclustered  Recent Enrollee       507          390              267                  69.9
       Unclustered  Established Enrollee  508          373              267                  73.8
       Unclustered  Recent Disenrollee    551          415              251                  64.2
       Clustered    Recent Enrollee       433          433              283                  67.6
       Clustered    Established Enrollee  407          407              295                  74.4
       Clustered    Recent Disenrollee    483          483              282                  63.7
NJ     Unclustered  Recent Enrollee       911          795              583                  71.3
       Unclustered  Established Enrollee  881          782              569                  70.7
       Unclustered  Recent Disenrollee    998          998              536                  58.3
NY     Unclustered  Recent Enrollee       542          458              321                  72.1
       Unclustered  Established Enrollee  532          446              317                  71.7
       Unclustered  Recent Disenrollee    533          417              295                  76.3
       Clustered    Recent Enrollee       409          373              260                  68.9
       Clustered    Established Enrollee  416          372              259                  69.5
       Clustered    Recent Disenrollee    432          388              253                  64.9
NC     Unclustered  Recent Enrollee       518          377              280                  75.4
       Unclustered  Established Enrollee  522          402              317                  82.5
       Unclustered  Recent Disenrollee    631          430              332                  80.6
       Clustered    Recent Enrollee       398          348              262                  68.9
       Clustered    Established Enrollee  400          349              286                  76.3
       Clustered    Recent Disenrollee    416          372              230                  58.3
TX     Unclustered  Recent Enrollee       410          317              256                  71.7
       Unclustered  Established Enrollee  386          300              263                  88.5
       Unclustered  Recent Disenrollee    565          448              293                  68.5
       Clustered    Recent Enrollee       454          402              336                  79.9
       Clustered    Established Enrollee  447          401              332                  79.0
       Clustered    Recent Disenrollee    451          385              284                  72.3

Note: New Jersey fielded an unclustered sample only.


Response rates. As noted above, the 10 states together account for roughly 57 percent of children enrolled in CHIP nationwide as of June 2009 (Kaiser Family Foundation 2010), as well as an estimated 2.8 million children eligible for but not enrolled in CHIP or Medicaid; half of the states are among the 10 largest CHIP programs in the nation. Based on our experience in previous studies with a similar sample design, we expect that about 85 percent of the sample in both the unclustered and clustered samples will be locatable by the survey operations center, and that about 75 percent of these located cases will complete the interview. For those in the clustered sample and among the 15 percent not able to be located by the central office,3 based on past experience we expect 55 percent to be ultimately located and to respond to the interview after field follow-up. Combining the various sample components, the cumulative completion rate for the entire sample would be about 72 percent (75 percent for the central-office-located cases and 55 percent for those initially unlocated but pursued vigorously in the field). If the sample had been limited to telephone interviews, the cumulative completion rate would have been about 64 percent. By pursuing in the field a random subsample of those not located by the survey operations center (those randomly selected into the clustered sample), we expect to add more than 860 completed interviews in the CHIP component and more than 250 interviews in the Medicaid component, increasing the overall cumulative completion rate by about eight percentage points (from about 64 percent to about 72 percent).
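
As a quick check of the arithmetic behind these projections, the following sketch reproduces the cumulative completion rates from the assumed location and cooperation rates cited above.

    # Assumed rates from the text above.
    p_located = 0.85   # share of sample locatable by the survey operations center
    p_phone = 0.75     # completion rate among centrally located cases
    p_field = 0.55     # completion rate among initially unlocated cases, after field follow-up

    phone_only = p_located * p_phone                      # ~0.64
    with_field = phone_only + (1 - p_located) * p_field   # ~0.72
    print(f"telephone only: {phone_only:.2f}; with field follow-up: {with_field:.2f}")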

Although ASPE’s goal is to reach the highest response rate possible, we expect some nonresponse, and the level of nonresponse may vary for different subgroups. We expect that this variation can be corrected through weighting adjustments after the data are collected, as is customary. However, if the differences are substantial, we may need to increase our efforts to convert refusing sample members into respondents and lengthen the data collection period to obtain enough respondents.

To address the challenge of representativeness, we will examine the overall response rate at regular intervals during the data collection period, as well as response by key sample characteristics, such as state, clustered versus unclustered sample status, and enrollment status. For example, we may find that disenrollees respond at lower rates than new and established enrollees. We will compare the distributions of respondents to those in the sample population. If there are large differences in response rates by these key characteristics, we will focus our resources on increasing response among those groups of sample members with lower rates. In this example, we could increase the salience of participating in the survey by tailoring the request to participate.

We will calculate weighted and unweighted response rates for this survey following the procedures outlined by the American Association for Public Opinion Research (AAPOR, 2008).4 When combining the unclustered and clustered samples, only the weighted response rates will be calculated, as the sampling weights and composite adjustments properly account for overlap between the two samples and for nonresponse subsampling as described in the weighting section.
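
As a simplified illustration of the weighted calculation (AAPOR defines several response rate variants, and the production computation will also handle cases of unknown eligibility, which are omitted here), a weighted response rate is the weighted share of eligible cases that complete the interview:

    import numpy as np

    def weighted_response_rate(weights, eligible, completed):
        """Simplified weighted response rate: weighted completes / weighted eligibles."""
        w = np.asarray(weights, dtype=float)
        e = np.asarray(eligible, dtype=bool)
        c = np.asarray(completed, dtype=bool)
        return w[e & c].sum() / w[e].sum() * 100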

  • Additional detail on weighting procedures is in the project’s design report, see Attachment A, pages 52 – 53.





b. Case Studies: Site Visits with Key Informants

The site visits are a key component of the case studies of the selected states. ASPE will conduct site visits in the 10 states participating in the CHIP evaluation for the purpose of interviewing key informants at both the state and local levels. First, researchers will interview state-level key informants: CHIP and Medicaid administrators, public health and maternal and child health officials, governor’s health policy staff, state legislators and their staffs, family and child advocates, vendors under contract with the state, and providers representing such groups as the American Academy of Pediatrics and the state Primary Care Associations. Next, they will interview local-level key informants: county social services administrators, front-line eligibility workers, local public health officials, managed care organizations, health insurance plans, representatives of the business and employer communities, local clinic- and office-based pediatric providers, and community-based organizations involved in outreach.

During the week-long visits, key informant interviews will be conducted with approximately 30 individuals in each of the 10 states (300 interviews total). These key informant interviews will allow us to develop an in-depth understanding of CHIP implementation over the past decade and the effects of recent policy changes. We will inquire about which program design features have and have not worked, persistent challenges states have faced, and opportunities upon which they have capitalized. We will consider the implications of CHIPRA and health reform, and the anticipated benefits and challenges associated with those developments. Such qualitative findings provide a critical complement to the quantitative components of this evaluation, allowing for a more nuanced understanding of state experiences as well as the opportunity to explore the strengths, weaknesses, and effects of varied state contexts and alternative approaches to ensuring children’s coverage.

  • More detail on the content of the site visits can be found in the design report, Attachment A, pages 30 – 35. The specific procedures for identifying key informants are described on page 34 of this report.

c. Focus Groups

Also as part of the case studies, three focus groups will be conducted in each of the 10 states with families touched by state CHIP and Medicaid programs, for a total of 30 focus groups across the 10 case study states. The focus groups will have an average of 8 participants, for a total of 240 participants, and will be conducted during the same week that the key informant interviews take place. We expect the focus group findings to enrich the other evaluation components in several ways, while providing intrinsically valuable information regarding state and local context. First, they will provide valuable detail about the concerns and experiences of families affected by CHIP and Medicaid policies and program practices. Second, insights from the focus groups will highlight particular focal areas for our analysis of site visit findings. Third, and perhaps most important, focus groups will bring to our evaluation the voices of parents and other family members vividly describing their experiences with CHIP, while also enhancing our understanding of concepts and issues identified through other components of the evaluation.

Focus Group Sample Selection. Overall, we will hold focus groups with parents of children who represent the following categories:

  • Enrolled in CHIP or Medicaid

  • Disenrolled from CHIP/Medicaid

  • Eligible for CHIP or Medicaid but uninsured

  • Low income and covered by employer-sponsored insurance (ESI)

Recruitment of focus group participants. Eight participants is the optimal number for a focus group, but to ensure adequate participation, we will recruit approximately 12 individuals per group. Recruitment strategies will vary based on the different types of groups proposed, but we plan to enlist the help of community-based organizations and providers, child health advocates, policy groups, and/or health plans to gain access to potential participants in many of our groups. For other populations, we will rely on enrollment and disenrollment files of appropriate state or county social services agencies. We describe our recruitment approaches in more detail below. All recruitment materials will be customized to the site that is helping us recruit.

One proven approach to recruitment that we have often used to good effect enlists the help of local providers and community-based agencies as “partners” in recruitment. Specifically, this approach entails developing a series of recruitment materials (for example, flyers announcing the group, recruitment “scripts” that describe the purpose and process of the focus group, and sign-up sheets), and asking local agencies or providers if they would be willing to recruit focus group participants from among their clients. If they agree to help, administrative staff will use our recruitment materials to either directly recruit from clients they are serving during their routine course of business or telephone potential participants from a roster of clients. We instruct administrative staff to emphasize to clients that participation is entirely voluntary. To help with recruitment, we will offer incentives (for example, $50 cash or a gift card of equivalent value) and also inform parents that light refreshments and child care will be provided during the groups. An added benefit of this approach to recruitment is that local providers such as Federally Qualified Health Centers are often willing to offer their conference or meeting rooms free of charge for focus groups. We believe that this approach to recruitment is both effective and efficient for most of our groups, in particular enrollees, eligibles but not enrolled, non-English speakers and members of racial/ethnic minorities, and parents of children with special health care needs.

An alternative recruitment approach will be needed for the other populations of interest: disenrollees, the newly eligible, and low-income families with ESI. For disenrollees, we will request enrollment and disenrollment files from state or county eligibility agencies and then sample a pool of potential participants from these rolls. Research staff will telephone these families directly and recruit them for the groups using a script similar to that employed for the other groups. For families with ESI, we will need to develop a special recruitment strategy that could, for example, sample families from the largest health plan operating in a local jurisdiction, thus giving us access to families with a range of private policies. Alternatively, we could sample from one or two prominent employers in the locality that provide coverage to their low-income employees, thus giving us access to families representing a significant portion of the low-income ESI population. Once again, in these scenarios, research staff will work with the health plans or employers to develop a sampling frame of potential participants, then telephone the families directly to solicit participation in our focus groups.

2. Procedures for the Collection of Information

a. The Survey of Enrollees and Disenrollees

Sampling Approach. ASPE’s sampling approach will use an innovative version of the classic sub-sampling for non-response follow-up design. The advantages of this approach are that it minimizes data collection costs while maintaining the desired response rate. It has two independent components:

  • A multi-stage, clustered sample will be interviewed by telephone, with face-to-face follow-up of unlocatable and nonresponding households.5 Use of face-to-face (field) follow-up is more costly than telephone alone and requires the less efficient cluster-sampling approach, but it results in high response rates and improved population coverage. Without field follow-up of unlocatable and nonresponding households, we would miss some parents of CHIP children who belong to minority or other sub-groups, especially Hispanics, Native Americans, and African Americans (Cybulski et al. 1999).

  • A stratified, unclustered random sample representing the same population as the clustered sample will be interviewed by telephone only. Besides reducing costs, the telephone-only sample design benefits from increased statistical efficiency associated with unclustered designs.

In both sampling components, we will draw and field up to two rounds of samples for each sample domain in each state, allowing two months between each sample draw.6 This staged fielding will be particularly important in reducing the time between sample frame construction and the collection of survey data, since the fielding period will be as close as possible to the time when the administrative data are provided by the states and cleaned by Mathematica. In addition, for states with the smallest populations of CHIP enrollees, these multiple draws may be needed to ensure that sample sizes are sufficient for certain domains (most notably, recent disenrollees). We will draw these samples in such a way as to avoid sampling more than one child from the same household or sampling the same household for more than one draw.

Each sample draw will be derived from the universe that exists at the time of sampling but will take into account whether a household was in the sampling frame or in the sample of the prior draw(s). To speed up the sampling process and ensure timely fielding, we will request an advance test data file from each state to check the database and our sampling algorithms. In addition, our design assumes that the state will send the database of enrollees and disenrollees on two different occasions, each two months apart. On each occasion, we will classify the enrollees based upon month of initial enrollment, with disenrollees in a separate category, so as to create the enrollee domains for use in sampling. Then, we will determine how quickly each of the ten states can deliver enrollee data for a particular month and set the exact domain definitions.

Some sample members may change (or at least report a change in) classification between the time of sampling and interview; for example, a recent disenrollee sample member may return to CHIP by the time of interview, effectively becoming a new enrollee. (This type of transition is most likely to occur when locating or face-to-face follow-up activities are required, extending the time between sampling and interview.) As with the prior CHIP survey, we will address these transitions by allowing sample members to respond on the basis of whatever domain they consider themselves to be in. Because our approach to analyzing the survey data may be affected by such transitions, we will identify them as part of the data and maintain a frequency count for them over the course of fielding the survey.

Multistage Clustered Sample Selection. For the clustered sampling component with face-to-face follow-up, the first step in sample selection will be defining primary sampling units (PSUs) for each state. These PSUs will be defined based upon ZIP codes or combinations of ZIP codes that provide a specified minimum number of enrollees and disenrollees. The same set of PSUs will be used for all sample draws. A composite size measure will be developed for each PSU in the frame that reflects the desired total state sample of new enrollees, established enrollees, and recent disenrollees (Folsom et al. 1987). We will select a total of 30 PSUs from each state, with probability proportional to this composite size measure and with minimal replacement, using Chromy’s (1979) sequential sampling procedure. In selecting the 30 sample PSUs from the frame of PSUs in state h, Chromy’s procedure partitions each state’s total PSUs into 30 zones of equal size, based upon the composite size measure. Exactly one PSU is selected from each zone. The zones are defined so that all pairs of PSUs have a chance of appearing together in the sample, a requirement for unbiased estimation of sampling variances. Using a controlled ordering of the PSUs, this zoned sequential selection makes possible an implicit stratification of PSUs that ensures they are as representative as possible of selected variables of interest. To ensure selection of both urban and rural PSUs and the distribution of the sample across each state, candidate variables for ordering the PSUs in the frame before sampling will include urbanicity and the geographic location of the PSU.
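
The sketch below conveys the flavor of this zoned selection with a simple systematic probability-proportional-to-size (PPS) draw from an ordered frame. It is a simplification, not Chromy’s full procedure: Chromy’s method randomizes the sequential selection to support unbiased variance estimation, whereas a basic systematic draw, shown here, only approximates that behavior.

    import random

    def zoned_pps_select(frame, n_psus=30):
        """Systematic PPS selection from a frame of (psu_id, size_measure) pairs.

        The frame is assumed pre-sorted (e.g., by urbanicity and geography) to
        give the implicit stratification described above; the total size
        measure is split into n_psus equal zones and one hit is taken per zone.
        """
        total = sum(size for _, size in frame)
        zone = total / n_psus
        r = random.uniform(0, zone)
        hits = [r + k * zone for k in range(n_psus)]

        selected, cum, idx = [], 0.0, 0
        psu_id, size = frame[0]
        for h in hits:
            while cum + size < h and idx + 1 < len(frame):
                cum += size
                idx += 1
                psu_id, size = frame[idx]
            selected.append(psu_id)  # a PSU larger than one zone can be hit twice
        return selected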

We will also use a composite size measure to ensure that the desired sample sizes are achieved for the domains of interest (new enrollees, established enrollees, and disenrollees). With this procedure, we will be assured of equal selection probabilities within states for children in each domain. The composite size measure will be defined as

(1) S(h,i) = Σ_j Σ_d f(h,d) N(h,i,j,d),

where N(h,i,j,d) is the number of children in domain d of household j of PSU i from state h, and f(h,d) is the desired overall sampling rate for domain d in state h.
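
As a small worked example of equation (1), suppose a PSU contains three households and the target sampling rates f(h,d) are, hypothetically, 0.02 for new enrollees, 0.01 for established enrollees, and 0.03 for recent disenrollees; the composite size measure is then the rate-weighted count of children across domains:

    # Hypothetical target sampling rates f(h,d) by domain for one state.
    f = {"new": 0.02, "established": 0.01, "disenrollee": 0.03}

    def composite_size_measure(households):
        """Equation (1): sum over households j and domains d of f(h,d) * N(h,i,j,d)."""
        return sum(f[d] * n for hh in households for d, n in hh.items())

    # Three households, each mapping domain -> number of children in that domain.
    psu = [{"new": 1}, {"established": 2}, {"new": 1, "disenrollee": 1}]
    print(composite_size_measure(psu))  # 0.02 + 0.02 + 0.05 = 0.09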

Prior to the selection of households, as with the selection of PSUs, we will use a controlled ordering procedure for households within each PSU. Variables for ordering will be the sampling domains and, when available, the race of the children in the households. Within each selected PSU i of state h, we will select households with probability proportional to size. When multiple enrollee domains are present within a household, we will randomly determine the enrollee type to interview, using differential probabilities based upon the desired state h sampling rates for domain d. If multiple children are present in the sampled household for the selected enrollee domain, we will randomly designate one child to be interviewed. Using the composite size measure for each household will enable us to oversample households with multiple eligible children while ensuring that the selection probabilities are equal within enrollment domains, regardless of household size.

Stratified, Unclustered Sample Selection. For the unclustered, telephone-only sampling component, we will first sample households. To ensure representation throughout each state, we will explicitly stratify households by a geographic measure specific to that state. As with the clustered design, if the household includes children in two or more domains, we will randomly determine the domain for which a child will be selected and, finally, select the child within it. For households with multiple children eligible for interview, we will randomly select one for interview. Prior to sample selection, we will sort the households by the various combinations of enrollment domain(s) to which their eligible children belong (recent enrollee only, recent enrollee and established enrollee, recent enrollee and recent disenrollee, established enrollee only, and so forth). Then, within each combination, we will further sort the households to create an implicit stratification of households. Candidate variables to use will include race and ethnicity, metropolitan status, and geographic area.

Households will be selected with probability proportional to their composite size measures. For sampled households with multiple survey-eligible children, we will randomly sample one child for interview using the desired sub-sampling rates for the enrollee domains. This composite size measure approach will ensure that we achieve equal selection probabilities within each state for each enrollee domain, regardless of the household size. Similar to the approach used for the clustered sample, the selection process for the unclustered sample will prevent selection of the same household in multiple draws.

Weighting Procedures. For this survey, we will calculate sampling weights within each sample (clustered and unclustered) based upon the inverse of the probability of selection across all draws. Each eligible household has a probability of being selected for the clustered and unclustered sampling components, as each sample represents the full population. We will first calculate design-specific sampling weights for each component (clustered and unclustered), for each sample draw and state, using the product of the sampling weight of the household and the conditional sampling weight of the child, given that his or her household was selected. We will then combine the design-specific sampling weights across draws to create a single base sampling weight for each sampled child for each design for each state.
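
A minimal sketch of the base weight computation just described (the helper name and numeric inputs are illustrative): the design weight is the inverse of the child’s overall selection probability, the product of the household’s selection probability and the child’s conditional selection probability within the household.

    def child_base_weight(p_household, p_child_given_household):
        """Base sampling weight: inverse of the overall selection probability."""
        return 1.0 / (p_household * p_child_given_household)

    # e.g., a household selected with probability 0.001 and one of two
    # domain-eligible children selected with probability 0.5:
    w = child_base_weight(0.001, 0.5)  # 2,000: the child stands for ~2,000 children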

We will pursue households that were unlocatable by the central office only when they have been selected for the clustered sampling component, essentially having sub-sampled them for non-response follow-up. For the unclustered sampling component, we will consider households unlocatable by the central office as nonsampled nonrespondents. The following table shows how the different sample components are dealt with in the composite weights, to be used when combining both sample components.

Sample Component (sampling weights sum to)          Disposition           Composite Weight                             Represents

Unclustered, located by central office (A)          located               sampling weight × (1 − λ); sums to C1        Locatable population
Unclustered, unlocated by central office (W − A)    not pursued in field  0                                            -
Clustered, located by central office (B)            located               sampling weight × λ; sums to C2              Locatable population
Clustered, unlocated by central office (W − B)      pursued in field      sampling weight × (W − (C1 + C2))/(W − B)    Unlocatable population

Note: the sampling weights of each full sample (unclustered and clustered) sum to W, the population total.


To compute a survey estimate, Est(Y), using information from both samples, one cannot simply combine the two samples without adjusting the weights, since the clustered and unclustered located samples represent the same target population. Separate estimates can be computed from each sample and combined using the equation

(2) Est(Y) = λ Y(clustered) + (1 − λ) Y(unclustered)

where Y(clustered) is the survey estimate from the clustered sample, Y(unclustered) is the survey estimate from the unclustered sample, and λ is an arbitrary constant between 0 and 1. Any value of λ will result in an unbiased estimate, but not necessarily the estimate with the minimum sampling variance. We will calculate a single λ using the sample sizes and the design effects due to unequal weighting for the two samples. In particular, λ acts as a weighting factor, with more weight given to the larger sample, with the sample sizes adjusted by the design effect due to unequal weighting. The formula for λ is given by:

(3) λ = [n(clustered)/deff(clustered)] / [n(clustered)/deff(clustered) + n(unclustered)/deff(unclustered)]

where n(clustered) and n(unclustered) are the sample sizes of the clustered and unclustered central office-located samples respectively, and deff(clustered) and deff(unclustered) are the design effects due to unequal weighting for the clustered and unclustered central office-located samples, respectively.
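
A short sketch of the compositing factor as reconstructed in equation (3), weighting each located sample by its design-effect-adjusted (effective) sample size; the numeric inputs below are hypothetical.

    def composite_lambda(n_clustered, deff_clustered, n_unclustered, deff_unclustered):
        """lambda = effective size of clustered sample / total effective size."""
        eff_c = n_clustered / deff_clustered
        eff_u = n_unclustered / deff_unclustered
        return eff_c / (eff_c + eff_u)

    # Hypothetical inputs: clustered n=900, deff=1.6; unclustered n=1,100, deff=1.2.
    lam = composite_lambda(900, 1.6, 1100, 1.2)
    # Located clustered cases get weight*lam; located unclustered cases get weight*(1-lam).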

The clustered unlocated households are ratio adjusted so that they add up to the estimate of unlocatable households in the population, represented by themselves and the comparable households in the unclustered sample that were not pursued. This adjustment is comparable to that done for a standard subsample among nonrespondents.

The next step will be to implement within-state non-response adjustments among located households (or clustered cases that were not located despite field efforts) to account for non-response to eligibility screening and to the interview. First, we will conduct a non-response analysis to assess the response patterns for the samples, using data from the sampling frame, such as age and race of the sampled child, along with county-level information from the Area Resource File (ARF), such as the percentage of children living in households with family incomes under the poverty level, the percentage of households headed by females, and urbanicity. Based on the results, we will develop logistic regression models to compute response propensity scores to compensate for non-response. We will develop separate models for each sample component (clustered and unclustered), for each domain (recent enrollees, established enrollees, and recent disenrollees, as defined on the frame), and for each state. Finally, we will use the estimated population counts in each state and each domain to post-stratify within each state based on enrollment status at the time of sampling of the child. The final weight will consist of the product of the combined-draw base weight, the inverse of the response propensity score, and the post-stratification adjustment.
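
The sketch below illustrates the shape of this adjustment with a logistic response-propensity model; the function and covariates are illustrative, and the production models will be fit separately by sample component, domain, and state as described above.

    import numpy as np
    import statsmodels.api as sm

    def propensity_adjusted_weights(frame_covariates, responded, base_weights):
        """Inflate respondent weights by the inverse of modeled response propensity.

        frame_covariates: array of frame/ARF predictors (e.g., child age, race,
        county poverty and urbanicity measures). responded: 1 if the case
        completed the interview, else 0.
        """
        X = sm.add_constant(np.asarray(frame_covariates, dtype=float))
        model = sm.Logit(np.asarray(responded), X).fit(disp=0)
        propensity = model.predict(X)
        w = np.asarray(base_weights, dtype=float)
        # Nonrespondents get zero weight; respondents carry their share.
        # The final weight would further multiply by a post-stratification factor.
        return np.where(np.asarray(responded) == 1, w / propensity, 0.0)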

Survey Instrument. Data collection procedures are based on the sample design and the stratification procedures. The survey instrument is based on the one used in the first ASPE-sponsored CHIP evaluation. We modified prior questions to improve wording as needed and added questions to address new topics of interest, using other validated survey questions on child health and coverage as the first source for any new questions. For example, questions about the concept of a patient-centered medical home were not included in the first survey; given the importance of this topic, the current survey includes questions that will allow us to characterize the medical home-related aspects of the care setting and process of care, drawing on existing instruments that offer validated questions on the topic.7 The surveys ASPE examined include the National Survey of Children’s Health, the National Health Interview Survey, the Medical Expenditure Panel Survey, and several surveys of Healthy Kids programs in California fielded by Mathematica during the past decade.

The survey instrument will capture data on outcomes in the following areas: (1) application and enrollment; (2) access, use, content of care, and satisfaction; (3) program retention, renewal, and disenrollment; (4) the relationship between CHIP and other coverage; and (5) demographic characteristics of the families and children to support a range of descriptive and multivariate analyses, including age, income, language, and other demographic and socioeconomic characteristics; health status and chronic conditions; and parental attitudes about the efficacy of health care. Separate modules were developed for the three types of sample members (new enrollees, established enrollees, and disenrollees). The concepts covered are largely the same across the different modules, but the reference time period will depend on the enrollment or disenrollment status of the child at the time of sampling. The survey will take 30 to 40 minutes to complete. Attachment B1 contains the Survey of Enrollees and Disenrollees. Attachment B2 contains the CHIP data elements and question sources.

  • Further details on the questionnaire design may be found in the design report, Attachment A, pages 59 – 63.

The computer-assisted telephone interview (CATI) survey instrument will be implemented in Blaise software, which allows for the complex routing and range checks needed for the CHIP survey. All data, whether collected through outbound calls from the telephone center or inbound calls arranged by field locators, will be collected by specially trained telephone interviewers using the Blaise instrument, thus minimizing mode effects. We will conduct interviews with the parents and guardians of CHIP children (in 10 states) and Medicaid children (in 3 of the states). ASPE expects the information gathered from the survey will increase our understanding of the experiences of the parents and guardians in navigating the application, enrollment, and renewal processes; the child’s health status (measured along several dimensions); access to and use of health care services; experiences with the care process; provider communication and coordination of care; barriers and unmet needs; and the perceptions of parents and guardians of the value and quality of their child’s health care. These data will be linked to child and family demographic characteristics, as well as to key program features measured in the qualitative components of the evaluation.

  • The data collection approach is described in further detail in the design report, Attachment A, pages 67 – 74.

In summary, ASPE will:

  • Develop a detailed sample release schedule that balances keeping recall periods as short as possible against not increasing management costs excessively.

  • Optimize contact information by processing the selected state sample frame contact information through a national locating database (Accurint). This step will fill in missing contact information and correct erroneous contact information in the sample files; if no updated information is available, an immediate deeper search will be launched to obtain it. After the database search and update, advance letters will be mailed to sample members for whom firm contact information was obtained. Mailings will be sent via the U.S. Postal Service with Return Service Requested, which routes updated address information directly to the contractor. All returned mail will be subjected to immediate telephone and database locating procedures.

  • Attachment A (page 68) lists a series of available databases that will be used to supplement Accurint searches.

  • Contact respondents first by an advance letter, offering them a $20 post-pay gift card as an incentive to participate.

  • Conduct all interviews in English or Spanish using the Blaise CATI instrument, which allows interviewers to select English or Spanish as the interview language. When respondents speak another language, the contractor will use translators or interpreters to assist with the interview.

  • Attempt the unclustered sample by telephone only; after 15 unsuccessful attempts, unclustered cases will be closed out. Attempt the clustered sample first by telephone; if the interview cannot be completed after 15 attempts, or if the respondent is reluctant to participate, the case will be placed in a holding queue until it is time to transfer cases to field locators. The holding procedure accumulates sufficient cases for an individual field interviewer to work efficiently.

Quality Assurance. ASPE has put in place strong quality control measures to ensure that data are collected to the highest quality standard. Quality control begins with staffing the most qualified interviewers and training them rigorously, in person or, for field staff, using remote training site technology. Monitoring interviewer performance is another key quality control tool: all of an interviewer’s first cases will be monitored by trained monitoring staff; after that, 10 percent of their calls will be systematically monitored.

  • Quality assurance steps are described further in the design report, Attachment A, pages 71 – 73.

  • The challenge of ensuring the representativeness of respondents in relation to the respondent population, and locating challenges are discussed in the design report, Attachment A, pages 73-74.

b. Site Visits

The key informant interview protocol is a critical tool for conducting high-quality site visits within a case study framework. A carefully structured protocol permits a range of issues to be discussed in a consistent and thorough manner across all interviews and sites while also allowing the flexibility for interesting issues to be considered as they arise.

  • The site visit protocols are described on pages 30-33 of the design report, in Attachment A.

  • Pages 34 – 35 of the design report, in Attachment A, describe the procedures for conducting the site visits, including contacting state officials, obtaining and reviewing state program document and other background materials, identifying key informants and local sites, conducting the interviews, compiling notes, and contacting key informants as needed to clarify information obtained during the visits

  • Drafts of the discussion guides for different types of key informants are included as Attachments C1 to C4: C1 for state officials, C2 for community enrollment agencies, C3 for health care providers, and C4 for managed care and health plans.

c. Focus Groups

As part of the case studies, we will conduct three focus groups in each state with families touched by state CHIP and Medicaid programs, for a total of 30 focus groups across the 10 case study states, conducted during the same week as the site visit interviews with key informants. As described in section 1.c above, we expect the focus group findings to enrich the other evaluation components in several ways while providing intrinsically valuable information regarding state and local context.

Sample Selection. We will hold focus groups with parents and other family members of children who represent the following categories:

  • Enrolled in CHIP or Medicaid

  • Disenrolled from CHIP/Medicaid

  • Eligible for CHIP or Medicaid but uninsured

  • Covered under employer-sponsored insurance (ESI)

The most critical groups from the array above are parents of enrolled children (since they will be able to discuss direct experiences with CHIP), parents of disenrollees (since they will shed light on the various factors that led to disenrollment), and parents of children who are eligible for CHIP and Medicaid, but are not enrolled and remain uninsured (since they will help us understand more about this critical target group and what factors contribute to their not enrolling their children into available coverage). On occasion, and to the extent possible, within these categories we will attempt to conduct focus groups with selected special populations of particular interest, including parents of children with special health care needs, non-English-speaking families (we plan to conduct Spanish-language groups, led by a focus group leader fluent in Spanish and English, in states with large Latino populations, such as California, New York, Texas and Florida), newly eligibles, and certain racial and ethnic groups. These focus groups will provide insights about the unique experiences of these populations and the particular challenges or circumstances they face.

Moderator Guides. The focus group moderator guide is a critical tool for consistent and systematic information gathering.

  • Attachments D1-D4 contain the focus group moderator guides: D1 for parents of children with employer insurance, D2 for parents of CHIP enrollees, D3 for parents of CHIP disenrollees, and D4 for parents of eligible but uninsured children.

Conducting the Focus Groups. All focus groups will be scheduled for 1.5 to 2 hours and will be facilitated by a senior member of the evaluation team. Written informed consent will be obtained from all participants prior to the start of the focus groups. Moderators will be supported by research staff who will take extensive notes during the proceedings and digitally record the sessions. During these discussions with parents and/or family members, we will ask about enrollment processes, barriers to enrollment and retention, impressions of cost-sharing responsibilities, access to and quality of care, and awareness and impact of outreach. As previously described, moderator guides will be tailored to probe into specific issues relevant to each subgroup. For example, focus groups with families of children with special health care needs will home in on access and quality of care questions, particularly whether the scope of services available through CHIP is adequate to meet their children’s needs. Similarly, groups held with non-English-speaking families will consider the accessibility of program materials and the transparency of enrollment processes. Focus group recordings will be transcribed verbatim and, along with notes taken during the groups, will be analyzed and used to support and further illustrate findings from the case studies and quantitative data analysis.

  • Attachments I1-I4 are the Focus Group Consent Forms: I1 for parents of CHIP enrollees, I2 for parents of CHIP disenrollees, I3 for parents of eligible but uninsured children, and I4 for parents of children covered by employer insurance.

Note that key informant interviews and focus groups are not subject to the same type of quality monitoring described above for surveys. For these components, quality rests on the strength of the protocols and the experience of the staff conducting the interviews and focus groups. In addition, all key informant interviews and focus groups will be audio-recorded to allow close review and accurate entry of the information into the ATLAS.ti qualitative analysis software.

3. Methods to Maximize Response Rates and Deal with Nonresponse

a. Survey of Enrollees and Disenrollees

To achieve a 75 percent response rate for this study, we will address two sources of nonresponse: non-contact and non-cooperation. Meeting this goal in the more challenging data collection environment of recent years will require extensive follow-up efforts and innovative methods. These include providing specialized interviewer training, contacting households at different times of the day, and attempting to reach all households within the first few days of calling so that refusals are identified early and conversion attempts can begin as soon as possible. Interviewers will be trained to leave messages identifying their calls as part of a legitimate, important research study and to stress that they are not selling anything or asking for a donation, thereby reaching people who otherwise screen their calls. In addition, as soon as we receive the sample from the states, we will run all cases through Accurint to obtain the most up-to-date contact information for each case (described in more detail above under "Locating Sample Members").

Reducing nonresponse begins with understanding why sample members decline to participate in surveys; strategies for preventing nonresponse should then address the reasons most relevant to a particular respondent. These may include the following:

  • Social environmental factors. Households inundated with unwelcome telephone solicitations may have difficulty distinguishing the initial survey contact from a telemarketing call. Mathematica has addressed this concern successfully by sending well-written, persuasive notification letters, which have been shown to have a positive effect on response to the subsequent telephone calls (Link and Mokdad 2005; Redline et al. 2004). We will pay special attention to training interviewers to convey the most important messages about the study in the first few seconds of their calls and to leave effective answering machine messages. We will establish a dedicated toll-free number that sampled households can call to verify the legitimacy of the survey, discuss concerns, or complete the interview. We will include this number in the advance letters, and interviewers will leave it in answering machine messages.

  • Household characteristics. Some respondents are reluctant to participate because they are concerned about the confidentiality and privacy of their information. To address these concerns, our advance letters emphasize that maintaining confidentiality is the cornerstone of our work, and we train interviewers not only to address confidentiality routinely in their introductions but also to recognize specific confidentiality concerns that can lead to refusals if left unaddressed. As part of their training, interviewers role-play these scenarios so they can readily reassure respondents about our data security procedures. Mathematica has also developed strategies to accommodate respondents under time pressure, such as training interviewers to offer to administer the survey in segments rather than in one session.

  • Refusal conversion. Refusal conversion efforts will be critical to achieving a high response rate. Interviewers will be trained in refusal conversion techniques, and refusals will be flagged in the CATI scheduler as they occur. Sample members who refuse will be sent a customized letter that addresses their reasons for refusing and emphasizes the importance of the study; an interviewer highly skilled at converting refusals will then contact them. The same interviewer who received the initial refusal will not make the conversion attempt. At Mathematica, refusal converters are supervisors or senior interviewers who receive specialized training and have considerable experience working with difficult-to-persuade households. For the clustered sample, sample members who refuse a second time will be assigned to a field locator for in-person follow-up.

Indeed, the entire data collection approach has been designed to minimize survey nonresponse and survey error. To minimize survey error, we have developed a respondent-friendly instrument to be implemented in CATI software. To minimize mode effects, we will use a single mode of data collection: telephone interviews conducted by the same set of extensively trained staff. To minimize recall problems, we will release the sample in two rounds. In addition to conducting an initial pretest prior to data collection, ASPE will conduct a second 100-case pretest during the first week of data collection to ensure the instrument is performing correctly.

  • Further discussion of the survey data collection approach can be found in the design report, Attachment A, pages 67-74. The way nonresponse is accommodated in the weights is described on pages 52-54; a generic sketch of the standard weighting-class form of such an adjustment appears after this list. A discussion of the recruiting and training of high-quality, convincing interviewers is found on pages 71-72.

  • Attachment H contains all materials that will be seen by respondents, including advance letters, Sorry I Missed You cards, locating letters, and consent procedures.
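
For reference, nonresponse adjustments of the kind described in Attachment A are often implemented as weighting-class adjustments. The sketch below is a generic illustration of that standard form only, not the study's exact specification; the cell structure and notation are illustrative. For a respondent i in weighting cell c, the base weight is inflated by the inverse of the cell's weighted response rate:

  w_i(adjusted) = w_i(base) × [ Σ over eligible sampled cases j in cell c of w_j(base) ] / [ Σ over responding cases j in cell c of w_j(base) ]

Summed over respondents, the adjusted weights reproduce each cell's original weighted total, so nonrespondents' weight is redistributed to similar respondents rather than lost.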

b. Case Studies: Site Visits and Focus Groups

Both case study data collection methods are entirely qualitative and do not involve calculation of response rates. Nonetheless, staff recruiting key informants for the site visits and participants for the focus groups will follow scripted recruiting methods and will work closely with local organizations to recruit focus group participants.

4. Tests of Procedures or Methods to be Undertaken

First Pretest. The first pretest, conducted prior to submission of the OMB package, was limited to the instrument content and included no more than 9 respondents in each of the three sample domains: new enrollees, established enrollees, and recent disenrollees. The information learned from the pretest was important in helping to refine the questionnaire.

  • Attachment J contains a memo describing the pretest findings and changes made to the instrument as a result.

Second Pretest. The first 50 to 100 completed interviews will constitute a second, "live" pretest. After 100 cases, interviewing will stop so that data frequencies can be reviewed to identify any software errors. Debriefings with interviewers and monitors will take place to identify wording or procedural issues and to gather suggestions for remedying them. The contractor will then submit a second, brief report to ASPE describing any problems encountered in fielding the survey and proposing solutions. ASPE will communicate any suggestions for substantive changes to OMB and seek its approval. All approved changes will be made and thoroughly tested before data collection resumes. Depending on the magnitude of the problems and the types of corrections needed, ASPE may stop work for as long as one week.
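
To illustrate the kind of automated check the frequency review could involve, the sketch below tabulates item frequencies and flags out-of-range codes in the pretest data. It is a minimal illustration under assumed conditions: the file name, item names, and valid-code sets are hypothetical and are not drawn from the CHIP instrument.

# Minimal sketch of a pretest frequency review (illustrative only).
# Assumes the first ~100 CATI interviews have been exported to a CSV
# file with one column per survey item; names and codes are hypothetical.
import pandas as pd

# Documented valid codes for each item (hypothetical examples).
VALID_CODES = {
    "coverage_status": {1, 2, 3},           # 1=enrolled, 2=disenrolled, 3=uninsured
    "months_enrolled": set(range(0, 37)),   # 0-36 months
}

df = pd.read_csv("pretest_interviews.csv")

for item, valid in VALID_CODES.items():
    # Include missing values in the tabulation; unexpected missingness
    # can indicate a skip-logic (routing) error in the CATI program.
    counts = df[item].value_counts(dropna=False)
    print(f"\nFrequencies for {item}:")
    print(counts.to_string())
    # Codes outside the documented range often signal CATI routing or
    # programming errors, which is exactly what the live pretest is meant to catch.
    unexpected = [code for code in counts.index if code not in valid]
    if unexpected:
        print(f"  WARNING: unexpected values {unexpected} in {item}")

In practice, a review of this kind would be run against the full set of instrument items and skip patterns before the debriefings, so that anomalies can be discussed with interviewers while the cases are still fresh.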

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Individuals consulted on statistical aspects of the design:

  • Eric Grau, Senior Statistician, Mathematica Policy Research, Inc. Phone: (609) 945-3330. Email: [email protected].

  • Barbara Carlson, Ph.D., Associate Director of Statistical Services, Mathematica Policy Research, Inc. Phone: (617) 674-8372. Email: [email protected].

Individuals collecting the data:

  • Julie Ingels, Senior Survey Researcher. Phone: (202) 554-7535.

Individuals analyzing the data:

  • Mary Harrington, Mathematica Policy Research, Inc. Phone: (734) 794-1124. Email: [email protected].

  • Christopher Trenholm, Mathematica Policy Research, Inc. Phone: (609) 936-2796. Email: [email protected].

  • Margo Rosenbach, Mathematica Policy Research, Inc. Phone: (617) 301-8967. Email: [email protected].

1 Delays in construction of the frame could necessitate extending the definition of the new enrollee and recent disenrollee sample domains to include longer periods on or off the program.

2 A related option is to use data obtained from a screener at the start of the survey interview to overrepresent groups of particular analytic interest that cannot be identified from the frame, such as those with special health care needs. This approach can be valuable for obtaining precise measures for relatively small subgroups, particularly those defined by the child’s health; however, it is also very costly, as the initial contact amounts to a screening interview that for some respondents (not meeting the criteria for oversampling) results in a non-completed survey. Thus, we do not expect to adopt this approach.

3 As explained above, these cases represent the 15 percent unlocatable in the unclustered sample.

4 RR (AAPOR) = # of completed interviews / (# of sampled cases – estimated # of ineligible cases).
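As a purely illustrative calculation with hypothetical counts (not study data): with 1,500 completed interviews from 2,200 sampled cases, of which an estimated 200 are ineligible, RR = 1,500 / (2,200 – 200) = 0.75, which would meet the 75 percent response rate target.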

5 Unlocatable households may be more accurately described as "households that cannot be located from the central office." This group also includes households without any type of telephone service. Households that cannot be located from the central office generally have current unlisted numbers that are not recorded in the CHIP enrollment files, or numbers that have been changed to new numbers that cannot be determined. The latter is more likely when the new number is a cell phone.

6 For the prior CHIP survey, there were a total of 20 sample draws: two states with one draw, six states with two draws, and two states with three draws.

7 For example, both the National Survey of Children with Special Health Care Needs and the National Survey of Children’s Health have tested medical home components.
