OMB: 0990-0384

Supplemental Information for OMB Review of CHIPRA-10 Information Collection

  1. “State Selection_For OMB.pdf”. This document combines the two memos we submitted regarding state selection for the evaluation: 1) the memo on selecting the 10 study states, which runs from pages 1 to 12 of the PDF; and 2) the memo on selecting the 3 Medicaid survey states, which runs from pages 13 to 15.

  2. Regarding OMB’s question about the overlap between the states in this evaluation and those in the first CHIP evaluation, we have added a paragraph on page 5 of the state selection PDF that clarifies the total percentage of uninsured children and CHIP-enrolled children the 10 states represent; identifies which states were also included in the previous evaluation; and describes our plan for analyzing changes over time using findings from the previous evaluation. The paragraph is copied here:

Congress specified that this evaluation use methods similar to those of the first congressionally mandated CHIP evaluation. We employed similar methods for state selection: establishing a prioritized list of relevant criteria, including those specified in the CHIPRA legislation, and applying those criteria sequentially. Applying the criteria resulted in a set of 10 states that represent 54.3 percent of all uninsured children under 200 percent of the federal poverty level and 56.7 percent of CHIP-enrolled children (data also shown in Table 2, found on page 10, columns 2a and 3a). Because state programs and populations change over time, applying the criteria in 2011 did not produce the identical list of states selected for the first study. However, as noted on the second panel of Table 2 (found on page 11, column 5a), five of the selected states (Texas, California, Florida, Louisiana, and New York) participated in the first CHIP evaluation. For these states, we plan to conduct a limited set of comparative analyses. For example, in the individual case study reports for each of these states, we anticipate including a section on the changes that have taken place in CHIP over the past decade (in program design, policy context, and enrollment) and how these changes may be affecting the experiences of eligible children and families. Likewise, in the reports based on the household survey, we anticipate including a brief presentation of how the composition and experiences of CHIP children and families have changed since the prior study in these five states. Examples of potential areas of interest for this presentation include changes in CHIP enrollees’ demographics, their insurance coverage before and after CHIP coverage, and their access to and use of preventive and other health care while covered by the program.

  3. Regarding OMB’s question about how the Medicaid states were selected, the same PDF provides the answer on pages 13 through 15. In particular, we explained to OMB on the call the advantages of selecting Texas, California, and Florida as the Medicaid survey sample states: we would then have administrative Medicaid data for all 10 states in the study, allowing an enriched understanding of public coverage, enrollment trends, churning, transitions, and crowd-out issues; in addition, the three states represent over one-quarter of Medicaid-enrolled children. The relevant paragraphs are found on page 14 of the PDF and are copied here:

Medicaid data. Including Texas, California, and Florida as the Medicaid survey states will significantly enrich the data available for the study: we would then have access to Medicaid data for all 10 of the states. Five of the other states are providing Medicaid data to the Maximizing Enrollment for Kids evaluation (column 11 of Table 1, found on the last page of this document) and would likely agree to share these data; Ohio and Michigan report M-CHIP data in MSIS (column 10 of Table 1, found on the last page of this document), so their data should also be reasonably accessible. With Medicaid data for all 10 study states, we will be able to understand more fully the transitions between Medicaid and CHIP and the retention of children in public coverage overall (that is, for the programs combined), which in turn will enrich the study of enrollment trends, churning, transitions, and crowd-out.



Because Alabama and Utah share many of the same Medicaid and CHIP characteristics as Texas, California, and Florida (for example, all five of these states use different eligibility systems and different delivery systems for Medicaid and CHIP), we considered selecting them for the Medicaid survey. However, the size of the Medicaid program in both states is small: Alabama has 1.6 percent of the nation’s Medicaid-enrolled children, and Utah has 0.5 percent. The programs in Texas, California, and Florida therefore seemed to be of higher priority for the study, as together they represent over one-quarter of all Medicaid-enrolled children (27.7 percent, as shown on the last page of this memo in Table 1, column 4). We recommend Alabama and Utah as the backup states, should Texas, California, or Florida be unwilling or unable to participate in the Medicaid survey.



It is important to note that even though we will have access to Medicaid administrative data for all 10 states, we will still face significant limitations in generalizing findings from the 3 states included in the Medicaid survey, because the administrative data cannot substitute for data obtained through the survey.


  4. “CHIPRA-10 Sample Design Memo_For OMB.pdf” provides further detail about the sampling plan for the CHIP and Medicaid surveys. On page 4 of this memo, we first discuss the issue of oversampling children above 200 percent of the federal poverty level (FPL) in two states, New York and California. The paragraph states:

Income: In California and New York, new enrollees will be further divided into two income groups: (1) upper-income and (2) lower-income. Children in households above 200 percent of the federal poverty line (FPL) are considered upper-income. As specified below, the purpose of this stratification is to oversample upper-income new enrollees—a population that is relatively small in both California and New York but that holds considerable interest given the anticipated expansion of public coverage under health reform.

On page 7, the memo further describes how we will account for the design effects of this oversampling plan:

Design Effects: To account for the design effects of oversampling upper-income children in California and New York to achieve a pooled sample size of 400 for separate analysis of this subgroup, we made a slight modification to the compromise allocation method used for the new enrollee and disenrollee domains. To ensure that the effective sample sizes for these two sample domains in California and New York are sufficient to meet our analysis objectives after taking these larger design effects into account, we based the compromise allocation on a total sample size of 4,500 (rather than 4,600) for each of these domains. In doing so, we reserved 200 sample members, whom we then allocated to the new enrollee and disenrollee samples in California and New York (50 sample members per domain in each state).
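To make the reservation arithmetic concrete, the sketch below reproduces the figures quoted above under stated assumptions. The variable names, the structure, and the example design effect (deff) are illustrative only; the compromise allocation method itself is not reproduced here.

    # Illustrative sketch of the sample reservation described above.
    # Names, structure, and the example design effect are assumptions for
    # illustration; the memo's compromise allocation method is not reproduced.

    TOTAL_PER_DOMAIN = 4600      # total target sample per domain
    ALLOCATION_BASE = 4500       # base used for the compromise allocation
    DOMAINS = ["new enrollees", "disenrollees"]
    OVERSAMPLE_STATES = ["California", "New York"]

    # Sample members held back from the compromise allocation, across both domains
    reserved = (TOTAL_PER_DOMAIN - ALLOCATION_BASE) * len(DOMAINS)              # 200

    # Split evenly: 50 per domain in each oversampling state
    per_state_per_domain = reserved // (len(DOMAINS) * len(OVERSAMPLE_STATES))  # 50

    for state in OVERSAMPLE_STATES:
        for domain in DOMAINS:
            print(f"{state}, {domain}: +{per_state_per_domain} reserved sample members")

    def effective_sample_size(n: int, deff: float) -> float:
        """Approximate the effective sample size as n divided by the design effect."""
        return n / deff

    # Hypothetical check that the pooled upper-income subgroup of 400 retains
    # adequate precision under an assumed design effect of 1.3.
    print(f"Effective n for pooled subgroup: {effective_sample_size(400, deff=1.3):.0f}")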

  5. Generalizing Findings. The following paragraphs were added to the Part A Justification, at the end of the response to Question 2.

Limitations of the Study. The ten states selected for the evaluation include a majority of all CHIP enrollees nationwide, ensuring that the evaluation findings cover a large fraction of children with recent or current CHIP coverage. Although it is not possible to generalize these findings beyond the study states, we anticipate that many important findings from the ten study states may be applicable to the populations covered in other states, for two related reasons. First, as detailed in our state selection memo, we believe that the ten study states capture much of the important variation in CHIP features and CHIP populations nationwide, a belief we can further validate during the evaluation by drawing on our 50-state survey of CHIP administrators and on the CARTS data. Second, despite this wide variation, we expect (based on the prior evaluation) that many key study findings will persist across the ten states, suggesting that they generalize to CHIP elsewhere.



With just three states as the focus of the companion Medicaid household survey, we will naturally be less able to make generalized statements about the Medicaid program, whatever the findings. Because we purposefully chose the three largest states for this survey, however, the findings will cover a large fraction of the children enrolled in Medicaid nationwide. In addition, even with just three states, findings from the evaluation can still provide meaningful insight into the Medicaid population, particularly in how the children enrolled in Medicaid and their program experiences compare with those of children covered by CHIP. Indeed, to the extent that these comparative findings persist across the three Medicaid states, they will offer the most robust and detailed understanding to date of the similarities and differences between children and families in the two programs.

  6. Response Bias. Our plan for assessing the presence of response bias and for addressing this bias in our analysis is described here.



Nonresponse bias occurs whenever the sampled population (representing the target population) differs from the population represented by the survey respondents. The potential for bias increases with higher levels of nonresponse. To alleviate this potential, we calculate nonresponse adjustments to the weights using covariates that are associated with response propensity and correlated with important outcome variables. However, although sophisticated nonresponse models reduce the potential for nonresponse bias, some potential for bias remains. Because we anticipate a response rate below 80 percent, we intend to perform a nonresponse bias analysis to assess the degree to which the nonresponse adjustments to the weights were effective in alleviating the potential for nonresponse bias. The nonresponse bias analysis will consist of the following steps (an illustrative sketch of the adjustment and comparison steps follows the list):

  1. Compute response rates for the three key subpopulations (new enrollees, established enrollees, and recent disenrollees) and perhaps by other key implicit stratification variables, such as metropolitan status and race/ethnicity.

  2. Using the original sampling weights, compare the weighted distributions of key variables for respondents and nonrespondents within each of the three subpopulations.

  3. Identify the characteristics that best predict nonresponse, using CHAID (a decision tree technique) and logistic regression modeling, and use this information to generate nonresponse weight adjustments.

  4. Within the three subpopulations, compare the distributions of key variables for respondents, weighted with the fully nonresponse-adjusted analysis weights, to the distributions for the full sample weighted with the unadjusted sampling weights.
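
The sketch below illustrates steps 2 through 4 in schematic form, assuming a per-record data frame with response-indicator and weight columns. The column names, example covariates, and quintile weighting classes are assumptions made for illustration; the CHAID segmentation mentioned in step 3 is not reproduced here, only a logistic propensity model.

    # Schematic sketch of the nonresponse weight adjustment and comparison steps.
    # Column names ("responded", "base_weight"), the covariates, and the quintile
    # weighting classes are illustrative assumptions; CHAID is not reproduced here.
    import pandas as pd
    import statsmodels.api as sm

    def nonresponse_adjust(frame: pd.DataFrame, covariates: list) -> pd.DataFrame:
        """Add propensity-based nonresponse adjustment factors to the base weights."""
        # Step 3 (in part): model response propensity with logistic regression
        X = sm.add_constant(frame[covariates].astype(float))
        model = sm.Logit(frame["responded"], X).fit(disp=False)
        frame = frame.assign(propensity=model.predict(X))

        # Form weighting classes from the estimated propensities (quintiles here)
        frame["cell"] = pd.qcut(frame["propensity"], q=5, labels=False)

        # Inflate respondent weights so each class carries its full weighted total
        class_total = frame.groupby("cell")["base_weight"].sum()
        resp_total = frame.loc[frame["responded"] == 1].groupby("cell")["base_weight"].sum()
        factors = class_total / resp_total
        frame["adj_weight"] = frame["base_weight"] * frame["cell"].map(factors)
        frame.loc[frame["responded"] == 0, "adj_weight"] = 0.0
        return frame

    def weighted_mean(frame: pd.DataFrame, var: str, weight: str) -> float:
        """Weighted mean of a key variable, used for the step 2 and step 4 comparisons."""
        return (frame[var] * frame[weight]).sum() / frame[weight].sum()

    # Step 4, per subpopulation: compare the full sample under the original sampling
    # weights with respondents under the fully nonresponse-adjusted weights, e.g.:
    #   full = weighted_mean(sample, "metro", "base_weight")
    #   resp = weighted_mean(sample[sample["responded"] == 1], "metro", "adj_weight")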



  7. Is it possible to calculate a ‘national’ or ‘project’ response rate?



Because the states are purposively rather than randomly selected, a national response rate would not be meaningful. However, we can compute a “project-wide” weighted response rate for each domain (new enrollees, established enrollees, and recent disenrollees). We recommend not combining the domains, because the ability to locate sample members and obtain a response is likely to differ across domains. Recent disenrollees would likely have the lowest weighted response rate, primarily because of the difficulty of locating sample members. The “project-wide” weighted response rates for new enrollees and established enrollees are expected to be similar; in general, this was the case in the earlier study. Separate weighted response rates will be reported for the clustered and unclustered samples, using the original sampling weights. We will also report weighted response rates using composite weights that appropriately combine the clustered and unclustered samples.



There are two possible approaches for computing “project-wide” weighted response rates within a domain (both are illustrated in the sketch following this list):

  1. We can combine the weighted response rates across the states by taking the simple average of the state rates. This gives each state equal weight in the project-wide weighted response rate.

  2. We can combine the weighted response rates across the states by weighting each state by the size of its population. In this weighted response rate, the states with the largest populations have the most influence on the “project-wide” rate. This rate reflects, for a given survey estimate, the proportion of the domain population that contributed to the estimate.

Given that we are creating pooled estimates within each domain based on weights that account for the different population sizes in each state, the second option is preferred.
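
As an illustration of the two options, the sketch below combines hypothetical state-level weighted response rates for one domain using both the simple average and the population-weighted average. The state rates and population counts shown are made-up values, not study data.

    # Illustrative combination of state-level weighted response rates for one domain
    # (e.g., new enrollees). All rates and population counts below are made up.

    state_response_rates = {"TX": 0.72, "CA": 0.68, "FL": 0.70, "NY": 0.74}
    state_populations = {"TX": 900_000, "CA": 1_200_000, "FL": 800_000, "NY": 700_000}

    # Option 1: simple average -- every state contributes equally
    simple_average = sum(state_response_rates.values()) / len(state_response_rates)

    # Option 2: population-weighted average -- larger states dominate, matching how
    # pooled estimates weight each state's contribution to the domain estimate
    total_population = sum(state_populations.values())
    weighted_average = sum(
        state_response_rates[s] * state_populations[s] for s in state_populations
    ) / total_population

    print(f"Simple average:              {simple_average:.3f}")
    print(f"Population-weighted average: {weighted_average:.3f}")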

