Parent Incentive Substudy Overview Document


Variations in Implementation of Quality Interventions (VIQI)

OMB: 0970-0508


Proposed Incentives Sub-Study in the Variations in Implementation of Quality Interventions (VIQI) Study


The VIQI Impact Evaluation is a randomized experiment that will test the relative effect of evidence-based early childhood interventions on the quality of care and instruction received by children in Head Start and community-based early childhood education (ECE) centers. In the VIQI Impact Evaluation, participating ECE centers will be randomly assigned to implement one of two interventions or to a “business as usual” group. In centers assigned to the two intervention conditions, selected classrooms within a center will adopt the same intervention. In preparation for the VIQI Impact Evaluation, the VIQI pilot study will implement the interventions with a smaller set of ECE centers. One of the goals of the pilot study is to examine the feasibility of key elements of the study design, and explore ways to improve the quality of data collected in the full VIQI Impact Evaluation.

In the final version of the OMB package submitted for the VIQI study, we proposed capitalizing on the VIQI pilot study to examine the utility of using parent incentives to encourage the completion of the parent baseline information form in the VIQI Impact Evaluation. This memo describes the proposed sub-study, which will test the effect on response rates of offering parents/guardians a $10 incentive to complete the baseline information form. Using various analytical strategies, the incentives sub-study will also explore whether offering incentives can affect the characteristics of respondents and reduce non-response bias.

We submit this proposed study to OMB for review and approval as a non-substantive change request. Because we need to collect the parent/guardian information form during baseline data collection for the pilot study – that is, starting in August 2018 – this request is time-sensitive. We anticipate needing approval by July 30, 2018, so that the baseline data collection can proceed on the planned timeline.


Proposed VIQI Incentives Sub-Study


As discussed in earlier submissions to OMB, baseline demographic information from parents/guardians of children in VIQI classrooms is a crucial component of the study design and planned analyses. Baseline information and consent forms will be sent to the parents/guardians of all children in the selected classrooms in participating study centers.


Achieving high global response rates on the parent/guardian baseline information form is important for several reasons. First, it can improve the likelihood that the study has high internal and external validity. Because the VIQI Impact Evaluation is based on a random assignment research design, it is important to minimize differential response rates between the two intervention groups and the “business as usual” group to maintain the internal (causal) validity of the study design. To achieve external validity and enable inferences that can be generalized to the population of interest (centers serving children similar to those in the study), it is also important to minimize the difference between the characteristics of parents/guardians returning the baseline form and the characteristics of all parents whose children were eligible for the study (also called demographic non-response bias).

Further, a key goal of the VIQI Impact Evaluation is to examine the differential impact of the interventions across the diverse populations served by Head Start and community-based ECE centers. To enable this analysis, we require a sufficiently large sample to detect differences in impacts across subgroups of children defined by characteristics such as family income, race/ethnicity, parents’/guardians’ level of education, and dual language learner background. Given that the pool of potential respondents is limited to the parents of children in the participating classrooms in study centers, the study population is finite. A high response rate would be the most cost-effective way to achieve a large enough sample to reliably estimate impacts for subgroups of children.

In the absence of strong evidence about expected parent/guardian response rates and potential demographic non-response bias in the participating classrooms in centers participating in the VIQI project, we propose adding an incentive experiment to the VIQI pilot study. For this experiment, parents/guardians in a randomly selected subset of centers will be offered a $10 incentive to return the baseline information form; parents/guardians in the remaining centers will not be offered an incentive. This embedded sub-study will examine whether providing a small incentive to parents/guardians improves response rates to the baseline information form. We also will analyze whether incentives appear to be related to the demographic composition of participating children and their parents. The sub-study will inform decisions about whether and how incentives should be used in the VIQI Impact Evaluation to achieve higher response rates and a more representative sample.1 The questions guiding the sub-study will be the following:


  1. Are response rates higher when parents/guardians are offered a small monetary incentive to return the baseline information form?

  2. Does offering an incentive affect the characteristics of the respondents? Does it reduce non-response bias?


The findings will be used to determine whether providing an incentive has the potential to improve response rates and the representativeness of the child sample in the VIQI Impact Evaluation. The remainder of this memo discusses the elements of the proposed incentive sub-study, including the study design, the measures to be used, and the proposed analyses.


Random Assignment

Cluster-level random assignment will be used to examine the effects of offering an incentive. Centers will be randomly assigned to one of two conditions: (a) an incentive condition, in which parents/guardians will receive a $10 gift card upon completing the 10-minute parent/guardian baseline information form; or (b) a no incentive condition. Half of the centers participating in the VIQI pilot study will be assigned to each condition (about 20 centers per condition). Random assignment will be blocked by locality and centers’ treatment condition in the pilot study (Creative Curriculum, Connect4Learning, or Business as Usual).
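The blocked assignment described above can be sketched as follows. This is a minimal illustration rather than the study team's actual procedure: the center IDs, localities, and the `assign_blocked` helper are hypothetical.

```python
import random

def assign_blocked(centers, seed=0):
    """Randomly split centers into incentive / no-incentive arms
    within each block (locality x pilot treatment condition)."""
    rng = random.Random(seed)
    blocks = {}
    for center_id, block in centers:
        blocks.setdefault(block, []).append(center_id)
    assignment = {}
    for ids in blocks.values():
        rng.shuffle(ids)
        half = len(ids) // 2
        for cid in ids[:half]:
            assignment[cid] = "incentive"
        for cid in ids[half:]:
            assignment[cid] = "no incentive"
    return assignment

# Hypothetical centers: (center_id, (locality, pilot condition))
centers = [
    ("C01", ("CityA", "Creative Curriculum")),
    ("C02", ("CityA", "Creative Curriculum")),
    ("C03", ("CityA", "Connect4Learning")),
    ("C04", ("CityA", "Connect4Learning")),
    ("C05", ("CityB", "Business as Usual")),
    ("C06", ("CityB", "Business as Usual")),
]
arms = assign_blocked(centers)
```

Blocking in this way guarantees that each locality-by-condition cell contributes centers to both arms, which is what allows the block indicators in the analysis models to absorb between-block variation.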


The study team will work closely with designated center liaisons to distribute the baseline information forms to parents/guardians. Baseline forms will be included in a consent packet that references the data collection activities in which parents/guardians and their children will be asked to participate (see the Appendix for a copy of the consent and baseline information forms2). (See Attachment B.5 in the OMB Package for additional information on communication to participants regarding the consent form packets.) The forms will be distributed in hard copy. Parents/guardians in the incentive group will receive a $10 gift card upon returning the information forms to their center.3


Statistical Power

As noted above, the incentives sub-study will include all centers in the pilot study (about 40 centers and 120 classrooms). Half of the centers will be assigned to the incentive condition and the other half will be assigned to the no incentive condition. We expect these centers and their classrooms to enroll about 1,620 children. The target population for the sub-study will be the parents/guardians of these children. (See Statement A of the OMB Package for additional information on the expected characteristics of this respondent group in the pilot study.)


Given the sample size, the proposed incentives sub-study should be able to detect effect sizes that range from 0.13 to 0.28. For example, if response rates are around 70 percent on average (60 percent for the no incentive group and 80 percent for the incentive group), the sub-study should be able to statistically detect an impact of 6 to 13 percentage points on response rates.4
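The correspondence between these standardized effect sizes and percentage-point differences can be checked using the standard deviation of a binary response indicator. The sketch below assumes the 70 percent average response rate mentioned in the text; the `es_to_pp` helper is illustrative.

```python
import math

def es_to_pp(effect_size, mean_rate):
    """Convert a standardized effect size into a percentage-point
    difference, using the SD of a 0/1 response indicator."""
    sd = math.sqrt(mean_rate * (1 - mean_rate))  # SD of a binary outcome
    return effect_size * sd * 100                # in percentage points

# With a 70% average response rate, the 0.13-0.28 effect-size range
# corresponds to roughly 6-13 percentage points:
low = es_to_pp(0.13, 0.70)   # ~6.0
high = es_to_pp(0.28, 0.70)  # ~12.8
```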


Data Sources and Measures

Three data sources will be used for the incentives sub-study: (1) the baseline information form, which will provide information on the characteristics of families who respond to the survey; (2) counts of the number of children in each participating classroom in the study centers, which will be provided by the centers and used to establish the size of the target population in each classroom; and (3) center-level aggregated information about the average characteristics of the full population of children and families in the study centers, which we will try to obtain from as many centers (and for as many characteristics of interest) as possible.5


For the analysis of response rates (research question 1), we will construct a classroom-level dataset of response rates. To calculate these response rates, we will compare the counts of children in each participating classroom (the target population) to the number of returned baseline information forms. The response rate will be defined as the percentage of children in each participating classroom whose parents/guardians returned a completed form. As discussed in the next section, this information will be used to compare the average response rate of centers in the two conditions (incentive versus no incentive).
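As a simple illustration of this dataset construction, the sketch below computes classroom-level response rates from hypothetical enrollment and returned-form counts; the record layout and field names are assumptions, not the study's actual file format.

```python
# Hypothetical classroom records: (center, classroom, enrolled, forms returned)
records = [
    ("C01", "K1", 18, 14),
    ("C01", "K2", 20, 12),
    ("C02", "K1", 15, 15),
]

def response_rates(records):
    """Build a classroom-level dataset of response rates:
    returned forms as a share of enrolled children."""
    return [
        {"center": c, "classroom": k, "rate": returned / enrolled}
        for c, k, enrolled, returned in records
    ]

rates = response_rates(records)
```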


For the analysis of respondent characteristics (research question 2), we will construct two types of datasets. First, we will use data from the baseline information form to create a child-level dataset that includes information on the characteristics of each child whose parent/guardian returned a completed form. The dataset will include the following key characteristics for each child:

  • Family income (above the federal poverty level; at or below the federal poverty level; or at or below 200% of the federal poverty level);

  • Race/ethnicity (White, Black, Hispanic, Asian, Other);

  • Parent/guardian’s level of education (at least a high school diploma versus not);

  • Dual language learner background (learning English as a second language versus not); and,

  • Receipt of free, reduced-cost, or subsidized child care versus not.


Second, we will construct a center-level dataset of the average characteristics of all children and families in the study centers, based on the aggregated data provided by the study centers. For each center, we will also include the average center-level characteristics of the respondents in that center, which will be calculated using information from the baseline information form. The resulting center-level dataset will include, for each center: (1) the characteristics of the full population of families and children, (2) the characteristics of the respondents, and (3) the difference between the full population and the respondents for each available characteristic (a measure of non-response bias).
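Where aggregated statistics arrive at the classroom rather than the center level (see footnote 5), classroom averages can be rolled up to a center average by weighting each classroom by its number of children. A minimal sketch, with hypothetical numbers:

```python
def center_average(classroom_stats):
    """Aggregate classroom-level averages to the center level,
    weighting each classroom by its number of children."""
    total = sum(n for n, _ in classroom_stats)
    return sum(n * avg for n, avg in classroom_stats) / total

# Hypothetical (children, share of dual language learners) per classroom:
stats = [(20, 0.50), (10, 0.20)]
center_dll_share = center_average(stats)  # weighted, not simple, mean
```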


Each of these datasets has strengths and limitations. The first dataset (child-level) will include information about child and family characteristics that is consistent and available across all centers participating in the VIQI pilot study; however, this information will only be available for the respondents in the study centers. Conversely, the second dataset (center-level) will include information about the full population of families in the study centers (and not just the respondents), but this information may not be available for all study centers and may cover only a subset of the characteristics of interest. Therefore, as discussed in the next section, we will use both datasets to explore whether incentives have the potential to reduce non-response bias and improve sample representativeness and external validity.

Analytic Strategy

The first research question asks whether a $10 incentive improves parent/guardian response rates on the baseline information form. We will answer this question globally, by comparing response rates in the centers where an incentive was offered to response rates in the centers offering no incentive. This comparison will be based on the dataset of response rates for each participating classroom (described in the previous section). In practice, the difference in response rates between classrooms in the incentive group and the no incentive group will be estimated using a statistical model that will regress the response rate for each classroom against an indicator for group membership (coded 1 if the classroom is in a center in the incentive group and 0 otherwise). The estimated regression coefficient on the indicator for group membership will provide an estimate of the effect of the incentive on response rates; if the estimated effect is positive, this would suggest that offering an incentive increased response rates. The model will also include a set of indicators for the random assignment blocks, to account for the study design and to improve the precision of the estimated effects.6 In addition to estimating the main effect of incentives on response rates, we will use a variance decomposition to descriptively examine how much variation there is in response rates between the centers in each incentive group, and between classrooms within centers.
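Estimation along these lines can be sketched with ordinary least squares on simulated classroom-level data. This is only an illustration of a regression with a group indicator and block dummies: the actual analysis will use the two-level model described in footnote 6, and the block counts, true effect, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated classroom-level data: 4 blocks, 15 classrooms per incentive
# arm per block, true incentive effect of 10 percentage points.
n_blocks, per_cell = 4, 15
rows = []
for b in range(n_blocks):
    for z in (0, 1):  # 0 = no incentive, 1 = incentive
        for _ in range(per_cell):
            rate = 0.60 + 0.10 * z + 0.02 * b + rng.normal(0, 0.05)
            rows.append((b, z, rate))

# Design matrix: intercept, incentive indicator, block dummies (block 0 omitted).
y = np.array([rate for _, _, rate in rows])
X = np.array([[1.0, z] + [float(b == k) for k in range(1, n_blocks)]
              for b, z, _ in rows])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
incentive_effect = beta[1]  # estimated gain in response rates
```

Because assignment is balanced within blocks, the coefficient on the incentive indicator recovers the simulated 10-point effect (up to sampling noise).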

The second set of research questions addresses the potential of the incentives to reduce non-response bias and improve sample representativeness, using different types of analyses. As noted earlier, due to limitations in the available data, the results from these analyses will be considered together to more fully explore the relationship between incentives and respondent characteristics.


The first analysis, based on all study centers, will examine the effect of the incentive on the demographic characteristics of the respondents. As noted earlier, consistent child-level information on the characteristics of non-respondents will not be available, because this information cannot be obtained without parental consent. However, by comparing the characteristics of respondents in the incentive group and the no incentive group, we can still make informed inferences about whether the incentive has the potential to reduce non-response bias. The characteristics of respondents will be examined using the child-level dataset of the characteristics of children and families from the baseline information form. The dependent variable in the statistical model will be a specific family or child characteristic (e.g., whether the parent/guardian has a high school diploma), which will be regressed against an indicator for group membership (coded 1 if a family is in the incentive group and 0 otherwise). The estimated regression coefficient on the indicator for group membership will provide an estimate of the difference between the respondents in the incentive group and the no incentive group with respect to that characteristic. An omnibus test will also be used to examine whether there is a systematic difference across all characteristics between respondents in the incentive and no incentive groups. This omnibus test will regress the indicator for group membership (incentive versus no incentive) against the block indicators and the full set of child/family characteristics; a chi-square test will then be used to test whether these characteristics jointly predict group membership. 
If so, then this would suggest that the respondents in the incentive group are systematically different from respondents in the no incentive group, and that the incentive may reduce non-response bias.7 In addition to estimating the main effect of incentives on the characteristics of respondents, we will also descriptively examine how much variation there is in the characteristics of respondents in each incentive group. If the incentive reduces demographic non-response bias, then one would expect there to be greater variation in the characteristics of respondents in the incentive group.
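As a simplified stand-in for the per-characteristic comparison (ignoring the clustering handled by the three-level model in footnote 7), one can compare the share of respondents with a given characteristic across the two groups using a 2x2 chi-square test. The counts below are hypothetical.

```python
import math

def two_by_two_chi_square(a, b, c, d):
    """Chi-square test (1 df) for a 2x2 table:
    rows = incentive / no incentive; cols = has characteristic / not."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Upper-tail p-value of a chi-square with 1 df via the error function.
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical counts: 60/100 respondents with a HS-diploma parent in the
# incentive group vs 40/100 in the no incentive group.
stat, p = two_by_two_chi_square(60, 40, 40, 60)
```

A small p-value would indicate that respondents differ between groups on that characteristic; the memo's model-based omnibus test generalizes this idea across all characteristics at once.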


The second analysis will examine whether non-response bias appears to be smaller in the incentive group than in the no incentive group. This comparison will be based on the center-level dataset of child and family characteristics. As noted earlier, it may not be possible to obtain center-level aggregated characteristics for all centers and characteristics, so some of these analyses may be based on a subset of the study centers. The dependent variable in the statistical model will be the absolute difference between the average value of a characteristic for the full center population and the respondent sample (i.e., the magnitude of the non-response bias for that characteristic), which will be regressed against an indicator for group membership (coded 1 if a center is in the incentive group and 0 otherwise). The estimated regression coefficient on the indicator for group membership will provide an estimate of the difference in non-response bias for that characteristic between the incentive group and the no incentive group. If the incentive reduces non-response bias and improves representativeness, then one would expect this coefficient to be negative. As in the previous analysis, an omnibus test will be used to examine whether there is a systematic reduction in non-response bias across all characteristics.


Interpreting the Results

The proposed incentive experiment will allow us to capitalize on the VIQI pilot to learn about two substantial anticipated challenges in the VIQI Impact Evaluation: parent/guardian response rates and the representativeness of responding parents/guardians. The first analysis will establish expected response rates, with and without incentives, for parent/guardian participation in baseline data collection. The second will suggest whether and how incentives are related to the characteristics of respondents and, by extension, whether incentives may improve the representativeness of the sample. If the results indicate that (a) response rates are higher on average in centers that offered an incentive, or (b) respondents in the incentive group are more diverse and/or more representative on average than respondents in the no incentive group, then it will be advisable to use an incentive in the VIQI Impact Evaluation. We anticipate that securing the necessary responses will require substantial administrative effort in addition to any incentives, but incentives may be an important component of the efforts to secure participation.


High response rates will increase the likelihood that the VIQI Impact Evaluation meaningfully contributes to future evidence-based practice in ECE, for several reasons. Higher global response rates make it more likely that the VIQI Impact Evaluation will meet the standards used by evaluation clearinghouses that review early childhood studies, such as the What Works Clearinghouse at the US Department of Education. Higher response rates reduce the likelihood that non-respondents substantially differ from respondents on unobservable characteristics, such as parental engagement, that may also be related to the effectiveness of the interventions. High response rates among study participants are also essential to ensure that there are a sufficient number of children from key demographic subgroups to reliably examine VIQI Impact Evaluation questions about differential treatment effects.

1 We currently do not plan to offer incentives to participants in any of the other VIQI data collection activities.

2 The consent form for the no incentive condition, as well as the baseline information form, are identical to the final versions submitted to OMB. The consent form for the incentive condition, included here, is the same as the one for the no incentive condition, with the exception that it mentions an incentive for completion of the baseline information form.

3 If a parent/guardian would like their child to participate in the data collection activities for the study, he or she will sign the consent form, complete the baseline information form, and return both to the study team.

4 These ranges are based on a conservative and liberal set of assumptions about the intraclass correlation in response rates and the explanatory power of the random assignment blocks. The liberal scenario assumes an intraclass correlation of 0.11 between centers and 0.01 between classrooms, and that the between-center variance explained by the blocks is 0.98. The conservative scenario assumes an intraclass correlation of 0.03 between centers and 0.28 between classrooms, and that the between-center variance explained by the blocks is 0.62. All calculations assume a 10 percent significance level (two-tailed test) and 80 percent power.

5 Center-level data will be requested because centers are often not able to share individual-level, identifiable administrative data (e.g., characteristics of a child) without the consent of study participants. In a small number of pilot study centers, some classrooms may not be eligible for the study; in these centers, we will request classroom-level data for the participating classrooms and aggregate the classroom-level statistics up to the center level (weighting by the number of children in each classroom).

6 The analysis will use a two-level modeling structure (classrooms nested in centers) to account for the clustered nature of random assignment and to obtain the right standard errors for hypothesis testing.

7 All analyses of respondent characteristics will be based on a three-level model that will account for the clustered nature of the data and of random assignment (children nested in classrooms, nested in centers).

Author: Martinez-Beck, Ivelisse (ACF)