Mother and Infant Home Visiting Program Evaluation (MIHOPE):

Kindergarten Follow-Up (MIHOPE-K)



OMB Information Collection Request

0970-0402



Supporting Statement

Part B

Original Document May 2018

Revised November 2018

Revised February 2019

Revised July 2019

Revised September 2021






Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


Mary E. Switzer Building

330 C Street, SW, 4th Floor

Washington, DC 20201


Project Officers:

Nancy Geyelin Margie

Laura Nerenberg

Part B. COLLECTION OF INFORMATION USING STATISTICAL METHODS


B1. Respondent Universe and Sampling Methods


At baseline, MIHOPE recruited 4,229 families from 88 local programs (sites) in 12 states. Families were randomly assigned either to a program group, which could enroll in one of the home visiting programs being studied, or to a control group, which was provided with referrals to other services in the community. Families were eligible for the study if (1) the mother was pregnant or the family had a child under six months old when recruited for the study, (2) the mother was 15 years or older at the time of study entry, and (3) the mother was available to complete the baseline family survey. Local sites were chosen to participate if they met several criteria: (1) their programs had been operating for at least two years by the time of study recruitment, (2) there was evidence of enough demand for home visiting services to support a control group, (3) there was no evidence of severe implementation problems that would interfere with the program's ability to participate in the study, and (4) they contributed to the diversity of sites and families for purposes of estimating effects for important subgroups of families.


Families were recruited into the study by Mathematica's survey research staff, who visited families to obtain informed consent when home visitors determined that a family was eligible for the study or soon after that determination had been made.


MIHOPE Sample Available for Follow-Up


The study plans to conduct kindergarten follow-up activities with all families who enrolled in the study, not just those who completed previous rounds of follow-up data collection. The sample available to be fielded at MIHOPE-K differs from the baseline MIHOPE sample only because (1) a few families withdrew from the study and (2) in some cases, the child with whom the mother enrolled in MIHOPE is no longer alive.


As shown in Figure B.1, the study enrolled 4,229 families, all of whom completed the baseline interview.1 Between the baseline interview and the 15-month follow-up, 11 families withdrew from the study, resulting in a fielded sample of 4,218 families at the 15-month follow-up.2 At the 15-month follow-up, we learned that 103 children had never been born (for example, because of miscarriages) or had died after birth. Therefore, the fielded sample for the kindergarten follow-up includes 4,115 families.



Figure B.1

MIHOPE Sample at Baseline, 15-Month Follow-Up, and Kindergarten Follow-Up



To provide additional context for the power calculations presented in the next section, Table B.1 summarizes the data collection efforts for each round of MIHOPE, including data collection currently underway when the children are in kindergarten.


Table B.1

MIHOPE Structured Interviews and In-Home Visits3


Round | Study Name | Data Collection

Baseline | MIHOPE | 60-minute phone interview

First follow-up: children are 15 months old | MIHOPE 2 | 60-minute phone interview; 90-minute in-home data collection

Second follow-up: children are 2.5 years old | MIHOPE Check-in | 30-minute phone or web survey

Third follow-up: children are 3.5 years old | MIHOPE Check-in | 30-minute phone or web survey

Fourth follow-up: children are in kindergarten | MIHOPE-K | 60-minute phone interview; 120-minute in-home data collection



The information collected at the 15-month follow-up was designed to estimate the early effects of home visiting programs on a wide range of policy-relevant outcomes.4 The data collection at the 2.5- and 3.5-year follow-up points was intentionally limited to a brief survey, self-administered on the web or answered over the phone, that collected only updated contact information and a few interim measures of child and family functioning. Because home visiting is intended to improve participating children's school readiness, we return to more comprehensive data collection at the kindergarten follow-up to assess the impact of home visiting programs at this critical point.


Power Calculations


Minimum Detectable Differences for the Full Sample


Table B.2 shows the minimum detectable differences of the study for analyses involving the full sample (subgroup analyses are shown in Table B.3). For reference, we have included the fielded sample, actual response rates, and minimum detectable differences for the 15-month follow-up.


For MIHOPE-K, minimum detectable differences are shown for three types of data that will be collected at kindergarten, each of which may have a different response rate:


(1) the one-hour caregiver interview,

(2) in-home assessments, and

(3) teacher surveys.


For the caregiver survey and in-home assessments, we show two different response rate scenarios:


  1. a 75 percent response rate for the caregiver survey and a 70 percent response rate for in-home data collection, both calculated among the fielded sample (as described above, the sample available to be fielded at kindergarten is 4,115 families); and

  2. a 50 percent response rate among the fielded sample.


As noted elsewhere, the use of in-person locating makes the kindergarten data collection similar to the 15-month follow-up, and we therefore expect similar response rates at kindergarten. By contrast, the 2.5- and 3.5-year surveys generally did not include in-person locating, which is why their response rates were only about 50 percent. More information about expected response rates is available in Section B.3 of this Supporting Statement.

The teacher survey is assumed to be completed for 75 percent of families who complete the caregiver survey, resulting in a 56 percent response rate under the expected scenario and a 38 percent response rate under the scenario with a 50 percent caregiver response rate.


The values shown in Table B.2 are the smallest true impacts that would generate statistically significant impact estimates in 80 percent of studies with a similar design, using two-tailed t-tests with a 10 percent significance level. We have assumed that baseline characteristics explain 20 percent of the variation in outcomes across individuals, which is similar to findings at the 15-month follow-up.
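
These values can be approximated with the standard minimum detectable effect formula for a two-arm randomized trial. The Python sketch below is a minimal illustration, not the study's actual analysis code; it assumes an even split between program and control groups and uses the parameters just described (two-tailed test at the 10 percent level, 80 percent power, and covariates explaining 20 percent of the variation in outcomes).

    from scipy.stats import norm

    def mde_effect_size(n_fielded, response_rate, r2=0.20,
                        alpha=0.10, power=0.80, p_treat=0.50):
        """Approximate minimum detectable effect size (in standard deviation
        units) for a two-arm trial with equal random assignment."""
        n = n_fielded * response_rate                            # expected respondents
        multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # about 2.49
        se = ((1 - r2) / (n * p_treat * (1 - p_treat))) ** 0.5   # SE of impact, SD units
        return multiplier * se

    FIELDED = 4115  # families available to be fielded at kindergarten (Figure B.1)
    print(round(mde_effect_size(FIELDED, 0.75), 3))  # caregiver survey: ~0.080
    print(round(mde_effect_size(FIELDED, 0.70), 3))  # in-home collection: ~0.083
    print(round(mde_effect_size(FIELDED, 0.50), 3))  # 50 percent scenario: ~0.098

Run as written, this reproduces the kindergarten effect-size values reported below.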


The minimum detectable differences shown for the full sample are the minimum detectable differences between the program and control group estimates, which is the main effect of interest in this study.


Minimum detectable differences are shown in three ways:


  • For an outcome that would be true for 20 percent of the control group. An example of an outcome with that sort of distribution in the MIHOPE sample is whether the caregiver is experiencing depressive symptoms at the time of the survey.

  • For an outcome that would be true for 50 percent of the control group. An example of an outcome with that sort of distribution in the MIHOPE sample is whether the caregiver is receiving SNAP benefits at the time of the survey.

  • As a percentage of the outcome's standard deviation, which is commonly referred to as an effect size. The effect size can apply to any outcome measured through one of these data sources. Whether a given effect size is policy relevant depends on the context, since a small movement in one outcome may be more meaningful than the same movement in another. (The conversion between effect sizes and percentage points is sketched just below.)
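
For a binary outcome with control group prevalence p, the standard deviation is the square root of p(1 - p), so the percentage-point and effect-size versions of a minimum detectable difference are related by

    \mathrm{MDE}_{pp} = \mathrm{MDE}_{es} \times \sqrt{p(1-p)}

For example, the 15-month caregiver value reported below gives 0.078 x sqrt(0.20 x 0.80) = 0.078 x 0.40, or about 0.031, that is, 3.1 percentage points.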


The minimum detectable differences for the full sample show the following:


    • Caregiver interview. At the 15-month follow-up, the minimum detectable effect was 0.078 standard deviations, which translated into impacts of 3.1 and 3.9 percentage points for outcomes with a 20 percent and 50 percent prevalence rate, respectively. At the kindergarten follow-up, the minimum detectable effect size is 0.080 with a 75 percent response rate and 0.098 with a 50 percent response rate. For an outcome with a 20 percent prevalence rate, these translate into differences of 3.1-3.9 percentage points, while for an outcome with a 50 percent prevalence, they translate into differences of 4.0-4.9 percentage points.

    • In-home data collection. At the 15-month follow-up, the minimum detectable effect was 0.081 standard deviations, which translated into impacts of 3.3 and 4.1 percentage points for outcomes with a 20 percent and 50 percent prevalence rate, respectively. At the kindergarten follow-up, the minimum detectable effect size would range from 0.083 with a 70 percent response rate to 0.098 with a 50 percent response rate. For an outcome with a 20 percent prevalence rate, this translates into effects of 3.3-3.9 percentage points, while for an outcome with a 50 percent prevalence, they translate into effects of 4.1-4.9 percentage points.

    • Teacher surveys. Since teacher surveys were not part of the 15-month follow-up, the table shows only the minimum detectable effects for the two kindergarten response rate scenarios. As discussed in Supporting Statement A, we have assumed a 56 percent response rate for the teacher survey (assuming 75 percent of teachers respond for the 75 percent of families who complete other parts of family data collection). The minimum detectable effect size ranges from 0.092 to 0.113, depending on the response rate. For an outcome with a 20 percent prevalence rate, this translates into effects of 3.7-4.5 percentage points, while for an outcome with a 50 percent prevalence, they translate into effects of 4.6-5.7 percentage points.



A major goal of the MIHOPE kindergarten follow-up is to examine school readiness. Since measures of child development are typically scale scores without a natural interpretation, it is common to analyze results in terms of effect sizes. From that perspective, these minimum detectable effect sizes are in line with those found in randomized trials examining academic skills in elementary school, which range from 0.07 to 0.23 standard deviations.5 They translate into nearly 3-4 weeks of additional growth in oral language development. During the transition to kindergarten, children are learning language rapidly, and three weeks of additional language development represents substantial growth in foundational skills over what would have been expected in the absence of the intervention.6 Given the importance of early language for literacy development and later academic outcomes,7 school districts have begun to implement resource-intensive six-week programs to support children's developmental gains across the summer between academic years.8 The minimum detectable effect in the current study is equivalent to half the time spent in such a summer program, over and above typical instruction during the academic year. In addition, early interventions may help close achievement gaps between lower- and higher-income children that begin in early childhood and tend to remain stable throughout childhood and adolescence. The minimum detectable effect in the current study is equivalent to closing the gap in early language skills between lower- and middle-income children by about 20 percent.9


Another goal of the long-term follow-up in MIHOPE is to conduct a benefit-cost analysis of the home visiting programs included in the study. In past studies of home visiting, benefits to the government have come from increased maternal earnings and reductions in public assistance receipt. A common source of public assistance in MIHOPE is SNAP benefits, which are expected to have a prevalence of about 50 percent. For such an outcome, the study would be powered to detect impacts of 4.0-4.9 percentage points using information from the parent survey. The average SNAP recipient receives about $250 in benefits each month, so a 4 percentage point reduction in SNAP receipt would translate into savings of about $120 per family per year (see the arithmetic below), which, accumulated over a period of time, would be sufficient to contribute to findings about whether home visiting is an investment that provides long-term benefits to the government.
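
Averaged over all families in the sample, the savings figure follows from

    0.04 \times \$250 \text{ per month} \times 12 \text{ months} \approx \$120 \text{ per family per year}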


Minimum Detectable Differences for Subgroups


The MIHOPE design calls for an analysis of whether impacts differ across subgroups. To illustrate the statistical power for subgroup comparisons, Table B.3 presents the minimum detectable differences in effects between subgroups. Results are presented for two sets of subgroups, which were chosen because they illustrate how statistical power varies with the size of a subgroup.


    • By maternal psychological resources. The concept of "psychological resources" is taken from the Nurse-Family Partnership, which found in two studies that effects were concentrated among parents with low psychological resources.10 It is based on a composite of (1) mental health, (2) mastery (the extent to which a person thinks life chances are under her control), and (3) verbal abstract reasoning. By construction, half the sample is considered to have low psychological resources.

    • By presence of intimate partner violence (IPV). When most of the sample is in one subgroup, differences in impacts have to be somewhat larger for them to generate statistically significant differences. To illustrate this, Table B.3 shows power calculations for subgroups defined by whether there is IPV in the caregiver’s relationship, a subgroup comparison that is being made in the MIHOPE 15-month analysis.

Similar to Table B.2, Table B.3 contains three panels, showing statistical power for

  1. The 15-month follow-up

  2. The kindergarten follow-up with a 75 percent response rate for the caregiver survey

  3. The kindergarten follow-up with a 50 percent response rate for the caregiver survey

Also like Table B.2, Table B.3 shows results for the three main data sources: the caregiver interview, in-home data collection, and the teacher survey. For each row, the table shows how the sample is split between the two subgroups and the number of families in the fielded sample for each subgroup. It then shows minimum detectable differences in the effects across subgroups in three ways:

  1. For an outcome with a prevalence of 20 percent

  2. For an outcome with a prevalence of 50 percent

  3. Expressed as an effect size (that is, as the number of standard deviations of the outcome)

The minimum detectable differences across subgroups show the following (a computational sketch follows the list):


    • Caregiver interview. At the 15-month follow-up, the minimum detectable difference in effect sizes was 0.156 across the psychological resources subgroups and 0.176 across the IPV subgroups. These translate into differences of 6.2-7.0 percentage points for an outcome with a 20 percent prevalence (like maternal depression) and 7.8-8.8 percentage points for an outcome with a 50 percent prevalence (like SNAP receipt). At the kindergarten follow-up, the minimum detectable difference in terms of effect sizes is 0.160-0.180 standard deviations with a 75 percent response rate and 0.196-0.221 with a 50 percent response rate. For an outcome with a 20 percent prevalence rate, these translate into differences of 6.4-7.2 percentage points with a 75 percent response rate and 7.8-8.8 percentage points with a 50 percent response rate. For an outcome with a 50 percent prevalence rate, they translate into differences of 8.0-9.0 percentage points with a 75 percent response rate and 9.8-11.0 percentage points with a 50 percent response rate.

    • In-home data collection. The minimum detectable differences are slightly larger for in-home data collection than for the caregiver interview because the expected response rates are slightly lower. For example, for an outcome with a prevalence of 50 percent, data collected in the home could detect differences across subgroups ranging from 8.3 to 11.0 percentage points.

    • Teacher surveys. Depending on the response rate and subgroup, the minimum detectable differences range from 0.185 to 0.225 when expressed as effect sizes, 7.4 to 10.2 percentage points for an outcome with 20 percent prevalence, and 9.2 to 12.8 percentage points for an outcome with 50 percent prevalence.
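
The subgroup figures extend the full-sample calculation: the standard error of a difference in impacts between two independent subgroups is the root sum of squares of the subgroup standard errors. The Python sketch below is illustrative only, reusing the assumptions stated earlier; it reproduces the 0.160 value for the even psychological resources split.

    from scipy.stats import norm

    def mdd_subgroups(n_fielded, response_rate, share1, r2=0.20,
                      alpha=0.10, power=0.80, p_treat=0.50):
        """Approximate minimum detectable difference in impacts (effect-size
        units) between two subgroups that partition the respondent sample."""
        n = n_fielded * response_rate
        multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        impact_var = lambda n_group: (1 - r2) / (n_group * p_treat * (1 - p_treat))
        se_diff = (impact_var(n * share1) + impact_var(n * (1 - share1))) ** 0.5
        return multiplier * se_diff

    # Psychological resources subgroups (50/50 split), caregiver survey at 75 percent:
    print(round(mdd_subgroups(4115, 0.75, 0.50), 3))  # ~0.160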



B2. Procedures for Collection of Information


This section describes the collection of follow-up data for MIHOPE-K. Best practices will be followed for conducting the data collection, including training and certifying staff on data collection procedures and monitoring data collection to ensure that high-quality data are collected, high response rates are achieved, and differential response rates are avoided. Our data collection methods build on those used in previous phases of MIHOPE to the greatest extent possible. In addition, as mentioned in Supporting Statement A, we have adapted our data collection methods that involve in-person contact so that they can be conducted during the ongoing COVID-19 pandemic. In particular:


  • Computer-assisted telephone interviewing (CATI) will be used to conduct the structured interview with caregivers.

  • In the virtual version of the caregiver-child interaction task, direct assessments of children, and direct assessments of caregivers, we will use Webex to connect with families, administer assessments, and guide families through the "visit." The study team will provide a laptop or tablet for families to use for the visit.

  • Incentives and tokens of appreciation will be provided to increase families' willingness to respond to each of the follow-up data collection components.

  • Contact information gathered during previous rounds of data collection will be used to inform the work of locators.

  • Design and text of respondent contact materials will be informed by principles of behavioral science.

  • Varied methods will be used to reach out to respondents (i.e., email, text messages, phone calls).


Conducting the Follow-Up Family Data Collection


The sample will be released for the follow-up family data collection annually. This sample release will include families in which the focal child reached kindergarten age by the cutoff date for the state in which the family lives.


The respondent notification plan will include the following (all caregiver-specific contact materials are included in Attachment 7):


  1. Pre-outreach package (information letter and gift). Before the sample release, we will mail the families a pre-outreach letter that asks them to update or confirm their contact information. The package will also include a small gift for the child (for example, a small book) as a thank you for their past participation in MIHOPE as well as a newsletter/infographic with information on the current status of the study. Since the newsletter/infographic will contain information about ongoing study activities, it has not yet been developed (however, a shell showing what it would look like and the kind of information that could be included is in Attachment 9). Families can update or confirm their contact information either online or by calling the toll-free number to contact the study team, which is listed in the letter. Families for whom we have an email address will also receive a pre-outreach email.


  2. Invitation letter. Once the sample is released, we will send the family an invitation letter with information about the follow-up data collection activities that we would like them to participate in, the gift card amount to be provided for completing the activities, a toll-free number to contact the study team, notification that we will be calling them soon to complete the structured interview with caregivers via telephone, and information about the website they can visit (included as Attachment 10). Participants can also call the toll-free number to complete the structured interview. The invitation letter will also include an FAQ with some more information about the study and study activities. Telephone interviewers at Mathematica's Survey Operations Center (SOC) will begin trying to contact families who have not yet completed the structured interview about one week after the invitation letter is mailed. Telephone interviewers will call nonresponding families for approximately four weeks.


  3. Email notifications. During outbound dialing and during the fielding period, we will also email families for whom we have email addresses. The first email will contain similar information to the invitation letter and will provide participants a toll-free number in case they want to call the study team to complete the structured interview via telephone. It will also provide a toll-free number and website (included as Attachment 10) to schedule the structured interview and in-home visit and to provide consent for the survey of focal children's teachers. Additional reminder emails will be sent to those who have not yet completed the structured interview.


  4. Text messages. Text reminders to complete the data collection activities will also be sent to participants.


  5. Field advance letter and field locating letter. If the respondent has completed the structured interview before in-person locating begins, we will send them a field advance letter that lets them know that field staff will be in their area soon. If the respondent has not completed the structured interview before the in-person locating efforts begin, their field locating letter will also remind them about completing the structured interview. We will also send an additional locating letter during the fielding period if we have not been able to contact respondents, which we plan to send via priority mail.


  6. Refusal conversion letters. If families have firmly refused to participate in the most recent data collection round in which contact was established (prior to kindergarten), they will receive a tailored letter early in the data collection period. The letter acknowledges their refusal in the earlier data collection round and encourages them to reconsider taking part in the data collection activities. The letter also invites participants to contact the study team to ask any questions or share concerns about their participation. A similar letter will be sent on an as-needed basis to families who refuse to participate during the kindergarten follow-up data collection period.



  7. Reminder postcards. During outbound dialing and at interim periods after transitioning a case to the field for in-person efforts, we will send reminder postcards to those we have not yet heard from and have not been able to contact.


  8. Caregiver website. Caregivers will be provided with a website they can visit to confirm the focal child's participation in kindergarten or first grade, update their contact information, schedule a time for the structured interview with caregivers, provide information about the child's school and teacher, and give consent. Once caregivers log into the website, they will be asked to confirm whether the focal child will be in kindergarten or first grade in the current or upcoming school year (if contacted before the school year has begun). If the focal child is not in kindergarten or first grade (or will not be at the start of the school year, if contacted in the summer), caregivers will be asked to update their contact information and told that they will be contacted again next year. Caregivers who have an eligible child will be asked to update their contact information, then provide their scheduling preferences for the structured interview and in-home visit, provide consent for the teacher survey, and give their child's school and teacher contact information. At any point, caregivers can also go to the website to learn about any updates to MIHOPE and about the kindergarten data collection activities.


Prior to administration, we will obtain the caregiver's verbal consent for the structured interview. We will also obtain consent from the teacher prior to administering the survey of focal children's teachers (electronic or paper, depending on the survey administration method). We will document consent for the home activities, and we will document the caregiver's consent for the study team to contact the child's teacher for the survey of focal children's teachers.



Conducting the Teacher Data Collection


In the aforementioned caregiver contact materials, we will include letters and/or emails for the caregiver to give to the focal child’s teacher that explain the study and inform the teacher that we will be contacting him/her to complete the survey of focal children’s teachers. We will also send the teacher versions of these materials directly because the caregiver may not pass them along to the teacher. All teacher-specific contact materials are included in Attachment 8. Specifically, the teacher notification plan will include the following:


  1. Introduction letter/email/text. After we have documented the caregiver’s consent to contact the child’s teacher and have received the teacher’s contact information from the caregiver, we will send the teacher the survey or a link to the survey with an introduction letter/email/text explaining the study. The communication mode will depend on the contact information we have received from the caregiver.


  2. Email notifications. For teachers for whom we have email addresses, we will also send an email reminder to fill out the survey if we have not heard back from them. The first reminder email will contain the link to the survey, and an FAQ will be attached with more information on the study and the survey. Additional reminder emails will be sent to those who have not yet completed the survey.



  3. Reminder letter. For teachers whose school addresses we have, we will also send a reminder letter to follow up with them if we have not heard back. This letter will also contain a link to the survey in case the teacher prefers to complete it online.


Before contacting the teachers, we will send the school districts and principals FAQs with information about the study in case the teachers need district/principal approval before participating in the study.


Honoraria will be offered to teachers for completing the teacher survey.


B3. Methods to Maximize Response Rates and Deal with Nonresponse


Expected Response Rates


We expect a response rate of 75 percent for the structured interview and 70 percent for the in-home activities (i.e., child and caregiver assessments, assessor observation, and videotaped caregiver-child interaction). We anticipate achieving a 75 percent response rate among teachers for whom we have contact information and for whom caregivers have consented to contact. This expectation is in line with the response rates from MIHOPE 2, which were 78.7 percent and 70.5 percent for the structured interview and in-home activities, respectively. The response rates for MIHOPE Check-in were lower (51 and 48 percent for the 2.5- and 3.5-year follow-ups, respectively) because MIHOPE Check-in did not include in-person locating for most of the sample.


The 2.5- and 3.5-year follow-up points did not include field locating (also referred to as in-person locating) for most of the sample for budget-related reasons. Instead, most sample members were encouraged to complete the survey only through a combination of email, text message, and phone call reminders. The current data collection effort will be more similar to the 15-month data collection, both in terms of the locating efforts and the format of the data collection. Thus, we expect the response rates for the kindergarten data collection effort to more closely resemble those of the 15-month data collection. Since 77 percent of the sample completed the caregiver interview at 15 months, we predict that 70-75 percent will complete the interview at kindergarten with the use of in-person locating.




Dealing with Nonresponse


We will make every effort to obtain information on a high proportion of families, including offering respondents up to $75 in incentives during the kindergarten data collection ($25 for the structured interview and $50 for the in-home assessment) and taking the other steps listed in section B2. Efforts to engage participants in the semi-structured interviews, which occurred when focal children were somewhat younger than kindergarten age, included the potential for $50 in incentives offered to respondents and the other steps listed in section B2. The steps listed in the next subsection for managing response rates will also be used to mitigate the potential for non-response bias in the kindergarten data collection by helping to ensure a high response rate in both the program and the control groups.

Following the close of data collection, a non-response analysis will be conducted to determine whether the results of the study may be biased by non-response. In particular, two types of bias will be assessed: (1) whether estimated effects among respondents apply to the full study sample, and (2) whether program group respondents are similar to control group respondents. The former affects whether results from the study can be generalized to the wider group of families involved in the study, while the latter affects whether the impacts of the programs are confounded with pre-existing differences between program group and control group respondents.


To assess non-response bias, several tests will be conducted:


  • The proportion of program group and control group respondents will be compared to make sure the response rate is not significantly higher for one research group.


  • A logistic regression will be conducted among respondents. The "left-hand side" variable will be their assignment (program group or control group), while the explanatory variables will include a range of baseline characteristics. An omnibus test such as a likelihood-ratio test will be used to test the hypothesis that the set of baseline characteristics is not significantly related to whether a respondent is in the program group. Not rejecting this null hypothesis will provide evidence that program group and control group respondents are similar. (A sketch of this test appears after this list.)


  • Baseline characteristics of respondents will be compared to baseline characteristics of non-respondents. This will be done using a logistic regression where the outcome variable is whether someone is a respondent and the explanatory variables are baseline characteristics. An omnibus test such as a likelihood-ratio test will be used to test the hypothesis that the set of baseline characteristics is not significantly related to whether a sample member responded. Not rejecting this null hypothesis will provide evidence that non-respondents and respondents are similar.


  • Impacts from administrative records sources, which are available for the full sample, will be compared for the full sample and for respondents to determine whether there are substantial differences between the two. This analysis can be done using early impacts from administrative data from MIHOPE 2 or new administrative data collected during MIHOPE-LT.
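
To illustrate the second and third tests, the Python sketch below fits the response-propensity logistic regression and carries out the omnibus test by comparing it to an intercept-only model with a likelihood-ratio statistic. It is a minimal illustration using the statsmodels package; the data frame and column names are hypothetical placeholders, not the study's actual variables.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    def omnibus_nonresponse_test(data, covariates, outcome="responded"):
        """Likelihood-ratio test of whether baseline covariates jointly
        predict response status (or, with outcome='program_group' among
        respondents, research-group membership)."""
        X = sm.add_constant(data[covariates])
        full = sm.Logit(data[outcome], X).fit(disp=0)
        null = sm.Logit(data[outcome], np.ones((len(data), 1))).fit(disp=0)
        lr_stat = 2 * (full.llf - null.llf)             # likelihood-ratio statistic
        p_value = chi2.sf(lr_stat, df=len(covariates))  # chi-squared reference
        return lr_stat, p_value, full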


If any of these tests indicate that non-response is biasing the impact estimates, a standard technique such as multiple imputation or weighting by the inverse probability of response (sketched below) will be used to determine the sensitivity of impact estimates to non-response. Of note, we examined non-response bias for the sample that participated in an incentive experiment in the 2.5-year data collection. Some results of these analyses are presented in Supporting Statement A and are included in a separate memorandum documenting the 2.5-year experiment results.
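
If weighting is needed, the fitted response-propensity model from the sketch above yields the weights directly. Below is a minimal continuation under the same hypothetical names; the clipping threshold is an illustrative choice, not a study specification.

    # Weight each respondent by the inverse of their predicted response probability.
    lr_stat, p_value, full = omnibus_nonresponse_test(data, covariates)
    p_respond = full.predict(sm.add_constant(data[covariates]))
    weights = 1.0 / np.clip(p_respond, 0.05, 1.0)   # clip to limit extreme weights
    respondent_weights = weights[data["responded"] == 1]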


Maximizing Response Rates


Minimizing sample attrition is of utmost importance to any longitudinal study. Many MIHOPE families are likely to be highly mobile, creating a risk of attrition at follow-up.


Several strategies will be adopted to mitigate the risk of attrition at the kindergarten follow-up:


  1. Implementing a multi-pronged tracing effort to minimize attrition from outdated contact information


We will use the detailed information collected in MIHOPE 1, MIHOPE 2, and MIHOPE Check-in (including names, dates of birth, Social Security numbers, addresses and phone numbers (home and work), and email addresses for the family, as well as addresses and phone numbers for up to three relatives or friends who will know how to reach the family) and employ Mathematica’s highly effective locating techniques to reach families.


Updating Participant Contact Information. Mathematica’s Sample Management System (SMS) will be the central clearinghouse for all contact information on MIHOPE families, and will also be used to track structured interview response rates. Contact between rounds of the structured interview will increase sample retention and reduce the level of effort needed to locate families. To reduce the loss of families between follow-up points, we plan to send families a study information packet that will contain a newsletter with updates about the study and a small token of appreciation such as a refrigerator magnet or book of sticky notes with the study’s name and toll-free number listed on it. Additionally, we will send a birthday card to each child on a yearly basis (Attachment 7) and a seasonal greeting (either in the winter or spring) (Attachment 7). Since the newsletter will contain information about ongoing study activities, the specific content has not yet been developed. However, an example of the format and likely topics that would be covered is included in Attachment 9.


If any updated contact information is provided after the mailing of the letters, postcards, birthday cards, or holiday cards, or they are returned from the post office with an updated address, we will document the new address for the family in the SMS and re-mail the materials to the updated address.


Locating Participants. Although the outlined strategies to track participants between follow-up rounds will likely result in lower attrition rates, additional techniques will be used to ensure a high response rate is achieved at each follow-up round from this mobile population. Mathematica has extensive experience conducting studies with mobile and hard-to-reach populations and has developed several techniques to locate these populations. Locating can be costly, depending on which methods are used. In general, mailing letters and receiving updated information via returned mail is less expensive than electronic database searches; electronic database searches are less expensive than locators calling neighbors or other contacts; and telephone locating is less expensive than in-person field locating. The least expensive methods (mailing and electronic locating) will be used before moving to more expensive methods (telephone and in-person locating). As preparations to conduct follow-up data collection get underway, the following process for locating participants will be employed: (1) multiple pre-field mailings, (2) in-house locating, and as needed, (3) field locating. All materials used for locating and contacting participants are included in Attachment 7.


  1. Pre-Field Mailings. Any letters or postcards that are returned to Mathematica with updated information will be re-mailed to the new address and the new address will be entered into the SMS. Families will then be sent an additional mailing, an invitation letter, directly before calling for the structured interview begins. (We will also send an email version of the letter.)

  2. In-House Locating. Custom database searches and telephone calls to contacts provided by the family during prior rounds of MIHOPE data collection will be conducted when the existing contact information we have for a family is not accurate and pre-field mailing does not yield an updated telephone number or address. Mathematica's specialized locating staff uses searchable databases, directory assistance services, reverse directories, and contacts with neighbors and community organizations to obtain current contact information. Mathematica's locating staff will also search the Web and social networks such as Facebook and Instagram to find sample member contact information.


  3. Field locating. Some families will not be locatable using in-house locating methods. These families will be assigned to field locators who will employ proven techniques for finding hard-to-find populations. For instance, field staff may approach neighbors residing in close proximity to the families' last known address or the contact persons provided during prior structured interview rounds. They will also rely on neighborhood resources such as local post offices, churches, bars, homeless shelters, or community centers as sources of information. Field staff will be trained not to reveal any private information about the participant to any informants, including the study's name or unique details about the study.


  2. Training telephone interviewers and field workers on techniques for building participant buy-in and converting caregivers to participation. Field staff and assessors will be trained to establish rapport with families so that they will have a positive impression of the study and be more willing to participate in the future.


  3. Utilizing multimodal reminders based on behavioral science principles. We plan to use the email addresses and cellular telephone numbers of participants collected in MIHOPE 1, MIHOPE 2, and MIHOPE Check-in to send email and text message reminders about the follow-up data collection during the fielding period.


  4. Providing incentives and tokens of appreciation, as discussed in Supporting Statement A.


  5. Providing a study web page to relay information about the study to participating families. (The website, included as Attachment 10, will also allow families to provide consent for the study to contact their child's teacher.)


B4. Tests of Procedures or Methods to be Undertaken


As part of MIHOPE-K, the study team used pretesting to identify revisions to be made to materials, procedures, and instruments for follow-up data collection. We identified 9 families with a child in kindergarten (including both English- and Spanish-speaking participants) in several locations (including New Jersey, Ohio, and South Carolina) and recruited them to pretest the structured interview. Six families also participated in the pretest of the direct assessments. The study team attempted to recruit participants who represent the diversity of the MIHOPE sample (including linguistic, ethnic, and racial diversity). We also had 4 kindergarten teachers in two locations (New Jersey and North Carolina) pretest the survey of focal children's teachers. Each of these groups received different measures, and no individual question was asked of more than 9 people.


The pretest included debriefings after the structured interview with caregivers and the survey of focal children's teachers to investigate caregivers' and teachers' understanding of the questions, the ease or difficulty of responding, and any questions or confusion they may have had. The pretest interviews and surveys were timed so that accurate estimates of the length of the interview and the survey could be obtained.


After completing the kindergarten data collection with the first set of MIHOPE families, we identified a number of updates to improve data collection efforts and submitted the proposed changes as a nonsubstantive change request (approved by OMB in August 2019). The updates included some minor changes to the following elements of our data collection: direct assessments of children; direct assessments of caregivers; videotaped caregiver-child interaction; structured interview with caregivers; caregiver contact materials; teacher contact materials; and parent website. Specifically, we reduced the length of the structured interview because it was taking slightly longer per family than originally estimated, and cut some of the direct assessments of children and caregivers to reduce burden; changed wording to improve flow and administration in the structured interview with caregivers, direct assessments of children, direct assessments of caregivers, and the videotaped caregiver-child interaction; and revised contact materials to further simplify the language, gain respondents' attention, and encourage them to participate, as well as added a few materials tailored to respondents who may be less likely to participate in data collection.


After completing the kindergarten data collection with the second set of MIHOPE families, we identified updates to facilitate the virtual option for the activities that had previously taken place in person (see Memo Justification for Nonsubstantive Change Request from September 2021).



B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


  • MDRC

  • Mathematica

  • Todd Little (Texas Tech University)

  • Nancy Geyelin Margie (OPRE/ACF)

  • Laura Nerenberg (OPRE/ACF)

  • Rachel Herzfeldt-Kamprath (MCHB/HRSA)

  • Alicia Vooris (MCHB/HRSA)

  • Caroline Dunn (MCHB/HRSA)



REFERENCES



Bowne, J. B., Yoshikawa, H., & Snow, C. E. (2017). Relationships of teachers' language and explicit vocabulary instruction to students' vocabulary growth in kindergarten. Reading Research Quarterly, 52(1), 7-29.


Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., Klebanov, P., Pagani, L. S., et al. (2007). School readiness and later achievement. Developmental Psychology, 43(6), 1428.


Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3).


Kitzman, H., et al. (1997). Effect of prenatal and infancy home visitation by nurses on pregnancy outcomes, childhood injuries, and repeated childbearing. Journal of the American Medical Association, 278(8), 644-652.


Quinn, D., & Polikoff, M. (2017). Summer learning loss: What is it, and what can we do about it? Washington, DC: Brookings Institution. Retrieved from https://www.brookings.edu/research/summer-learning-loss-what-is-it-and-what-can-we-do-about-it/


Reardon, S. F., & Portilla, X. A. (2016). Recent trends in income, racial, and ethnic school readiness gaps at kindergarten entry. AERA Open, 2(3), 2332858416657343.


Silva, M., & Cain, K. (2015). The relations between lower and higher level comprehension skills and their role in prediction of early reading comprehension. Journal of Educational Psychology, 107(2), 321.



1 The study was designed to enroll 5,100 families but enrolled 4,229 families because some sites fell short of their enrollment target. Of the 4,229 families enrolled, 2,111 were assigned to the treatment group and 2,118 were assigned to the control group.

2 A total of 3,373 participants completed the 15-month follow-up data collection activities: 3,318 (1,650 treatment, 1,668 control) completed the structured interview; 2,979 (1,484 treatment, 1,495 control) completed the in-home assessments; and 2,924 participants completed both the structured interview and the in-home assessments.

3 As described in Supporting Statement A and elsewhere in Supporting Statement B, the kindergarten in-home visits may be conducted virtually.

4 See materials presented at the September 21, 2015, meeting of the Secretary’s Advisory Committee on the Maternal, Infant and Early Childhood Home Visiting Program Evaluation: https://www.acf.hhs.gov/opre/resource/secretarys-advisory-committee-maternal-infant-early-childhood-home-visiting-evaluation-9-21-2015

5 Hill et al., 2008.

6 Bowne, Yoshikawa, & Snow, 2017.

7 Duncan et al., 2007; Silva & Cain, 2015.

8 Quinn & Polikoff, 2017.

9 Reardon & Portilla, 2016.

10 Kitzman et al., 1997.


