Head Start Evaluation of a Trauma-Informed Care Program



OMB Information Collection Request

0970 - NEW




Supporting Statement Part B – Statistical Methods

November 2021


Submitted By:

Office of Head Start

Administration for Children and Families

U.S. Department of Health and Human Services



SUPPORTING STATEMENT B – STATISTICAL METHODS


  1. Respondent Universe and Sampling Methods

The respondent universe of potentially eligible sites includes 792 Head Start Centers within Region V (parts of Illinois, Indiana, Michigan, and Wisconsin). This geographic constraint reflects the need for on-site implementation support from the intervention developer, which is based in Chicago. Based on our sampling criteria, we estimate that 72 sites would qualify for recruitment. Selection criteria were developed in collaboration with Regional partners and the intervention developers to enhance the likelihood that sites will be able to fully implement the program, with stable personnel and adequate resources. More specifically, we will only include sites that meet the following criteria:

  • Located within 100 miles of Chicago

  • Stable Center and key support personnel (no changes in the 2 years prior to recruitment and no upcoming transitions anticipated):

    • Head Start grantee since at least 2020

    • Same Center director since at least 2020

    • Education Manager or Coordinator since at least 2020

  • Interested and available coach who has an existing relationship with Center staff

  • No active Head Start regulation deficiencies

  • Not actively training on another program designed to impact teacher practices

  • At least three classrooms for 3- to 5-year-olds at the Center

  • Teacher turnover rate at or below the regional average (28% in 2019)


Of the 72 eligible sites, we expect that approximately 30 will be interested and apply to participate, of which we will select 10 based on the following considerations, which are intended to obtain a diverse sample and to explore implementation factors that may be related to outcomes:

  • Site readiness as defined by average scores on the Attitudes Related to Trauma-Informed Care (ARTIC) scale and the Trauma-Informed System Change Instrument (TISCI), completed by all potential teacher leads and assistants, administrators, and coaches at each site.

  • Type/intensity of coaching provided (e.g., practice-based, research model)

  • Number of hours per month a Mental Health Professional is onsite

  • Type of funding recipient (e.g., contract partners, delegated)

  • Number of years Head Start program director has been in their role

  • Type of program (school, community, etc.), full or half-day, and number of days/week

  • Urbanicity/state

  • Percentage of children who have received mental health consultation and who have an Individualized Education Plan for an identified disability


We will also consider race and ethnicity of both staff and children from applicant sites to ensure a sample that reflects regional socio-demographics.



  2. Procedures for the Collection of Information

Based on all the selection data reviewed, the UNC-CH research team will create a consensus rating of “readiness” for each site. Through this process, they will also identify the specific indicators on which to match sites (these cannot be determined a priori given the lack of data on potential variability). Randomization will be conducted by a UNC-CH statistician. Research staff at UNC-CH will provide the statistician with an anonymized list of site IDs, and the statistician will use PROC SURVEYSELECT in SAS (v9.4) to randomize matched programs to the high- or low-intensity RLR intervention.
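
For concreteness, the pair-matched assignment could be executed as in the following sketch, which selects one site per matched pair for the high-intensity condition. This is an illustrative outline only, not the statistician's actual code; the dataset and variable names (site_pairs, pair_id) are placeholders.

/* Randomly select one site per matched pair for the high-intensity
   condition; OUTALL keeps both sites in the output, with Selected = 1
   flagging the chosen site. Names are illustrative placeholders. */
proc surveyselect data=site_pairs out=assignments outall
                  method=srs n=1 seed=20220815;
    strata pair_id;    /* one stratum per matched pair */
run;

data assignments;
    set assignments;
    length condition $14;
    condition = ifc(Selected, 'high-intensity', 'low-intensity');
run;

Because selection is stratified by pair, each matched pair contributes exactly one site to each condition.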


Because we will be collecting PII (names and emails) to match participant data across time, and because we hope to contribute to generalizable knowledge, we will request written consent from teachers, coaches, and site directors in accordance with UNC-Chapel Hill Institutional Review Board (IRB) guidelines. Written consent forms will be sent to teachers and coaches in advance of a virtual meeting where key points and privacy protections will be reviewed and questions answered without the presence of the site director or any administrator/supervisor. We will emphasize that participation is voluntary and that no information they provide will be shared with their sites in an identifiable manner. We will ask teachers to return consent forms to us via email within one week indicating their interest (yes or no). If fewer than three teachers agree to participate, we will communicate to school administrators that their site is no longer under consideration but will not share the specific reason. In prior work, teachers have rarely declined, but we want to reduce any opportunity for coercion or negative consequences. A similar virtual consent process will be used with coaches. Site directors will be consented separately, following confirmation of site eligibility and of participation from teachers and coaches.

Although our sample of teachers is relatively small (n = 30), our repeated measures methodology significantly reduces variance in the estimate of treatment effects, thereby greatly enhancing power relative to a traditional pre-post design. Based on 10 Centers randomly assigned to two conditions, 4 teachers per center, and 100 EMA surveys per teacher (our primary outcome measure), and assuming an alpha of 0.05, power of 0.80, an ICC of 0.02, and 10% of the variance explained by covariates, we can detect effect sizes as small as 0.149. This is smaller than the difference we anticipate between the low- and high-intensity conditions, making the estimate conservative.
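
For reference, the general logic of this calculation can be illustrated with the standard design-effect approximation for clustered designs. This simplified sketch is illustrative only and is not the study's actual power model; in particular, it ignores the within-teacher covariance structure that the repeated-measures design exploits:

\[ \mathrm{DE} = 1 + (m - 1)\rho, \qquad N_{\mathrm{eff}} = \frac{N}{\mathrm{DE}}, \qquad \mathrm{MDES} \approx \left(z_{1-\alpha/2} + z_{1-\beta}\right) \sqrt{\frac{4\,(1 - R^2)}{N_{\mathrm{eff}}}} \]

With N = 10 × 4 × 100 = 4,000 observations, m = 400 observations per center, ρ = 0.02, and R² = 0.10, this yields DE = 8.98, N_eff ≈ 445, and MDES ≈ 2.8 × √(3.6/445) ≈ 0.25. That this crude approximation produces a larger minimum detectable effect than the stated 0.149 is consistent with the point above: modeling the repeated measures directly recovers precision that a simple design-effect correction does not.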


UNC-CH research staff will collect baseline data on all teachers at the beginning of the school year (August) prior to randomization and initiation of the intervention. Unless otherwise specified, data will be collected using secure web-based surveys (e.g., REDCap). UNC researchers will provide instructions for participants to complete evaluation activities and will monitor survey completion remotely and provide technical assistance as needed.

All data collection activities will be conducted virtually, except the classroom observations, and will be identical across randomization groups.


Primary Outcome Measures (administered at baseline and on four additional occasions across the school year: October, December, February, and April)

  • Ecological Momentary Assessment (EMA) survey: EMA is a methodology that uses brief, repeated self-report surveys to sample subjects’ current behaviors and experiences within their natural environment, an approach that provides a sensitive assessment of teacher practice change. The surveys (<5 minutes each) will be administered through a secure web-based phone application on the Ilumivu EMA Mobile platform. Teachers will be prompted by text to complete surveys four times per day for one school week at each of the five assessment periods (4 prompts/day × 5 days × 5 periods = 100 surveys per teacher), reporting on their practices during the previous hour (or day, as relevant). Questions ask about the four specific teacher practice change areas aligned with specific toolkit strategies to promote the primary TIC outcome areas: safe spaces, building relationships, supporting emotion regulation, and practitioner self-care.

Implementation Activities – recorded by intervention providers and coaches as activities are completed (at least monthly).

  • Coaching logs: Coaches will complete a checklist of topics discussed and activities completed for each meeting with their teachers, and will also report on the duration of each meeting and its format (i.e., virtual or in-person). This log includes Likert ratings assessing their perceptions of each teacher’s completion of essential toolkit activities. Open-ended questions will also assess implementation challenges and facilitators, such as perceived leadership support. Logs will be administered through a secure online data platform.

  • Classroom Observations1: Lurie staff will observe the classroom environment and rate its alignment with recommended Safe Spaces indicators using a measure they have developed that is sensitive to self-regulation activities and displays of materials that support social-emotional development in the classroom. Observers first conduct a “distraction-free” walk-through, then observe for 10-15 minute periods at least twice in a day, and rate each global indicator on a 0-3 scale.

  • Training and TA office report1: Lurie staff will record information on how often coaches and teachers at each site call them for TA and the topic of the request, as well as on staff attendance at trainings, Implementation Support meetings, and coaching sessions as an indicator of fidelity.

Baseline-Only Measures (August 2022)

    • Background Questionnaires – Participating teachers and coaches will complete a brief survey on their training, education, and years of experience.

Pre-Post Questionnaires (August 2022 and May 2023)

    • Attitudes Related to Trauma-Informed Care (ARTIC) – The ARTIC Scale is a valid measure of an individual’s attitudes towards trauma-informed care that assesses: (1) Underlying Causes of Problem Behavior and Symptoms, (2) Responses to Problem Behavior and Symptoms, (3) On-The-Job Behavior, (4) Self-Efficacy at Work, and (5) Reactions to the Work.

    • Professional Self-Care Scale (Dorociak et al., 2017). This is a 5-factor, 21-item scale designed to measure personal and professional self-care activities.

Post-Only Measures (May 2023)

    • Satisfaction measures (developed for this study) will be administered to all participating teachers and coaches (in different versions). Both versions will include questions about the acceptability of the approach, its perceived relevance to practice, and its utility for impacting students (rated on a 1-5 scale), as well as implementation barriers and facilitators and perceived leadership support.

    • Interviews and Focus Groups: At the end of the intervention, UNC researchers will conduct semi-structured virtual interviews or focus groups with all site administrators, TTA providers, and coaches, as well as a sample of teachers within each randomized group (with at least some teachers from each site).

      • Leadership Interviews: Interviews with Center directors and TTA providers will assess their perspectives on teacher implementation, any challenges and areas for improvement, fit and impact of the program, and other organizational and contextual factors they perceive as relevant.

      • Coach Interviews: Interviews with coaches will assess their perspectives on teacher implementation, fit and impact of the program, challenges and areas for improvement, and feedback on their role and the blended-learning TTA approach.

      • Teacher Focus Groups: Approximately 10-12 teachers will be randomly selected to participate in focus groups, with representation across sites and conditions. Focus group questions will assess teachers’ experiences with the intervention, including coaching, and its perceived benefits for their practice as well as for students. Focus groups will also include questions about the acceptability and perceived impact of virtual versus in-person coaching.



  3. Methods to Maximize Response Rates and Deal with Nonresponse

Research staff will provide instructions for participants to complete all evaluation activities and will monitor survey completion remotely, providing technical assistance as needed. To support collection of EMA data, a research assistant will review responses every two days and send texts to any teachers who skip more than one survey per day to problem-solve any difficulties.


Site application forms will ask how teachers will be supported with time and resources to participate in evaluation activities. In addition, site administrators will be asked to sign an MOU indicating their support of the project.


All measures included in this evaluation demonstrate adequate psychometric characteristics for the intended use. Qualitative data are not intended to be generalizable but will complement quantitative data by facilitating interpretation and generating hypotheses for future research.


To assess nonresponse bias, we will use Program Information Report (PIR) data to descriptively compare characteristics of sites selected for the study with those of sites that were eligible but did not apply.
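
As an illustration, this comparison could be produced with simple descriptive statistics over PIR-derived site characteristics; the dataset and variable names below are hypothetical placeholders:

/* Descriptively compare sites that applied/were selected (applied = 1)
   with eligible sites that did not apply (applied = 0) on
   PIR-derived characteristics. Names are illustrative placeholders. */
proc means data=pir_sites mean std maxdec=2;
    class applied;
    var funded_enrollment pct_iep teacher_turnover;
run;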

For key quantitative outcome measures (e.g., EMA surveys, classroom observations, and the ARTIC) that are designed to yield statistically interpretable results, we anticipate minimal missing data with our data collection approaches. Any missing data at the item or survey level will be addressed through joint multiple imputation, which has been identified as the preferred technique for small samples.2 If significant differences between groups are found at baseline on key pre-test and/or demographic variables, we will consider the use of weights or the inclusion of covariates.
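
As a sketch of the joint approach, the imputation step could look like the following; the MCMC method in PROC MI fits a joint multivariate normal model, which accommodates arbitrary missingness patterns. Dataset and variable names are hypothetical placeholders:

/* Joint (multivariate normal) multiple imputation for item- and
   survey-level missingness. Names are illustrative placeholders. */
proc mi data=teacher_outcomes nimpute=20 seed=20230601 out=mi_long;
    mcmc chain=multiple;
    var ema_mean artic_total selfcare_total;
run;

Each of the 20 imputed datasets would then be analyzed separately and the results pooled (e.g., with PROC MIANALYZE).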


The interviews, focus groups, and implementation data are not designed to produce statistically generalizable findings, and participation is wholly at the respondent’s discretion. We will qualitatively assess characteristics of nonrespondents, although based on previous work we anticipate that nonresponse in a study like this will be very low.



  4. Test of Procedures or Methods to be Undertaken

Implementation and qualitative measures (e.g., coaching logs, satisfaction surveys, interviews/focus groups) have been developed specifically for this study to obtain the relevant information. They are based on similar measures used by the UNC research team in other studies3 and may thus be considered to have been ‘pre-tested.’ Other questionnaires have been used in prior published work (e.g., ARTIC, Professional Self-Care Scale). The EMA survey and classroom observation measure have been piloted by Lurie as part of other, non-federal studies.


  5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Desiree W. Murray, PhD, Senior Research Scientist, Center for Health Behavior and Disease Prevention, UNC-Chapel Hill, (919) 923-2896 – designed the evaluation and will serve as Principal Investigator of the project, overseeing individuals collecting data and conducting analyses


Weilin Li, PhD, Senior Data Scientist, Child Trends, (240)-22-9200 – consulted on the statistical analysis plan


UNC School of Medicine Biostatistics Core, North Carolina Translational and Clinical Sciences Institute (919.966.6022) – will provide consultation on analytic methods to Dr. Murray



1 As cited in 5 CFR Ch. III (1-1-99 Edition), 1320.3: Definitions: Facts obtained through direct observation by an employee or agent of the sponsoring agency or through nonstandardized oral communication in connection with such direct observations are considered an exclusion to the definition of “information” under PRA regulations (5 C.F.R. 1320.3(h)(3)).

2 McNeish, D. (2017). Missing data methods for arbitrary missingness with small samples. Journal of Applied Statistics, 44(1), 24-39.

3 LaForett, D., Murray, D.W., Reed, J.J., Kurian, J., & Mills-Brantley, R. (2019). Delivering the Incredible Years® Dina Treatment Program in schools for early elementary students with self-regulation difficulties. Evidence-Based Practice in Child and Adolescent Mental Health. doi.org/10.1080/23794925.2019.1631723

Murray, D.W., Rabiner, D.L., Kuhn, L., Pan, Y., & Sabet, R. (2017). Investigating teacher and student effects of the Incredible Years® classroom management program in early elementary school. Journal of School Psychology. doi.org/10.1016/j.jsp.2017.10.004



