Alternative Supporting Statement for Information Collections Designed for
Research, Public Health Surveillance, and Program Evaluation Purposes
Supporting Youth to be Successful in Life (SYSIL) Study
OMB Information Collection Request
New Collection
Supporting Statement
Part B
March 2023
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officer: Mary Mueggenborg
Part B
B1. Objectives
Study Objectives
The goal of the Supporting Youth to be Successful in Life (SYSIL) study is to expand the evidence base on programs intended to prevent homelessness among youth and young adults with experience in the child welfare system through a summative evaluation of Colorado’s Pathways to Success (Pathways) comprehensive service model. Pathways is an intensive, coach-like case management model for youth and young adults who are or have been involved in foster care. The summative evaluation includes an impact study and an implementation study.
The Pathways impact study will provide evidence of program effectiveness on a large number of policy-relevant outcomes, including stable housing, education, employment, permanent connections to caring adults, and social-emotional well-being. It will show the effectiveness of Pathways at short- and long-term follow-up periods and estimate the extent to which the program is more or less effective for key subgroups. The study will also link features of program implementation (for example, dosage, quality, or adherence of the program delivery) to youth outcomes.
The Pathways implementation study will support interpretation of the model’s impacts on outcomes and identify factors that contributed to or inhibited implementation of Pathways services in different sites1; these findings will aid in the replication or improvement of future Pathways service delivery. The implementation study will systematically assess the different contexts in which Pathways is being implemented and the fidelity with which it is being implemented.
Generalizability of Results
The impact study is intended to produce estimates of the Pathways intervention’s impact in the subset of sites in Colorado participating in the study, not to promote statistical generalization to other sites in Colorado or to a broader population. The study will not randomly sample sites for inclusion in the evaluation, a necessary condition for findings to generalize beyond the subset of sites participating in the evaluation. The implementation study is intended to present an internally valid description of the implementation of the Pathways intervention in the selected sites, not to promote statistical generalization to other sites or service populations. The study does not require broader generalizability to be policy relevant: if Pathways is shown to be effective in the participating sites, Colorado may scale up its implementation, and other states may adopt Pathways.
Appropriateness of Study Design and Methods for Planned Uses
Both qualitative and quantitative data sources will be used to address the key research questions for SYSIL. Quantitative data will be used to understand the impact of Pathways on youth in selected sites. We will use qualitative data to understand what program implementation looks like and how Pathways differs from business-as-usual services in order to inform future efforts to prevent homelessness among this population. The Pathways implementation study data will be used to support interpretation of the model’s impacts on outcomes and identify factors that contributed to or inhibited implementation of Pathways services in different sites. Study reports will be made available to the public. Key limitations will be included in written products associated with the study.
As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.
B2. Methods and Design
Target Population
For the impact study, we will collect data for youth and young adults ages 14 to 23 in the participating sites who are currently in the foster care system or were previously in the foster care system and have at least one risk factor for homelessness.2 We expect to enroll approximately 700 youth and young adults into the study.
As part of the implementation study, the evaluation team will conduct interviews during site visits and focus groups. The target population for the interviews that will be conducted during the site visits includes leaders and their staff who either deliver the Pathways Program (in treatment sites) or comparison services (in comparison sites). We will also conduct two “check-ins” by telephone with program directors from 16 sites (all Pathways sites and comparison sites) to ask about current service delivery. We expect to conduct interviews with up to 30 participants in both the Pathways and comparison sites. Target respondents for the focus groups include youth and young adults participating in the study who are receiving Pathways or comparison services. We expect up to 50 participants for both the Pathways and comparison site focus groups.
Sampling and Site Selection
Pathways was piloted in an urban, a suburban, and a rural site in Colorado. For the SYSIL study, we are expanding that group of sites. When identifying sites to participate in the summative evaluation, the project team has tried to balance two goals: (1) to identify treatment and comparison sites that are well matched in terms of background characteristics and the variables expected to influence youth outcomes (for example, a baseline assessment of the outcomes of interest), and (2) to identify comparison sites that provided services substantively different from Pathways.
Within each site, eligible youth and young adults will be identified using the Pathways Screening Assessment, which will identify risk factors for homelessness and determine study eligibility among youth and young adults ages 14 to 23 who are currently in foster care. For the purposes of the impact study, we will build on this Pathways eligibility screening process to identify potential sample members in both treatment and comparison sites.3
Eligible youth and young adults will be invited to participate in the SYSIL study. Their assigned Chafee worker4 will describe the study opportunity and the benefits of participation and assure the youth that they will receive services even if they choose not to participate in the study. For the youth who express interest in participating, the Chafee worker will obtain consent and/or assent to enroll them in the study (Appendix A).
For the implementation study, we will select 6 of the 21 Pathways sites and 6 of the 16 comparison sites for the site visits. We will purposefully select sites that vary on a range of characteristics, including poverty, urbanicity, population without health insurance, proportion of homeless students, and proportion of youth and young adults who receive Chafee services.
Prior to conducting the site visits, we will establish a point of contact at each of the sites. We will work with the point of contact to learn about the site structure and select participants for the staff interviews. Potential participants include program leadership, supervisors, and Chafee workers who deliver services to youth. We will also work with the point of contact at each site to discuss the most effective approach for recruiting focus group participants. Through these discussions, we will emphasize the importance of recruiting participants with both high and low participation rates in the Pathways and comparison services. We plan to recruit enough youth and young adults for each focus group to provide a range of perspectives on their experiences with the Pathways and comparison services.
B3. Design of Data Collection Instruments
Development of Data Collection Instruments
Development of the youth survey began with a focus on the four main outcome domains: (1) housing, (2) permanent connections, (3) education and employment, and (4) social-emotional well-being. We worked with a group of stakeholders, including some YARH-2 grantees, to identify six additional outcome domains that emerged as potentially important and relevant to policy at the start of the SYSIL contract: (1) connections between youth and their peers, (2) involvement with the criminal justice system or juvenile justice system, (3) access to available system resources, (4) child welfare history and status, (5) readiness for independence, and (6) parenthood. We then explored various measures to operationalize these constructs and domains for the impact study, which resulted in a draft youth survey. Appendix B presents a list of the sources referenced in development of the youth survey.
We consulted with stakeholders on the draft survey before conducting a pre-test with six youth and young adults who had experience in the child welfare system. Pre-test participants included both males and females; they ranged in age from 15 to 21 years old. The youth and young adults completed hard copy surveys on their own and then participated in a virtual debriefing session with Mathematica staff. We used the pre-test to ensure that questions were understandable and that the language and terms used were familiar to respondents, as well as to identify typical instrumentation problems such as unclear question wording and incomplete or inappropriate response categories. We also used the pre-test to help measure the response burden. We made revisions to the survey based on pre-test participant feedback. The survey will be programmed into Confirmit, a web-based software application. Mathematica staff will thoroughly test the web survey prior to fielding it with youth.
To develop the interview and focus group discussion guides, Mathematica reviewed publicly available documents about the Pathways program to identify key topics of interest and tailor the guides accordingly.
Table B.1 presents a crosswalk between the data collection instruments and the study’s objectives.
Table B.1. Crosswalk Between Data Collection Instruments and Study Objectives
| Objective | SYSIL Youth Survey—Baseline Survey | SYSIL Youth Survey—Follow-Up Survey 1 (6 Months) | SYSIL Youth Survey—Follow-Up Survey 2 (12 Months) | SYSIL Youth Survey—Follow-Up Survey 3 (24 Months) | Interview Guide for Pathways Sites (Treatment Sites) | Program Director Check-ins for Pathways Sites (Treatment Sites) | Interview Guide for Comparison Sites | Program Director Check-ins for Comparison Sites | Focus Group Discussion Guide for Pathways Youth (Treatment Youth) | Focus Group Discussion Guide for Comparison Youth |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Objective 1: Provide evidence of program effectiveness on policy-related outcomes | x | x | x | x | | | | | | |
| Objective 2: Provide estimates to determine the extent to which the program is more or less effective for key subgroups | x | x | x | x | | | | | | |
| Objective 3: Create links from features of program implementation to youth outcomes | | x | x | x | x | | x | | x | x |
| Objective 4: Support interpretation of the model’s impact on outcomes | | | | | x | | x | | x | x |
| Objective 5: Assess Pathways implementation | | | | | x | x | x | x | x | x |
B4. Collection of Data and Quality Control
Impact Study
As previously mentioned, Chafee workers in the treatment and comparison sites will use the Pathways eligibility screening process to identify youth and young adults who are eligible for the study. Eligible youth and young adults will be invited to participate in the SYSIL study. Their assigned Chafee worker will describe the study opportunity and the benefits of participation and assure the youth and young adults that they will receive services even if they choose not to participate in the study. If the youth and young adults are interested in participating in the study, their Chafee worker will collect consent (or assent) and move to the first stage of data collection: completing the baseline survey on a phone or tablet.5 The baseline survey is a self-administered, web-based survey that youth and young adults will complete using a Mathematica-provided cell phone or tablet computer. If youth are unable to complete the survey on the phone, they will be given the option to call in to Mathematica’s Survey Operations Center and complete the survey with a trained interviewer over the phone. Chafee workers will also oversee administration of the 6-month follow-up survey to the youth and young adults following the same procedures as the baseline survey.
The 12- and 24-month follow-up surveys will not be overseen by Chafee workers. Instead, Mathematica will reach out directly to the youth via email, text, phone, and mail to invite them to complete the surveys on their own over the web (Appendix C). For the 12- and 24-month follow-up surveys, the youth will also have the option to complete the survey over the phone with a trained Mathematica interviewer. For hard-to-reach youth, trained local Mathematica staff will go into the field6 to locate youth in person and ask them to complete the survey either over the web or by phone.
To maintain updated contact information and continue to familiarize youth with the study, Mathematica has been reaching out to youth via text and email, using previously approved messages (Appendix C), to ask them to report any changes to their contact information. Moving forward, Mathematica will continue contacting youth via text and email at 9, 15, 18, and 21 months after enrollment, asking them to provide updated contact information through the contact information requests (see Appendix C and Instrument 5).
When enrolled, youth are grouped with other youth who enrolled around the same time. This group, or cluster, of youth is then used to send contact information reminders at 9, 15, 18, and 21 months after enrollment. As a separate sub-study within the broader Pathways impact evaluation, we will conduct a small randomized experiment to understand the effectiveness of incentivizing responsiveness to these contact information requests. Within both Pathways and comparison counties, we will randomly assign each cluster of youth (youth enrolling in the same month) to either an intervention condition (a $5 gift card for completing each of the 9-, 15-, 18-, and 21-month reminders, $20 in total) or a control condition (no gift card for the 9-month reminder, $5 for each of the 15- and 18-month reminders, and $10 for the 21-month reminder, also $20 in total). This randomized experiment will allow for a test of the effect of a $5 token of appreciation at the 9-month contact information reminder, while maintaining the goal of offering all youth the same overall level of tokens of appreciation in the study. The random assignment of youth to condition for this token of appreciation experiment will be documented within Mathematica’s secure systems.
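As an illustration of this cluster-level random assignment, the sketch below shuffles hypothetical enrollment-month clusters and splits them between the two conditions. The cluster labels and seed are invented; the actual assignment procedure and its documentation reside in Mathematica’s secure systems.

```python
# Minimal sketch of assigning enrollment-month clusters to conditions.
# Cluster labels are hypothetical placeholders.
import random

clusters = ["2023-01", "2023-02", "2023-03", "2023-04"]  # enrollment-month clusters
rng = random.Random(42)  # fixed seed so the assignment can be reproduced

rng.shuffle(clusters)
half = len(clusters) // 2
assignment = {c: ("intervention" if i < half else "control")
              for i, c in enumerate(clusters)}
print(assignment)
```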
Mathematica will train staff to collect consent and administer the surveys with evaluation-specific training materials, which will be developed in advance of data collection and updated as needed. Mathematica will provide staff with various tools throughout the study and periodically conduct refresher trainings as needed.
Implementation Study
Mathematica will conduct the interviews, focus groups7, and check-in calls. During the site visits, Mathematica will conduct one-on-one interviews with key staff and stakeholders, such as leaders of child welfare agencies and Chafee workers. Check-in calls will be with program directors to document the current services being offered to youth and young adults. Focus groups will be conducted with participating youth and young adults. The site visitors will record each interview, check-in, and focus group for transcription so that data are collected verbatim for analysis.
We will take several steps to ensure consistent, high-quality data collection across implementation study sites. Before conducting the site visits and check-in calls, we will provide training to all implementation study staff to review the implementation study’s research questions as well as interview pitfalls and best practices.
B5. Response Rates and Potential Nonresponse Bias
Response Rates
Generalizability
The surveys, focus groups, interviews, and check-ins are not designed to produce statistically generalizable findings – the findings will not generalize beyond the participating individuals and sites in the study.
Comparability
For the surveys, we will review and analyze response rates between treatment and comparison groups. We assume the following overall response rates: 90 percent for Follow-up 1 (6 months), 85 percent for Follow-up 2 (12 months), and 70 percent for Follow-up 3 (24 months). While we will track overall response rates, the focus will be on obtaining similar response rates between the treatment and comparison groups for comparability. For the follow-up surveys, we will conduct nonresponse follow-up via phone and field, as needed. Field staff will be trained on data collection and privacy procedures. These trained field staff will assist in achieving desired response rates by using locating efforts to find study participants so they can complete their surveys.
Nonresponse
Participants will not be randomly sampled, and findings are not intended to be representative. Demographics will be documented by respondent status and reported in written materials associated with the data collection.
B6. Production of Estimates and Projections
Data will not be used to generate population estimates, either for internal use or dissemination.
B7. Data Handling and Analysis
Data Handling
Impact Study
The youth surveys will be programmed with Mathematica’s Confirmit software. Error messages will be programmed into Confirmit to alert respondents to inconsistencies between data elements, values beyond the expected range, and similar issues. Respondents will have an opportunity to correct such errors before the data are submitted. Surveys completed over the phone will be completed with trained Mathematica staff, who will enter the responses directly into the web-based survey. The use of a web-based survey eliminates the need for an additional step for data entry, thus minimizing potential errors that may occur during that process.
Once a sufficient number of responses have been received, we will conduct an initial quality check to identify any potential issues with the data. Additional data quality checks will be conducted throughout the study.
The study participants will not be randomly sampled. In addition, the findings from this study are not intended to be generalizable to or representative of a broader target population. We will focus our analysis on the internal validity of the findings, rather than on the generalizability of the findings. As a result, we will not attempt to produce population estimates of program effectiveness by using survey weights and we will not estimate sampling error. However, we will estimate the standard error of the impact estimate, as described in the Data Analysis section below.
Implementation Study
After each round of site visits, we will systematically code the data using the components of the Pathways service model and the Consolidated Framework for Implementation Research (CFIR).
In the initial stages of coding, the members of the study team will together review the interview transcripts and code data. To ensure reliability across coders, each study team member will independently code a transcript. The team will then meet to compare codes applied to the transcript to identify and resolve discrepancies. The team will continue this process until consistency in the application of codes across coders is achieved.
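The study plan relies on this consensus-based reconciliation process. As an illustrative supplement only (not the stated method), intercoder agreement could also be quantified with a standard statistic such as Cohen’s kappa; the sketch below uses invented code labels.

```python
# Illustrative check of intercoder consistency using Cohen's kappa.
# The code labels applied by each coder are invented examples.
from sklearn.metrics import cohen_kappa_score

# Codes applied to the same transcript segments by two coders.
coder_a = ["dosage", "context", "dosage", "adaptation", "context"]
coder_b = ["dosage", "context", "adaptation", "adaptation", "context"]

print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```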
Information from the check-ins will be used to describe services offered to youth and young adults. We will not use CFIR to code the data.
Data Analysis
Impact Study
To examine the impact of Pathways on key outcomes—including but not limited to housing, educational attainment, employment, permanency, and well-being (Research Question 1)—we will estimate a regression model that includes an indicator of Pathways treatment status as well as all baseline characteristics used to assess balance, both to improve the precision of the impact estimates and to statistically adjust for any differences. Because assignment to the Pathways program is at the site level and our analyses will be conducted at the youth level, we will adjust the estimated standard errors for clustering in all models. This will allow us to estimate the appropriate standard errors and p-values for all inferential analyses.
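As an illustration, the following is a minimal sketch of such a model in Python, assuming hypothetical file and variable names (outcome, pathways, site, and baseline covariates x1 and x2); the actual specification will follow the study’s analysis plan.

```python
# Minimal sketch of the impact model with site-level cluster-robust
# standard errors. All names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")  # hypothetical analysis file

# Regress the outcome on Pathways treatment status plus baseline covariates;
# cluster the standard errors by site because assignment occurs at the site level.
model = smf.ols("outcome ~ pathways + x1 + x2", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["site"]}
)
print(model.summary())  # the coefficient on `pathways` is the ITT impact estimate
```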
Assuming the analytic sample for a given outcome satisfies the baseline equivalence requirements, this general analytic approach will provide unbiased estimates of two policy-relevant effects: (1) the impact of Pathways on the target population and (2) the impact of Pathways on program participants. The first estimates the impact of the offer to receive Pathways, also known as the intent-to-treat (ITT) impact estimate. The ITT estimate could be diluted because it could include youth assigned to the treatment group (that is, in a treatment site) who did not actually use Pathways services. The second estimates the impact for youth who actually participate in Pathways—the treatment-on-treated (TOT) impact estimate—which is calculated by dividing the ITT impact by the proportion of youth who participate in the program (Bloom 1984).
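In formula form, with $p$ denoting the proportion of treatment-site youth who participate in Pathways, the Bloom (1984) adjustment is:

$$\widehat{\text{TOT}} = \frac{\widehat{\text{ITT}}}{p}, \qquad 0 < p \leq 1$$

For example, an ITT impact of 4 percentage points with a 50 percent participation rate implies a TOT impact of 8 percentage points.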
To examine whether Pathways is particularly effective for key subgroups of the target population (Research Question 2), we will use the same approaches described above for the full analytic sample. We will estimate separate impacts for key subgroups, and potentially explore the intersection of two or more subgroups, such as race and sexual identity. Given that the sample for the subgroup analyses in Research Question 2 will be a subset of the full sample used to answer Research Question 1, the study will have reduced power to detect these impacts as statistically significant. We will assess whether impacts vary across subgroups by interacting subgroup indicators with the treatment status indicator (interaction models), then use an F-test to assess whether the subgroup differences are statistically significant.
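Extending the sketch above, the following is a hedged illustration of the interaction model and F-test, again with hypothetical variable names (here, subgroup is assumed to be a binary indicator).

```python
# Sketch of a subgroup interaction model; variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")  # hypothetical analysis file

# Interact treatment status with the subgroup indicator; the interaction
# term captures how the impact differs across subgroup levels.
model = smf.ols("outcome ~ pathways * subgroup + x1 + x2", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["site"]}
)

# F-test of the null hypothesis that the subgroup difference in impacts is zero.
print(model.f_test("pathways:subgroup = 0"))
```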
To examine whether features of Pathways implementation influence youth outcomes (Research Question 3), we will estimate the relationship between the Pathways program and related outcomes by first regressing each outcome of interest on the measures of Pathways implementation while adjusting for baseline characteristics that are likely to influence the outcome. In other words, this model will estimate whether youth with better or more exposure to the implementation have better outcomes (after adjusting for baseline characteristics as proxies for potential omitted variables that might produce bias in the observed relationship between implementation and outcomes). To get a reliable metric of implementation, we will start with the full set of implementation measures and use principal components analysis to identify a smaller set of implementation measures that capture much of the variability in implementation. Second, we will examine which individual components of implementation have the strongest relationship with the outcomes of interest. This approach will be comparable to the approach described above. However, instead of using Pathways implementation as a single predictor variable of interest, we will decompose the implementation into individual core components. We will have implementation data on features of Pathways, such as these key components: (1) the dosage and duration of regular case management meetings, (2) the types of goals that youth choose and the services offered to meet those goals, and (3) financial assistance youth may receive. After creating implementation measures for each of these key components, we will use them as separate predictors of participant outcomes. The benefit of this additional approach is that it will help us understand whether, for example, it was the case management or the financial assistance that had more influence on participant outcomes.
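As an illustration of the dimension-reduction step, the sketch below applies principal components analysis to a set of hypothetical implementation measures; the file and column names are placeholders, not the study’s actual measures.

```python
# Sketch of reducing a set of implementation measures to a few components.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

impl = pd.read_csv("implementation_measures.csv")  # hypothetical file
cols = ["dosage", "duration", "goal_services", "financial_assistance"]

# Standardize so no single measure dominates, then extract components.
scaled = StandardScaler().fit_transform(impl[cols])
pca = PCA(n_components=2)  # number of components chosen from explained variance
components = pca.fit_transform(scaled)
print(pca.explained_variance_ratio_)  # share of implementation variability captured
```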
In addition to estimating traditional inferential tests, we will offer a more nuanced interpretation of the results through a Bayesian presentation of the findings. We will report the Bayesian posterior probability—the probability that Pathways truly has positive (that is, favorable) impacts—given the observed impact estimates for each outcome. In doing so, we will be able to present results that say, for example, there is a 77 percent probability that Pathways has a favorable effect on participant outcomes—even if the inferential test shows that there is a nonsignificant difference in the average outcomes across conditions. The approach we recommend is described in more detail in Deke and Finucane (2019).
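For illustration only, the sketch below shows the kind of normal-normal conjugate update that underlies such a presentation; the prior parameters, impact estimate, and standard error are invented numbers, and the actual analysis will follow the BASIE framework of Deke and Finucane (2019).

```python
# Sketch of a Bayesian posterior probability of a favorable impact,
# assuming a normal prior and a normal likelihood. All numbers are invented.
from scipy.stats import norm

prior_mean, prior_var = 0.0, 0.10**2   # prior informed by evidence on similar programs
est, se = 0.15, 0.12                   # hypothetical impact estimate and standard error

# Normal-normal conjugate update: precision-weighted average of prior and estimate.
post_var = 1 / (1 / prior_var + 1 / se**2)
post_mean = post_var * (prior_mean / prior_var + est / se**2)

# Posterior probability that the true impact is favorable (greater than zero).
p_favorable = 1 - norm.cdf(0, loc=post_mean, scale=post_var**0.5)
print(f"P(impact > 0 | data) = {p_favorable:.2f}")
```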
To inform the prior distribution used for the Bayesian presentation of findings, we will draw on multiple sources of credible evidence on the effectiveness of programs that attempt to improve outcomes for a broad range of at-risk youth. This may include programs reviewed by the new Title IV-E Prevention Services Clearinghouse, dropout prevention programs from the What Works Clearinghouse, and potentially evidence on teen pregnancy prevention from the Teen Pregnancy Prevention Evidence Review sponsored by the U.S. Department of Health and Human Services.
To understand the effectiveness of a $5 token of appreciation for updated contact information, we will compare response rates for the contact information requests between the youth assigned to receive the token of appreciation at 9 months and those who were not.
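A minimal sketch of this comparison appears below, with invented counts; because randomization occurs at the enrollment-month cluster level, a final analysis would also need to account for clustering in the standard errors.

```python
# Sketch comparing 9-month contact-update response rates between the
# incentive and no-incentive arms. Counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

responded = [120, 95]   # responders: [intervention arm, control arm]
invited = [180, 175]    # youth sent the 9-month reminder in each arm

stat, pval = proportions_ztest(count=responded, nobs=invited)
print(f"z = {stat:.2f}, p = {pval:.3f}")
```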
The design and analytic approach for this study was registered at the Open Science Framework (OSF) under the title YARH-3: Building Capacity to Evaluate Interventions for Youth/Young Adults with Child Welfare Involvement At-Risk of Homelessness - Summative Evaluation.
Power analysis: Sample size and minimum detectable impacts
One of the goals of the impact study is to detect statistically significant impacts on youth outcomes, assuming that the comprehensive service model being tested is truly effective. We calculated the minimum detectable impact (MDI) and the minimum detectable effect size (MDES) for the proposed study, assuming the goal is to have 80 percent power and a two-tailed hypothesis test with α = 0.05.
We present MDIs and MDESs for two outcome categories: (1) a continuous outcome such as readiness for independence, based on ratings obtained from a survey; and (2) a dichotomous indicator of the incidence of an outcome, such as an episode of homelessness or a diagnosis of clinical depression, based on either survey or administrative data. See Table B.2 for examples of MDIs and MDESs for these two outcome measures at the immediate post-test and at the long-term follow-up.
Colorado inputs and MDIs
As inputs for this calculation, we assumed an expected sample enrollment of 700 youth. Among these youth, we expected an 85 percent response rate from baseline to the 12-month follow-up (an effective sample size of 595 youth). In addition, we expected a 70 percent response rate for the longer-term follow-up (24-month follow-up), for an effective sample of 490 youth. Therefore, we will primarily focus our power calculations on the immediate post-test estimates of program effectiveness at 12 months, because this will be our largest and most powerful test of the full dose of Pathways.
Focusing on the 12-month follow-up, the MDES is 0.25 standard deviations for continuous variables and 0.28 standard deviations for dichotomous outcomes. For dichotomous outcomes, MDIs are 11 percentage points (PP) relative to a 20 percent (or 80 percent) comparison group prevalence rate, or 14 PP relative to a 40 percent (or 60 percent) comparison group prevalence rate.
Table B.2. Minimum detectable impact and effect size calculations for proposed QED of Pathways
| | 12-month follow-up, continuous outcome | 12-month follow-up, binary outcome | 24-month follow-up, continuous outcome | 24-month follow-up, binary outcome |
| --- | --- | --- | --- | --- |
| MDES | 0.25 | 0.28 | 0.27 | 0.30 |
| MDI for a 20/80% prevalence rate | | 11 PP | | 12 PP |
| MDI for a 40/60% prevalence rate | | 14 PP | | 15 PP |
Note: For continuous outcomes, we assumed an individual-level R2 of 0.40. For dichotomous outcomes, we assumed an individual-level R2 of 0.15. These assumptions were based on a draft analysis of impact data from a cross-site evaluation of youth in the child welfare system whose parents or caregivers had a substance use disorder. We also assumed a group-level R2 of 0.50 for outcomes in both categories and an intraclass correlation coefficient of 0.02 across sites.
MDES = minimum detectable effect size; MDI = minimum detectable impact; PP = percentage points; QED = quasi-experimental design.
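To illustrate how the inputs in the note combine, the sketch below applies a standard two-level (youth-within-site) MDES approximation of the kind used in common power-analysis tools. The R-squared and ICC values come from the note above; the multiplier, number of sites, average site size, and treatment share are illustrative assumptions not stated here, so the output only approximates the values in Table B.2.

```python
# Illustrative MDES calculation for a youth-within-site design. The R-squared
# and ICC inputs come from the table note; M, J, n, and P are assumptions.
from math import sqrt

M = 2.8          # approx. multiplier for 80% power, two-tailed alpha = 0.05
icc = 0.02       # intraclass correlation across sites (from the note)
r2_ind = 0.40    # individual-level R-squared, continuous outcomes (from the note)
r2_grp = 0.50    # group-level R-squared (from the note)
J = 37           # assumed number of sites
n = 595 / J      # assumed average youth per site at the 12-month follow-up
P = 0.5          # assumed share of the sample in treatment sites

mdes = M * sqrt(
    icc * (1 - r2_grp) / (P * (1 - P) * J)
    + (1 - icc) * (1 - r2_ind) / (P * (1 - P) * J * n)
)
print(f"MDES is roughly {mdes:.2f} standard deviations")
```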
Effectiveness findings from comparable interventions
Few rigorous studies have evaluated the effectiveness of a coach-like, strength-based intervention for homeless youth or youth who are in or transitioning out of foster care (see Morton et al. 2020 for a systematic review of a broad set of interventions to address youth homelessness). We identified five studies with comparable service models and/or populations that we could use to anchor our estimates of the MDI (Powers et al. 2012; U.S. DHHS/ACF 2008; Valentine et al. 2015; Skemer and Valentine 2016; Theodos et al. 2016). The five studies examined populations and interventions similar to the proposed study. Estimated impacts ranged from 2 to 13 percentage points, or from 0.18 to 0.83 standard deviations. Judging from our review of these studies, we believe that the proposed QED impact study of Pathways may be sufficiently powered to produce statistically significant effects for some but not all outcomes, if recruitment and response rate targets are achieved. Next, we discuss an additional analytic approach for the impact study.
Additional analytic approach to complement the main design for the impact study
We propose an additional impact study design that uses administrative data sources to supplement the main impact study. Specifically, we will use a larger pool of potential sample members to expand the comparison group and conduct an analysis that obviates the need for a clustering correction. This change will address the chief limitation of the main study design: statistical power. However, this approach will only provide evidence about the effect of Pathways on the subset of outcomes that are available in administrative data.
Data source
This analysis will use administrative data. To boost the sample size and statistical power relative to the main study, the analysis will include additional sample members from a pre-intervention period and will add ineligible youth to the estimation strategy.
Difference-in-differences within a natural experiment
The timing of the introduction of Pathways in Colorado counties creates the appropriate circumstances for a natural experiment. Pathways was initially introduced in five counties in Colorado as part of the YARH-2 grant, in July 2016. We expect that Pathways will become available in additional expansion counties, starting in summer 2021 and continuing throughout the study period. Prior to these periods, youth in those counties did not have the opportunity to participate in Pathways. Because youth did not choose when Pathways would be introduced—analogous to an RCT in which youth cannot determine their treatment condition—the situation constitutes a natural experiment. Following the difference-in-differences approach used in Asheer and colleagues (2017), we will leverage this natural experiment to estimate the effectiveness of Pathways on a larger pool of youth than is possible in the main impact design. We will use administrative data for three years before the introduction of Pathways in a given site and three years after the introduction.
To estimate the impacts, we will use a three-step approach common in difference-in-differences estimation. The first step estimates the difference in outcomes among eligible youth in the post-Pathways period relative to eligible youth in the pre-Pathways period. This first difference would be the impact of Pathways, provided nothing else changed in treatment sites and in the environment of all youth when Pathways was introduced. However, this circumstance is unlikely, and such an estimate probably cannot be attributed solely to Pathways. To address this limitation, the second step calculates the change in outcomes in the post-Pathways period relative to the pre-Pathways period among ineligible youth, who should not be affected by Pathways but could be affected by other factors. The third step subtracts the second difference (the change in outcomes for ineligible youth) from the first (the change for eligible youth). The result is the difference-in-differences estimate of the impact of Pathways on youth outcomes.
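In equation form, with $\bar{Y}$ denoting a mean outcome:

$$\widehat{\text{DiD}} = \left(\bar{Y}^{\text{post}}_{\text{eligible}} - \bar{Y}^{\text{pre}}_{\text{eligible}}\right) - \left(\bar{Y}^{\text{post}}_{\text{ineligible}} - \bar{Y}^{\text{pre}}_{\text{ineligible}}\right)$$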
Our estimation strategy will be based on a linear regression approach, limited to those individuals who are well matched according to the propensity model. We will statistically adjust for available demographic characteristics in our regression model to account for potential changes in the characteristics of eligible youth that could bias our impact estimates. As with the main impact analysis, we will define the Pathways status in two ways. We will estimate the impact of the offer of Pathways based on all eligible youth in the post-Pathways period (ITT-like effect) and the impact among youth in the post-Pathways period who actually enroll in Pathways (TOT-like effect), because only a subset of eligible individuals in participating sites will actually be offered and receive the program. We will estimate the TOT-like impact by using the Bloom (1984) adjustment, dividing the ITT-like impact by the take-up rate to produce a credible TOT-like impact estimate.
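A minimal sketch of this estimation step appears below, assuming a matched analysis file produced by the propensity model and hypothetical variable names (post, eligible, and demographic covariates); the coefficient on the interaction term is the difference-in-differences estimate.

```python
# Sketch of the difference-in-differences regression on matched administrative
# data. File and variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

admin = pd.read_csv("matched_admin_data.csv")  # hypothetical matched analysis file

# `post` and `eligible` are 0/1 indicators; their interaction identifies
# the change for eligible youth net of the change for ineligible youth.
model = smf.ols("outcome ~ post * eligible + age + female", data=admin).fit(
    cov_type="HC1"  # heteroskedasticity-robust standard errors
)
print(model.params["post:eligible"])  # ITT-like DiD impact estimate
```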
Impact Study Reporting Approach
Given the large number of impacts to be estimated in this impact study, it is important to specify a plan for reporting and interpreting these impact findings. The main study design includes three follow-up survey assessment points; ten outcome domains (with multiple outcomes of interest within each domain); a plan to estimate both ITT and TOT impacts in the main study approach (and to conduct impact analyses using a difference-in-differences technique); and analyses to explore the heterogeneity of impacts across key subgroups. A prespecified reporting approach to distill the key findings and summarize conclusions from this large number of impact estimates will enable a focused interpretation of findings and mitigate concerns with multiple hypothesis tests.
The first aspect of our proposed reporting approach is to separate findings by assessment period and analytic approach. The second aspect of our approach is to prespecify a relatively small number of impact estimates as confirmatory tests in the registry at OSF. We will highlight those confirmatory tests in the main body of the report and use those test results to guide interpretation and conclusions about the effectiveness of Pathways. For the purposes of the impact study, we will use the full-sample, ITT analyses of one or two key outcomes per domain as the confirmatory tests.8 The results of this relatively small number of confirmatory hypothesis tests, along with the Bayesian interpretation of the findings, will guide the summary of the evidence of Pathways. We will integrate the information from both the traditional hypothesis tests and the Bayesian posterior probabilities associated with these tests when summarizing and interpreting the evidence for these confirmatory findings.
We will complement the reporting of the confirmatory tests by summarizing the TOT impact estimates and subgroup findings in the main body. These findings will help illustrate the populations for whom Pathways appears to work best, and the extent to which program take-up influences results. We will also summarize the results of the exploratory analyses for research question 3, which link features of implementation to participant outcomes. Importantly, we will not use nonconfirmatory results (the TOT or subgroup or exploratory findings linking implementation to outcomes) to draw conclusions about the effectiveness of Pathways, unless confirmatory tests corroborate the findings.
We will ensure a fully transparent presentation of all impact findings by including all nonconfirmatory test results in appendices to the reports. In doing so, the main report(s) will fully answer research questions 1, 2, and 3. However, the main body of the report(s), as well as the conclusions that guide interpretation for the report, will be based on the small set of confirmatory tests outlined above.
Implementation Study
For the implementation study, we will analyze qualitative data from visits to Pathways sites to describe factors that either contributed to or inhibited implementation, as well as youth and young adults’ responsiveness to Pathways. We will analyze qualitative data from visits and check-ins to comparison sites to describe the services offered to youth and young adults. We will use a template analysis approach to code and organize the data collected during the site visits. Template analysis uses a coding template (or codebook) to balance the structure involved in using a framework for data analysis with the flexibility necessary to adapt the codebook to the study context (King 2012).
To analyze the coded data, we will generate reports from NVivo for each collaborating site in the implementation study. These reports will include all the data segments coded for each combination of model component and CFIR code. We will develop analytic summaries for each combination of model component and CFIR code for each of the six Pathways sites and determine whether the CFIR constructs exerted negative, positive, or neutral influence on implementation. We will then populate analytic matrices with these summaries for cross-case analysis of patterns of barriers and facilitators related to each model component (Miles and Huberman 1994). This approach supported our coding and analysis in the YARH-2 process study, and we found that it balanced the structure of a framework to guide data analysis with the flexibility necessary to adapt our coding and analysis to the study context.
Data Use
We will develop a final report that outlines the short- and long-term impacts of the Pathways comprehensive service model. There may also be additional dissemination products generated for public use (such as briefs, interim reports, and infographics). There will be an archived data set for the impact study, likely with the National Data Archive on Child Abuse and Neglect (NDACAN).
B8. Contact Person(s)
ACF
Mary Mueggenborg, [email protected]
Catherine Heath, [email protected]
Mathematica
M.C. Bradley, [email protected]
Russell Cole, [email protected]
Menbere Shiferaw, [email protected]
Melissa Thomas, [email protected]
Nickie Fung, [email protected]
Liz Clary, [email protected]
Attachments
Instrument 1: SYSIL Youth Survey (Baseline and Follow-Ups 1-3)
Instrument 2: Interview Guide
2a: Interview Guide (Treatment Sites)
2b: Interview Guide (Comparison Sites)
Instrument 3: Program Director Check-ins
3a: Program Director Check-ins (Treatment Sites)
3b: Program Director Check-ins (Comparison Sites)
Instrument 4: Focus Group Guide
4a: Focus Group Guide (Treatment Youth)
4b: Focus Group Guide (Comparison Youth)
Instrument 5: Contact Information Update Request
Appendix A: Consent and Assent Forms
Appendix B: List of Surveys Referenced
Appendix C: Emails and Text for Outreach to Youth
Appendix D: One-page Informational Documents for Implementation Study
References
Bloom, H.S. “Accounting for No-Shows in Experimental Evaluation Designs.” Evaluation Review, vol. 8, no. 2, 1984, pp. 225–246.
Deke, J., and M. Finucane. “Moving Beyond Statistical Significance: The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations.” OPRE report 2019-35. Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2019.
King, N. “Doing Template Analysis.” In Qualitative Organizational Research: Core Methods and Current Challenges, edited by G. Symon and C. Cassell. London: Sage, 2012.
Miles, M.B., and A.M. Huberman. “Qualitative Data Analysis: An Expanded Sourcebook,” 2nd ed. Thousand Oaks, CA: Sage Publications, 1994.
Morton, M.H., S. Kugley, R. Epstein, and A. Farrell. “Interventions for Youth Homelessness: A Systematic Review of Effectiveness Studies.” Children and Youth Services Review, vol. 116, September 2020, p. 105096.
Powers, L.E., S. Geenen, J. Powers, S. Pommier-Satya, A. Turner, L.D. Dalton, D. Drummond, and P. Swank. “My Life: Effects of a Longitudinal, Randomized Study of Self-Determination Enhancement on the Transition Outcomes of Youth in Foster Care and Special Education.” Children and Youth Services Review, vol. 34, no. 11, 2012, pp. 2179–2187.
Skemer, M., and E.J. Valentine. “Striving for Independence: Two-Year Impact Findings from the Youth Villages Transitional Living Evaluation.” New York: MDRC, 2016.
Theodos, B., M.R. Pergamit, A. Derian, S. Edelstein, and A. Stolte. “Solutions for Youth: An Evaluation of the Latin American Youth Center’s Promotor Pathway Program.” Washington, DC: Urban Institute, 2016.
U.S. Department of Health and Human Services, Administration for Children and Families (HHS/ACF). “Evaluation of the Life Skills Training Program: Los Angeles County.” Washington, DC: HHS/ACF, 2008.
Valentine, E.J., M. Skemer, and M. Courtney. “Becoming Adults: One-Year Impact Findings from the Youth Villages Transitional Living Evaluation.” New York: MDRC, 2015.
1 A “site” is a service unit that may include one or more counties. The 37 counties form 15 sites (9 intervention and 6 comparison) due to coordination of services across small adjacent counties.
2 Youth eligibility will be determined through the Pathways Screening Assessment, which will identify risk factors for homelessness and determine eligibility among youth ages 14 to 20 who are currently in foster care. This assessment is administered to youth by their case managers and is part of the regular service enrollment procedures; it is not part of the SYSIL study. For the purposes of the SYSIL study we will use the established Pathways eligibility screening process to identify potential sample members in both the treatment and comparison sites.
3 The screening assessment is not specific to the SYSIL study and is not included as part of this Information Collection Request. Chafee workers in treatment and comparison sites administer the same screening assessment with all potentially eligible youth. This consistency in approach and instrumentation will help minimize the threat of differential screening results across conditions.
4 We use the term “Chafee worker” throughout in reference to the Chafee Foster Care Independent Living program workers. These are state employees who assist youth and young adults in emancipating from the foster care system. Chafee workers in the treatment sites will be trained to be Pathways Navigators, who will use coach-like strategies to engage with youth and young adults. Chafee workers in the comparison sites will not be trained as Pathways Navigators during the impact study; in those sites, youth and young adults will receive business-as-usual services.
5 For the purposes of this design, we assume that the child welfare agency will provide consent for youth younger than age 18 whose parent or guardian consent cannot be obtained. We will first contact parents or guardians to obtain consent for youth younger than age 18; where we cannot reach them, we will work with the child welfare agency to obtain consent.
6 Field work will be conducted in accordance with public health protocols related to COVID-19.
7 Mode of administration for the interviews and focus groups will be determined based on existing public health guidelines concerning COVID-19 at the time of data collection.
8 If there are multiple key outcomes of interest in a domain, and selecting one or two is infeasible, we will develop a composite outcome that pools information across all measures within the domain to be used as the confirmatory test.