Evaluation of the Family Unification Program

OMB: 0970-0514


Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes




Evaluation of the Family Unification Program



OMB Information Collection Request

0970-0514





Supporting Statement

Part B



AUGUST 2021










Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officer:

Kathleen Dwyer, Ph.D.


Part B


B1. Objectives

Study Objectives

The primary objective of this study is to test, through a rigorous evaluation, the impact of the Family Unification Program (FUP) on the child welfare involvement of families in six sites across the country. FUP aims to improve child welfare outcomes of homeless and unstably housed child welfare-involved families by providing them with permanent Housing Choice Vouchers. This information collection request (ICR) is for a three-year extension without change of a currently approved information collection (OMB No. 0970-0514). The study has two main components that support this objective: an impact study to assess the impact of FUP on the outcomes of interest and an implementation study to describe how FUP is implemented in each of the sites. The specific research questions for the two components are outlined in SSA section A2.


Generalizability of Results

Impact Study

This randomized study is intended to produce internally valid estimates of the intervention’s causal impact in the chosen sites, not to promote statistical generalization to other sites or service populations. The chosen sites are Bucks County, PA; Chicago, IL; King County and Seattle, WA; Phoenix, AZ; Orange County, CA; and Santa Clara County, CA. The results of this study will be generalizable to these sites, though the small number of sites will limit generalizability beyond them. The design is limited in that findings may not hold for other PHAs where families have different demographic characteristics or where the policy or service context differs from that of the six sites included in the study. Lack of generalizability will be acknowledged in reports of the study’s findings.


Implementation Study

This implementation study is intended to present an internally valid description of the implementation of FUP in the six sites, not to promote statistical generalization to other sites or service populations. The results of the implementation study are not designed to be representative of or generalizable to all providers or families who obtain housing through FUP vouchers in the six study sites but are intended to reflect variation in stakeholders’ experiences. The design is limited in that it will not capture every potential stakeholder, and each stakeholder’s participation is voluntary. Important information needed to answer the study’s research questions may not be collected if stakeholders decline to participate. Limitations of the qualitative study design and lack of generalizability will be acknowledged in reports of the study’s findings.


The planned study design combining a quantitative impact study with a qualitative implementation study is the best approach for obtaining the information OPRE needs to better understand the impact of FUP, how the program is currently being delivered, and how implementation varies across sites. Despite these generalizability limitations, ACF and HUD may use these results to improve the FUP program. ACF will use the results to contribute to the evidence base on whether and how housing helps families in the child welfare system. The results from this study will inform HUD about ways to improve the program. Furthermore, ACF and HUD may use the results to identify how to serve unstably housed families in the child welfare system.


Appropriateness of Study Design and Methods for Planned Uses

Impact Study

The use of an RCT study design will allow us to learn about the causal impacts of FUP and whether families who receive FUP vouchers experience better child welfare outcomes than those who receive services as usual. Public Child Welfare Agency (PCWA), Homeless Management Information System (HMIS), and Public Housing Authority (PHA) administrative data, along with program data from the referral form, will be used to ensure that randomization produced groups that are equivalent on key baseline measures. The project team will use PCWA administrative data and HMIS data to measure the outcomes in the research questions listed in section A2.


Implementation Study

The accompanying implementation study will allow us to learn about the implementation of FUP, how the context for families experiencing homelessness and housing instability in each site influences implementation, and how FUP compares to other services families may receive in each site. Special attention will be paid to determining the differences in FUP implementation across sites and the extent to which sites are implementing the same program model. The three site visits currently planned for each site will allow the project team to interview PCWA administrators and managers, PHA administrators and managers, and administrators of local referral and service providers. The project team will also hold focus groups with frontline staff from the PCWA, the PHA, and any referral or service providers. These interviews and focus groups will allow us to gain a detailed understanding of the different aspects of FUP’s implementation and the housing and service context in each site.

Data from the impact and implementation studies are not intended to be representative of all families eligible for all FUP programs. As such, findings from this study are not generalizable to all FUP programs. Key limitations will be included in written products associated with this study. As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.


B2. Methods and Design

Target Population

Impact Study

The target population for the impact study includes all families identified as referred for FUP. The FUP program serves families involved in the child welfare system whose lack of adequate housing is a primary factor in the imminent removal of a child from the household (preservation families) or for whom housing is a barrier to reunification (reunification families). FUP also serves youth transitioning from foster care who do not have adequate housing; however, this population is not the focus of this evaluation. Families must meet the minimum criteria for voucher eligibility: the family has inadequate housing, has an income below 30 percent of the area median income, has no adult sex offender living in the household, and has no adult living in the household who has been convicted of producing methamphetamines in public housing. Individual public housing authorities and child welfare agencies may have additional eligibility criteria around criminal history or housing history (e.g., no felonies in the last 3 years or the family does not owe arrears to the housing authority). The unit of analysis for the primary outcomes of interest, reunification and preservation, is the family. The study will include all families referred to FUP in the six study sites. However, the sites themselves have been purposively selected, as discussed below (see Sampling and Site Selection). As of August 2021, 822 families have been randomized into the study across the six sites, and we anticipate an additional 200 families will be randomized across the two sites still randomizing.

Implementation Study

The target population for the implementation study includes child welfare agency, public housing authority, continuum of care, referral provider, and service provider administrators and staff, as well as parents who have obtained housing with FUP vouchers. For administrator and staff interviews and focus groups, the sampling frame will consist of the roster of staff. For parent interviews, the sampling frame will consist of parents who obtained housing with FUP and signed a consent form agreeing to be contacted for an interview. The unit of analysis is the individual, and the project team will use purposive sampling to select respondents in each category who are recommended by program and agency administrators. This will allow the project team to speak with respondents whose roles and experience give them the information needed to answer the research questions. The Public Housing Authority will ask parents to consent to sharing their contact information so that interviews can be scheduled. We will randomly select families to participate in these interviews from among those who consented. The families will not be representative of the population of families who obtain housing with FUP because not all families will consent.



Six sites are included in the current study. As of August 2021, the project team has completed interviews with 31 administrators and managers and focus groups with 60 frontline staff. We will interview or conduct focus groups with up to an additional:

  • 23 Administrators and managers

  • 186 Frontline staff

  • 72 Parents


Sampling and Site Selection

Approximately 242 PHAs have administered FUP vouchers at some point. However, many of these sites do not have any vouchers available and therefore were not appropriate candidates for random assignment. In April 2018, HUD released the first Notice of Funding Availability (NOFA) for FUP since 2010. In October 2019, HUD released an additional NOFA for FUP. From the list of awardees of each of these NOFAs, the evaluation team selected sites that received at least 40 vouchers and planned to distribute at least 40 of these vouchers to families. For cost reasons, we did not include any sites that planned to distribute fewer than 40 vouchers to families. The evaluation team assessed sites on these criteria using information from HUD’s award announcement and available information from sites’ applications. Evaluation sites were then randomly selected from this initial eligible list. Once potential sites were selected, the evaluation team held site-specific follow-up conversations with the PHA and PCWA agency heads, or their designees, to gather information about their FUP programs. These conversations used the guide for recruitment with PHA and PCWA administrators (appendix F). In particular, the information gained from these conversations was used to assess each site’s ability and willingness to participate in an RCT. We used two primary criteria to evaluate whether a site could support an RCT:

  • Size of the eligible population. Based on a site’s estimate of its eligible population, a site must have enough FUP-eligible families expected within twelve months of project implementation to provide a reasonably sized control group while utilizing all of its vouchers.

  • Referral process. The site had to have a referral process, or be willing to adopt one, that allowed for appropriate randomization of families to the treatment or control group. If the PCWA had an existing waitlist, it had to be willing to reassess each family’s housing status and randomly assign those on the waitlist.



If a site did not meet all the criteria or chose not to participate, we randomly selected a replacement site to reach out to. At the end of this process, five awardees from the 2018 NOFA and two awardees from the 2019 NOFA agreed to participate in the study. Two of the awardees, King County Housing Authority and Seattle Housing Authority, are treated as one site because they partner with the same PCWA and operate as one program.



Impact Study

The impact study sample includes all families referred for FUP vouchers and randomized in each of our six selected sites.



Implementation Study

The implementation study sample includes a sample of administrators, staff and families from each site. This sample includes a selected sample of agency management, all the FUP managers, a selected sample of frontline staff recruited by the FUP managers, and a random sample of families who obtained housing using FUP and consented to be contacted for interviews.



B3. Design of Data Collection Instruments

Data collection is taking place in the form of phone interviews, three site visits, program data collection, and administrative data collection. Table A1 in Supporting Statement A summarizes all of the data collection that will be conducted for the evaluation and Table B1 below includes a crosswalk between the appropriate research question and each instrument.



Table B1. Data Collection Instruments and Research Questions Crosswalk

Instruments: Guide for Implementation Study for PCWA Management; Guide for Implementation Study for PHA Management; Guide for Implementation Study for CoC Management; Guide for Implementation Study for Referral Provider Administrators; Guide for Implementation Study with PCWA FUP Management; Guide for Implementation Study for PHA FUP Management (Second); Guide for Implementation Study Focus Groups for PHA Frontline Workers; Guide for Implementation Study for Parents; Guide for Implementation Study Focus Groups with Frontline Workers; Guide for Implementation Study for PCWA FUP Management (Third); Guide for Implementation Study for Service Provider Management; Housing Status Form; Referral Form; Randomization Tool; Housing Assistance Questionnaire; Ongoing Services Questionnaire; Dashboard; Administrative Data List.

Each research question below is followed by the instrument(s) used to answer it.

Primary impact study research questions (answered using the Administrative Data List)

  • Do FUP vouchers improve child welfare outcomes?
  • Does FUP reduce the probability that a child is removed and placed into out-of-home care (removal)?
  • Does FUP increase the probability that a child in out-of-home care is reunified with the child’s family? Does FUP decrease the time to reunification?
  • Does FUP reduce the number of new reports of child maltreatment?

Supplemental impact study research questions (answered using the Administrative Data List)

  • Does FUP increase the probability that a child welfare case will be closed?
  • Does FUP decrease the amount of time a child welfare case is open?
  • Does FUP reduce emergency homeless shelter stays?

Implementation study questions

  • Which families are targeted by the public child welfare agency for FUP? (Referral Provider Administrator guide; PCWA FUP Management guide; Frontline Worker focus groups; Housing Status Form; Referral Form)
  • How is the public child welfare agency identifying eligible families? (Referral Provider Administrator guide; PCWA FUP Management guide; Frontline Worker focus groups; Housing Status Form)
  • What types of services are provided along with the FUP housing subsidy? (Frontline Worker focus groups; PCWA FUP Management guide (Third); Service Provider Management guide; Randomization Tool)
  • Which agency provides these services? (PCWA FUP Management guide (Third); Service Provider Management guide; Randomization Tool)
  • What is the nature and frequency of the services? (Frontline Worker focus groups; PCWA FUP Management guide (Third); Service Provider Management guide; Randomization Tool)
  • What data are the public housing authority and public child welfare agency collecting as part of the FUP program? (CoC Management guide; PCWA FUP Management guide; PHA FUP Management guide (Second); Service Provider Management guide)
  • How is the partnership between the PHA, the PCWA, and the CoC structured? (PCWA Management guide; PHA Management guide; CoC Management guide; PCWA FUP Management guide; PHA FUP Management guide (Second); Frontline Worker focus groups; PCWA FUP Management guide (Third))
  • What are the major implementation challenges and key facilitators to successful implementation of the model? (PCWA FUP Management guide; PHA FUP Management guide (Second); PHA Frontline Worker focus groups; Frontline Worker focus groups; PCWA FUP Management guide (Third))
  • What share of families who receive FUP vouchers sign a lease and maintain their housing? (Housing Assistance Questionnaire)
  • What are the barriers and facilitators to a family signing a lease and to maintaining their housing? (PHA FUP Management guide (Second); PHA Frontline Worker focus groups; PCWA FUP Management guide (Third))
  • What are the relevant aspects of the local demographic, housing, economic, and service environment? (PCWA Management guide; PHA Management guide; CoC Management guide; PCWA FUP Management guide; PCWA FUP Management guide (Third))
  • How do these relevant aspects shape the FUP program in each site? (PCWA Management guide; PHA Management guide; CoC Management guide; PCWA FUP Management guide; PCWA FUP Management guide (Third))
  • How do families experience FUP? (Parent guide)
  • Which families benefit most from the program and under what conditions? (Parent guide)
  • How do differences across sites in each aspect of their FUP models (target population, identification process, partnerships, housing assistance, case management, support services, and local context) relate to possible outcome differences across sites? (all eleven interview and focus group guides)
  • How has the COVID-19 pandemic impacted the structure, implementation, or coordination of services? (all eleven interview and focus group guides)


Development of Data Collection Instruments

Impact Study

Administrative data (Instrument 18):

We will collect administrative data from three sources: the PCWA’s administrative data system, the PHA’s administrative data system, and the CoC’s Homeless Management Information System (HMIS). The list of data elements to be collected from each agency is outlined in the administrative data list (instrument 18). This list was developed based on past impact studies of FUP (Pergamit et al. 2017), supportive housing for child welfare involved families (Cunningham et al. 2016), and supportive housing (Cunningham et al. 2016).



Implementation Study

Preliminary calls

In the first two weeks after awards were made through the 2018 NOFA and 2019 NOFA, we conducted phone interviews to collect information relevant to site selection and recruitment (appendix F) and evaluation planning (appendix G and appendix H). The phone interviews focused on understanding how an evaluation could be integrated into the site’s FUP model. These protocols were developed based on past evaluability assessment protocols used in an evaluation of supportive housing for child welfare involved families (Cunningham et al. 2014). Prior to each phone conversation, the awardee’s application, publicly available information, and notes from past phone conversations were reviewed to avoid asking unnecessary questions. These materials are no longer in use, and associated burden has been removed from the burden estimates in Supporting Statement A.



Interviews and Focus Groups (Instruments 1-11)

The implementation study guides were developed based on a past implementation study of FUP (Cunningham et al. 2015) and a past implementation study and in-depth parent interviews from an evaluation of a supportive housing program for child welfare involved families (Cunningham et al. 2014). The project team did not pilot the discussion guides. For interviews with administrators and staff, interviewers reviewed past phone conversations and information provided by the site to avoid asking unnecessary questions. The implementation study relies on triangulation: stakeholders are asked about similar topics to give a full picture of the questions the project team is attempting to answer.



Program data collection (Instruments 12-17)

The evaluation team will use a variety of program data to understand each site’s program model. We plan to have caseworkers across all sites complete two forms as part of program operations. The housing status form (instrument 12) collects the family’s housing status, and the referral form (instrument 13) collects information on the family being referred, including a household roster, housing status, and child welfare status. The housing status form and referral form were developed based on referral forms used by FUP sites studied by Cunningham et al. (2015) and a past implementation study of supportive housing for child welfare involved families (Cunningham et al. 2014). The randomization tool (instrument 14) is an online system that PCWA staff use when a family is referred to FUP. This randomization tool was developed based on a past random assignment study of supportive housing for child welfare involved families (Cunningham et al. 2016). The dashboard (instrument 17) collects information on how the family moves through the referral and leasing process, including key dates such as the referral date, voucher issuance date, and lease signing date. The dashboard was developed based on a dashboard used for a past impact study of supportive housing for child welfare involved families (Pergamit et al. 2016) and a past evaluation of supportive housing (Cunningham et al. 2016). The housing assistance questionnaire (instrument 15) and ongoing services questionnaire (instrument 16) collect information on which services the family is receiving through FUP. These are newly developed instruments based on the application evaluation criteria in the 2018 FUP NOFA and services observed in FUP sites in Cunningham et al. (2015).



B4. Collection of Data and Quality Control

Impact study

The administrative data will be pulled from existing administrative records by a data administrator at each agency. The data will be transferred in electronic form (e.g., CSV or Excel files) via Secure File Transfer Protocol to the Urban Institute. To ensure quality and consistency in the data collection, this activity will consist of four components for each agency. The first is a site-specific conversation with the staff most familiar with the data to understand what data are available and their structure and quality. During this conversation, we will establish a timeline and procedures for transferring the data to the evaluation team. The second component is the first data pull, which will occur one year after the last family is randomized. This first pull will require the data administrator at the agency to identify families randomized in the study and extract the relevant data elements. The PCWA should have a list of families randomized in the study; however, the PHA and the CoC may need the PCWA to share the list of families to identify them in their data. Data will be checked for completeness (e.g., ensuring that all randomized families are present in the PCWA data, that all leased-up families are present in the PHA data, and that all requested variables are included) and quality (e.g., ensuring there are no inconsistencies within or across data sets). The third component is a follow-up conversation with the data staff to answer any questions or address any concerns arising from the first pull. The fourth component is the second and final data pull, which will occur two years after the last family is randomized.
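The completeness checks described above can be sketched in code. This is an illustrative example only: the record layout and field name (family_id) are hypothetical, not the agencies’ actual schemas.

```python
# Illustrative sketch of the completeness checks on a transferred extract.
# The "family_id" field and record layout are hypothetical.

def check_completeness(randomized_ids, pcwa_records, pha_records, leased_ids):
    """Return a list of human-readable data-quality problems."""
    problems = []

    # Every randomized family should appear in the PCWA extract.
    missing_pcwa = set(randomized_ids) - {r["family_id"] for r in pcwa_records}
    for fid in sorted(missing_pcwa):
        problems.append(f"family {fid} missing from PCWA data")

    # Every leased-up family should appear in the PHA extract.
    missing_pha = set(leased_ids) - {r["family_id"] for r in pha_records}
    for fid in sorted(missing_pha):
        problems.append(f"leased family {fid} missing from PHA data")

    return problems

randomized = ["A1", "A2", "A3"]
leased = ["A1", "A3"]
pcwa = [{"family_id": "A1"}, {"family_id": "A2"}, {"family_id": "A3"}]
pha = [{"family_id": "A1"}]

print(check_completeness(randomized, pcwa, pha, leased))
# → ['leased family A3 missing from PHA data']
```

A similar set-difference check can flag requested variables that are absent from an extract before the follow-up conversation with agency data staff.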



Implementation study

Staff interviews, staff focus groups and family interviews

Urban Institute staff conduct all interviews and focus groups across the sites. Two staff members, one junior and one senior, are and will continue to be involved in each interview or focus group. For staff interviews, Urban Institute staff reach out directly to staff members at each site. For staff focus groups, Urban Institute staff ask the FUP program manager at each agency to identify frontline staff who referred or worked with FUP families and recruit them for the focus groups. For the in-depth interviews with parents, families are randomly selected from the pool of families who signed a lease and consented to be contacted for an interview. Interviews are conducted either in person or virtually. Prior to each site visit, we hold an interviewer training for Urban Institute staff. With respondents’ permission, the evaluation team records the interviews and focus groups to ensure that what is said is captured accurately. Interview notes and transcripts are reviewed and coded by a team of staff members trained to check for quality and consistency.


Referral form, Housing Assistance Questionnaire, and On-going Services Questionnaire

Agency frontline staff fill out the referral form, housing assistance questionnaire, and ongoing services questionnaire. For the referral form, the referral provider or PCWA FUP program manager notified frontline workers, through emails or trainings, to use the referral form when referring a family. Families are not accepted for referral to FUP without a completed referral form. Urban Institute staff identify when families reach the relevant milestone that triggers completion of the housing assistance questionnaire or ongoing services questionnaire. Agency program managers then either email the form to the frontline worker who works with the family or provide that worker’s email address to Urban Institute staff, who email the form directly. Depending on the site’s preference, the referral form, housing assistance questionnaire, and ongoing services questionnaire are administered on paper, as a PDF, or online. During our first site visit, the evaluation team trained staff on program data collection. Urban uses the dashboard to monitor the completion of the questionnaires. Online and PDF referral forms and questionnaires include logical restrictions on the data to guard against error (e.g., ensuring dates are entered in date fields). Referral forms are reviewed for completeness by PCWA FUP program managers prior to randomization. Both referral forms and questionnaires are reviewed for quality by Urban Institute staff.
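The logical restrictions on date fields can be illustrated with a short validation routine. The field names and the accepted date format here are hypothetical assumptions; the actual online forms may enforce different rules.

```python
# Hypothetical sketch of form-level date restrictions: each date field must
# parse and must fall within a plausible window.
from datetime import date, datetime

def validate_referral_dates(fields):
    """Return a dict mapping field name -> error message for bad entries."""
    errors = {}
    for name, raw in fields.items():
        try:
            value = datetime.strptime(raw, "%Y-%m-%d").date()
        except ValueError:
            errors[name] = "not a valid date (expected YYYY-MM-DD)"
            continue
        if value > date.today():
            errors[name] = "date is in the future"
        elif value.year < 1900:
            errors[name] = "date is implausibly early"
    return errors

print(validate_referral_dates({
    "referral_date": "2021-03-15",   # valid
    "birth_date": "15/03/2021",      # wrong format, flagged
}))
# → {'birth_date': 'not a valid date (expected YYYY-MM-DD)'}
```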


Randomization tool and dashboard

The PCWA FUP program manager completes the randomization tool, the online tool used to randomize families to the treatment or control group. The PCWA FUP program manager and the PHA FUP program manager complete the dashboard, an Excel workbook used to track how families are moving through the process from referral to obtaining housing with the FUP voucher. During our first site visit, the evaluation team set up the randomization process and trained staff on the randomization tool and the dashboard. Urban Institute staff check that a referral form is uploaded for every randomized family. In addition, the randomization tool is used to ensure that the appropriate families are listed on the dashboard, and the dashboard is used to ensure that control group families are not referred to the treatment group. Finally, administrative data will be used to verify information entered into the randomization tool and dashboard.
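A randomization tool of this kind can be sketched as follows. The 50/50 allocation, the fixed seed, and the function name are illustrative assumptions; the study’s actual tool and allocation ratio are not described here.

```python
# Minimal sketch of deterministic random assignment, assuming a simple
# 50/50 allocation. Hashing the family ID with a fixed seed makes the
# assignment reproducible and auditable: re-running the tool for the same
# family always yields the same arm.
import hashlib

def assign(family_id, seed=20210801):
    """Assign a family to 'treatment' or 'control' deterministically."""
    digest = hashlib.sha256(f"{seed}:{family_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# The same family always receives the same arm.
assert assign("FAM-001") == assign("FAM-001")
print(assign("FAM-001"), assign("FAM-002"))
```

Determinism is what lets an auditor confirm, after the fact, that control group families listed on the dashboard were never re-referred into the treatment arm.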



B5. Response Rates and Potential Nonresponse Bias

Response Rates

Impact Study:

For the child welfare agency administrative data, we expect a 100 percent response rate. As noted in SSA section A4, a majority of the data the project team intends to collect are already tracked by states, as they are required for submission to the federal government through the Adoption and Foster Care Analysis and Reporting System (AFCARS).1


We also expect a 100 percent response rate for administrative data from the public housing authorities. As noted in SSA section A4, HUD requires PHAs to collect a majority of the information the project team intends to request. The project team expects to have child welfare administrative data for all families in the study and PHA data for all families who submit a voucher application. Past impact studies of child welfare involved families using administrative data (Pergamit et al. 2019; Pergamit et al. 2017) have achieved 99 percent response rates on impact study data collection.


Implementation Study:

The interviews and focus groups are not designed to produce statistically generalizable findings, and participation is wholly at the respondent’s discretion. Response rates will not be calculated or reported, though ACF anticipates a high response. The project team expects that program administrators and other staff will be interested in sharing their insights with ACF. Additionally, to make participating as easy as possible, the project team will conduct interviews on site or virtually and will work collaboratively with respondents to schedule them at the times most convenient for program staff. Past evaluations conducting implementation studies on housing programs for child welfare involved families (Cunningham et al. 2015; Cunningham et al. 2014) have had 100 percent response rates on implementation study data collection with staff. As discussed in Part A (A9), a past study with a similar population (Holcomb et al. 2015) achieved an average 75 percent response rate for in-depth parent interviews. Because this study also includes a token of appreciation, we assume a similar response rate.


Nonresponse

Impact study

As participants will not be randomly sampled and findings are not intended to be representative, nonresponse bias will not be calculated. Respondent demographics will be documented and reported in written materials associated with the data collection.



Implementation study

As staff will not be randomly sampled and findings are not intended to be representative, nonresponse bias will not be calculated. Any substantial nonresponse from staff will be documented and reported as a study limitation.


Families will be randomly sampled from among those who obtained housing through a FUP voucher and consented to be contacted. Since the data collected will be qualitative, nonresponse bias will not be calculated. Respondent demographics will be documented and compared with those of nonrespondents.



B6. Production of Estimates and Projections

The impact study will produce internally valid estimates of the program’s impact, which will be released to the public. The estimates will not be representative of the population of all families with inadequate housing and, as such, will not be used to make policy decisions. Publications will clearly state that findings are not generalizable beyond the specific study population.


Impact estimation methods: The project team will conduct an Intent-to-Treat (ITT) analysis to estimate the impact of program participation on outcomes. The ITT estimate is measured as the average outcome for all those randomized to the treatment group less the average outcome for all those randomized to the control group. The analysis will control for pre-randomization covariates using a regression framework. For continuous outcomes, the team will use an ordinary least squares (OLS) regression model; for binary outcomes, the team will use a logit or probit model. The exact covariates will be finalized after reviewing the data for quality and completeness. In addition, the sample will be evaluated for equivalence between the treatment and control groups on observable variables collected before and shortly after randomization. Variables that show significant differences between the two groups at p<.05 will be included as covariates in the regressions. The project team anticipates conducting exploratory subgroup analyses of program impacts for substantively important subpopulations, such as by family type and site, as discussed in B7 below. Depending on the take-up and crossover rates for the evaluation, the team may also produce a Treatment-on-the-Treated (TOT) estimate using an instrumental variable (IV) approach (Angrist, Imbens, and Rubin 1996). The IV estimate is a “per-person served” estimate among those who comply with their random assignment, which accounts for take-up and crossovers.
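The logic of the ITT difference in means and the Wald-style IV (TOT) estimate can be illustrated on toy data. The regression covariates described above are omitted for brevity, so this is a sketch of the estimators rather than the project’s actual analysis.

```python
# Illustrative ITT and Wald/IV (TOT) estimates on toy data (no covariates).

def itt_and_tot(records):
    """records: list of (assigned_to_treatment, took_up, outcome) tuples."""
    treat = [r for r in records if r[0]]
    control = [r for r in records if not r[0]]
    mean = lambda xs: sum(xs) / len(xs)

    # ITT: average outcome difference by random assignment.
    itt = mean([r[2] for r in treat]) - mean([r[2] for r in control])

    # Wald IV estimate: ITT scaled by the difference in take-up rates
    # between the assigned arms (take-up and crossover adjustment).
    takeup_diff = mean([r[1] for r in treat]) - mean([r[1] for r in control])
    tot = itt / takeup_diff
    return itt, tot

# Toy data: 4 treatment families (3 take up), 4 control families (none do).
data = [
    (1, 1, 1.0), (1, 1, 1.0), (1, 1, 0.0), (1, 0, 0.0),
    (0, 0, 0.0), (0, 0, 1.0), (0, 0, 0.0), (0, 0, 0.0),
]
itt, tot = itt_and_tot(data)
print(round(itt, 3), round(tot, 3))  # → 0.25 0.333
```

With 75 percent take-up and no crossover, the per-person-served (TOT) effect is the ITT effect scaled up by 1/0.75.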


Data archiving: Quantitative datasets from the impact study will be archived at the National Data Archive on Child Abuse and Neglect (NDACAN) at Cornell University. Files will be stored as Restricted Access Files. Files will be deidentified, including removal or masking of personally identifiable information (PII) as well as both direct and indirect identifiers. Documentation such as a detailed codebook, user manual, and data collection instruments will also be submitted to NDACAN to increase accuracy of any secondary analysis performed by individuals who were not part of the project team.


Implementation Study:

The data will not be used to generate population estimates, either for internal use or dissemination. The information gathered from the implementation interviews and focus groups with administrators and staff will be combined and quantified based on patterns across sites and will also be archived at NDACAN.



B7. Data Handling and Analysis

Data Handling

Impact study

For the impact study, the administrative data will be checked for detectable errors, such as birthdates after the data pull or before 1900. To the extent possible, we will also look for inconsistencies across data sets to correct for any errors.


Implementation study

The project team will fully transcribe audio recordings of interviews and focus groups. In cases when participants did not consent to being recorded, the project team will clean the typed notes taken during the interview or focus group.


As discussed below, the interview and focus group data will be qualitatively coded. Once the coding scheme has been established, the project team will ensure inter-rater reliability by having multiple coders code several transcripts, recoding until a kappa coefficient above 0.80, considered a high level of agreement between raters, is achieved (McHugh, 2012). If the initial level of agreement is below 0.80, the coders will meet to discuss the definitions of each code before returning to recode the transcripts.
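The kappa check described above can be sketched in a few lines. This is an illustrative computation only, not project code; the two coders, the 10 transcript segments, and the theme labels are invented for the example.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same set of segments."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    # Observed agreement: share of segments both raters coded identically.
    p_o = np.mean(a == b)
    # Expected chance agreement, from each rater's marginal label shares.
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders tag 10 transcript segments with themes 0/1/2,
# disagreeing on one segment.
coder1 = [0, 1, 1, 2, 0, 2, 1, 0, 2, 1]
coder2 = [0, 1, 1, 2, 0, 2, 1, 0, 2, 2]
kappa = cohens_kappa(coder1, coder2)  # ≈ 0.85, above the 0.80 threshold
```

With nine of ten segments matching, kappa is about 0.85, so under the rule described above these coders would not need to recode.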


Data Analysis

Impact study

We will conduct ITT and TOT analyses of the outcomes. The ITT estimate is defined as the difference between the average outcomes for those randomized to FUP (the treatment group) and those randomized to the control group, adjusting for pre-randomization covariates. All eligible families randomized to the treatment group will be counted in the treatment group, regardless of whether they engage with FUP. All eligible families randomized to the control group will be counted in the control group, even if they are inadvertently enrolled in FUP.



One key issue when estimating the effects of FUP on child welfare involvement is the level of analysis. The program provides vouchers and services to families; however, the outcomes are at the child level (e.g., removed, reunified). We intend to estimate the impacts at both the family level and the child level as a robustness check but to primarily report outcomes at the child level, because it is more intuitive to model the outcomes at this level.



The ITT estimate is measured as the average child outcomes for the treatment population less the average child outcomes for the control population. Specifically, the ITT estimate would be measured using the regression equation below:

Y_i = β0 + β1·T_i + δ·X_i + ε_i

where Y_i is the outcome for each child, i, that was randomly assigned; T_i is an indicator equal to 1 for children in families assigned to the treatment group and 0 for children in families assigned to the control group; β1 is the parameter of the ITT effect on the outcome (Y_i); X_i is a vector of pre-randomization covariates; δ is the vector of coefficients on the covariates, X_i; and ε_i is the regression error term. For continuous outcomes, we will estimate an OLS regression model, and for binary outcomes, we will estimate a logit or probit model. The inclusion of the pre-randomization covariates is intended to improve the precision of the estimates. The exact covariates will be finalized after reviewing the data for quality and completeness. This regression will be estimated with standard errors clustered at the family level.
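As an illustrative sketch of this estimation step (simulated data and hypothetical variable names, not the study's actual data or code), an ITT regression with family-level assignment and one pre-randomization covariate could be run as follows; a production analysis would also compute family-clustered standard errors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample: 2,000 children in 1,000 families; both children in a
# family share the random assignment T and an unobserved family shock.
n_fam = 1000
T_fam = rng.integers(0, 2, n_fam)            # family-level random assignment
fam_shock = rng.normal(0, 1, n_fam)
fam_id = np.repeat(np.arange(n_fam), 2)      # two children per family
T = T_fam[fam_id]
X = rng.normal(0, 1, 2 * n_fam)              # a pre-randomization covariate
true_itt = 0.5                               # invented "true" effect
Y = 1.0 + true_itt * T + 0.3 * X + fam_shock[fam_id] + rng.normal(0, 1, 2 * n_fam)

# OLS of Y on [1, T, X]; the coefficient on T is the ITT estimate.
design = np.column_stack([np.ones_like(Y), T, X])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
itt_hat = beta[1]                            # should land near 0.5
```

The coefficient on T recovers the simulated effect up to sampling noise, which is the sense in which the ITT estimate is the covariate-adjusted treatment-control difference.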



The sample will be evaluated for equivalence between the treatment and control groups on observable pre-randomization variables. Although random assignment is intended to create two equivalent groups, small samples can result in some differences between the groups by chance. Variables that show statistically significant differences between the two groups (p ≤ .05) will be included as covariates in the regressions.
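A minimal sketch of this balance check, on simulated data with invented covariates (one balanced, one imbalanced by construction): it computes a Welch two-sample t-statistic and flags covariates with |t| > 1.96, which approximates the p ≤ .05 rule for large samples.

```python
import numpy as np

def balance_flag(x_treat, x_ctrl, crit=1.96):
    """Welch t-statistic for a treatment-control difference in means;
    |t| > 1.96 roughly corresponds to p < .05 in large samples."""
    x_t = np.asarray(x_treat, float)
    x_c = np.asarray(x_ctrl, float)
    se = np.sqrt(x_t.var(ddof=1) / len(x_t) + x_c.var(ddof=1) / len(x_c))
    t = (x_t.mean() - x_c.mean()) / se
    return t, abs(t) > crit

rng = np.random.default_rng(0)
age_t = rng.normal(30, 5, 400)       # balanced covariate (same distribution)
age_c = rng.normal(30, 5, 400)
kids_t = rng.normal(2.6, 1, 400)     # imbalanced covariate by construction
kids_c = rng.normal(2.0, 1, 400)

t_age, flag_age = balance_flag(age_t, age_c)
t_kids, flag_kids = balance_flag(kids_t, kids_c)   # flagged for inclusion
```

Only the covariate built with different group means gets flagged; under the rule described above it would then enter the impact regressions as a control.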



As discussed above, not all families referred for FUP vouchers will obtain a lease. These families are in the treatment group but do not receive the treatment. Many program and practice stakeholders will want to know whether the program helped those who received vouchers. To estimate the effect of FUP for families who actually sign a lease, we will also estimate the TOT using an instrumental variables (IV) estimation procedure (Angrist, Imbens, & Rubin, 1996). The IV estimate is a per-child-served estimate among those who comply with their referral assignment; it accounts for the fact that some families referred to FUP may not sign a lease and that some families in the control group may end up leasing up through FUP. All study participants can be divided into three types: (1) those who will always sign a lease with FUP, regardless of whether they are referred to it; (2) those who will never sign a lease with FUP, even if they are referred to it; and (3) those who comply with whatever referral assignment they are given, whether it is to sign a lease with FUP or to remain in the control group. The IV estimate represents the effect of signing a lease with FUP on study outcomes among this third group, the compliers. In the special circumstance where decisions to comply are independent of the study outcomes, the IV estimate also represents the average treatment effect.



The IV estimate scales up the ITT estimate by the difference between the treatment and control groups' fractions enrolled in FUP. Conceptually, we will estimate the effect of referring a family to FUP on leasing up with FUP in the same manner as calculating the ITT above, except that the dependent variable in the model will be enrollment:

P_i = α0 + α1·T_i + γ·X_i + ε_i

where P_i is 1 if the child, i, enrolled in the program, regardless of whether they were in the treatment group or the control group. Enrollment will be defined as the participant having an initial housing lease-up date through FUP. T_i is an indicator equal to 1 for children in families assigned to the treatment group and 0 for children in families assigned to the control group. α1 is the parameter of the effect of random assignment to treatment on actual enrollment (P_i). X_i is a vector of pre-randomization covariates, and γ is the vector of coefficients on the covariates, X_i. ε_i is the regression error term. The IV estimate is the ratio of the two estimates:

TOT estimate = β1 / α1

where β1 is the ITT effect on the outcome and α1 is the effect of assignment on enrollment.



In practice, the two equations are estimated simultaneously using a two-stage least squares estimation procedure. In the first stage, the dependent variable (enrolling in the program) is regressed on the exogenous covariates plus the instrument (randomization into treatment). In the second stage, fitted values from the first-stage regression are plugged directly into the structural equation in place of the endogenous regressor (enrolling in the program). We will include the same covariates as used in the ITT regression.
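The ratio logic can be illustrated with simulated data (without covariates, the 2SLS estimate reduces to this simple ratio, often called the Wald estimator). The take-up rate, crossover rate, and "true" effect below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
T = rng.integers(0, 2, n)                    # random assignment (the instrument)

# Hypothetical compliance pattern: 70% of referred families lease up through
# FUP; 5% of control families cross over and lease up anyway.
P = np.where(T == 1, rng.random(n) < 0.70, rng.random(n) < 0.05).astype(int)

true_tot = 2.0                               # invented effect of enrollment
Y = 1.0 + true_tot * P + rng.normal(0, 1, n) # outcome moved only by enrollment

itt = Y[T == 1].mean() - Y[T == 0].mean()          # reduced-form effect of assignment
first_stage = P[T == 1].mean() - P[T == 0].mean()  # effect of assignment on lease-up, ~0.65
tot = itt / first_stage                            # IV (Wald) estimate, ~2.0
```

The ITT is diluted by non-compliance (roughly 0.65 × 2.0 = 1.3 here); dividing by the first-stage difference in lease-up rates recovers the effect among compliers, which is exactly the "scaling up" described above.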



In addition to our main analysis, we plan to conduct subgroup analyses by family type and site. Specifically, we would like to see how program effects vary between preservation families and reunification families. There are many reasons that FUP may affect these families differently. Preservation and reunification families may differ substantially; reunification cases are likely to be more severe, since a caseworker decided to remove the child. In addition, the mechanisms differ: for a preservation family, remaining intact means preventing a removal, which rests largely on caseworker judgment, while for a reunification family, becoming intact again through a child returning home requires a court decision. We will run regressions separately for preservation and reunification families using the same methodologies described above.



There are many reasons that FUP may affect families differently across sites. One is that program implementation could vary widely; for instance, one site could provide many intensive support services, whereas another provides few. Child welfare practices may differ across sites (e.g., when a case is opened, when a child is removed), leading to differences in the child welfare population from which families are identified. Subjective interpretations of inadequate housing could systematically differ by site, leading to differences in the families deemed eligible. On the housing side, PHAs may differ in their voucher eligibility criteria and application processes. Furthermore, one site could have a very tight housing market, leading to longer waits and/or lower lease-up rates than other sites. We will run regressions separately for each site using the same methodologies described above to explore potential differential impacts across sites. The implementation study will document site program differences and help explain why we might see different impacts across sites.



Implementation study

The implementation study will use a combination of qualitative and quantitative data analysis. Qualitative data analysis will combine information from the various data sources. The semi-structured interview guides and focus group protocols we have developed to guide qualitative data collection include discussion topics and questions that reflect key implementation study research questions, as will the tools used for extracting information from program documents. The evaluation team will take detailed notes during qualitative data collection. We will develop a coding scheme to organize the data into themes or topic areas. Notes will be coded (tagged based on the theme or topic for which they are relevant) and analyzed using a qualitative analysis software package, such as NVivo.

Although analysis of data for the implementation study will primarily draw on qualitative methods, the evaluation team will also produce descriptive statistics based on program data and measures of fidelity. Specifically, the evaluation team will look at how families progress through the leasing process using the quantitative data collected through the dashboard. This analysis will present the share of families randomized that complete a housing application, receive a voucher, are deemed eligible, sign a lease, lose their voucher, and exit housing. In addition, it will include descriptive statistics on the reasons for voucher denial, voucher loss, and housing exit. If there is sufficient variation, we will run regression analyses to determine what factors were correlated with obtaining a voucher, leasing up, and exiting housing. Additional descriptive analyses will look at the service receipt reported in the housing assistance questionnaire and ongoing services questionnaire.
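A minimal sketch of the leasing-funnel tabulation described above; the step names mirror the text, but all counts are invented for illustration.

```python
# Hypothetical dashboard counts for one site (illustrative numbers only).
randomized = 200
funnel = {
    "completed housing application": 180,
    "received voucher": 165,
    "deemed eligible": 150,
    "signed lease": 120,
    "lost voucher": 20,
    "exited housing": 15,
}

# Express each step as a share of all families randomized.
shares = {step: count / randomized for step, count in funnel.items()}
lease_up_rate = shares["signed lease"]   # 120 / 200 = 0.60

for step, share in shares.items():
    print(f"{step}: {share:.0%}")
```

Comparing these shares across sites, alongside the reasons recorded for voucher denial, voucher loss, and housing exit, would show where families drop out of the leasing process.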


Data Use

The project team will use the collected data to inform a technical report, a practitioner-focused brief, and a journal article.

  • The technical report will describe the study design and findings. The report will contain information about both the impact study and the implementation study. The report will include a detailed study methodology that will help the public understand and properly interpret the information derived from the data collection. The methodology section will include, but not be limited to, interview and focus group discussion topics, qualitative data analysis technique, and administrative data analysis techniques.

  • The practitioner-focused brief will describe evaluation findings of relevance to practitioners in a manner accessible to a non-technical audience.

  • Finally, the project team will also publish the findings in a peer-reviewed journal, so they can be reviewed by evidence clearinghouses.

As discussed in section B1, the study’s limitations will be included in all written products and public materials associated with the study.



B8. Contact Person(s)

The information for this study is being collected by the Urban Institute on behalf of ACF. Principal Investigator Michael Pergamit ([email protected]) and FUP evaluation lead Devlin Hanson ([email protected]) led the development of the study design plan and data collection protocols and will oversee collection and analysis of data gathered through on-site interviews and telephone interviews.


Attachments

Instrument 1 -- Guide for Implementation Study for PCWA Management

Instrument 2 -- Guide for Implementation Study for PHA Management

Instrument 3 -- Guide for Implementation Study for CoC Management

Instrument 4 -- Guide for Implementation Study for Referral Provider Administrators

Instrument 5 -- Guide for Implementation Study with PCWA FUP Management

Instrument 6 -- Guide for Implementation Study for PHA FUP Management (Second)

Instrument 7 -- Guide for Implementation Study Focus Groups for PHA Frontline Workers

Instrument 8 -- Guide for Implementation Study for Parents

Instrument 9 - Guide for Implementation Study Focus Groups with Frontline Workers

Instrument 10 - Guide for Implementation Study for PCWA FUP Management (Third)

Instrument 11 - Guide for Implementation Study for Service Provider Management

Instrument 12 – Housing Status Form

Instrument 13 – Referral Form

Instrument 14 – Randomization Tool

Instrument 15 - Housing Assistance Questionnaire

Instrument 16 – Ongoing Services Questionnaire

Instrument 17 – Dashboard

Instrument 18 – Administrative Data

Appendix A – Informed Consent Script

Appendix B – Informed Consent Form

Appendix C – Informed Consent for Parents

Appendix D – Informed Consent for Staff

Appendix E – Outreach Call Script for Parents

Appendix F – Guide for Recruitment with PHA and PCWA Administrators

Appendix G – Guide to Develop an Evaluation Plan for PCWA FUP Management

Appendix H – Guide to Develop an Evaluation Plan for PHA FUP Management

Appendix I – IRB Approval Letter



References

Angrist, Joshua, Guido W. Imbens and Donald Rubin. “Identification of Causal Effects Using Instrumental Variables.” Journal of the American Statistical Association. 91.434 (1996): 444-455.

Cunningham, Mary, Michael Pergamit, Maeve Gearing, Simone Zhang, Brent Howell. 2014. Supportive Housing for High-Need Families in the Child Welfare System. Urban Institute. https://www.urban.org/research/publication/supportive-housing-high-need-families-child-welfare-system

Cunningham, Mary, Michael Pergamit, Abigail Baum, Jessica Luna. 2015. Helping Families Involved in the Child Welfare System Achieve Housing Stability: Implementation of the Family Unification Program in Eight Sites. Urban Institute. https://www.urban.org/sites/default/files/publication/41621/2000105-Helping-Families-Involved-in-the-Child-Welfare-System-Achieve-Housing-Stability.pdf

Cunningham, Mary, Michael Pergamit, Sarah Gillespie, Devlin Hanson, Shiva Kooragayala. 2016. Denver Supportive Housing Social Impact Bond Initiative: Evaluation and Research Design. Urban Institute. https://www.urban.org/sites/default/files/publication/79041/2000690-Denver-Supportive-Housing-Social-Impact-Bond-Initiative-Evaluation-and-Research-Design.pdf

McHugh, Mary L. “Interrater reliability: the kappa statistic.” Biochemia Medica 22.3 (2012): 276-282.

Pergamit, Michael, Mary Cunningham, Julia Gelatt, and Devlin Hanson. 2016. Analysis Plan for Interim Impact Study: Supportive Housing for Child Welfare Families Research Partnership. Urban Institute. https://www.urban.org/sites/default/files/publication/80986/2000802-Analysis-Plan-for-Interim-Impact-Study-Supportive-Housing-for-Child-Welfare-Families-Research-Partnership.pdf


Pergamit, Michael, Mary Cunningham, and Devlin Hanson. "The impact of family unification housing vouchers on child welfare outcomes." American Journal of Community Psychology 60.1-2 (2017): 103-113.

1 OMB #0970-0422


