
Supporting Statement for

Domestic Violence Housing First Demonstration Evaluation

Part A (OMB Control #0990-0458)


July 2019


The Office of the Assistant Secretary for Planning and Evaluation (ASPE) within the U.S. Department of Health and Human Services is submitting a revised information collection request for the study entitled “Domestic Violence Housing First (DVHF) Demonstration Evaluation” (OMB Control #0990-0458). The proposed revision would add a fourth follow-up data collection, administered 24 months after study enrollment (Time 1), to examine longer-term impacts of the Domestic Violence Housing First Demonstration program. The follow-up survey is identical to the one used at the 6-, 12-, and 18-month follow-ups. The contractor for the evaluation study is the Washington State Coalition Against Domestic Violence (WSCADV) and its subcontractor, Michigan State University.


A. Justification


  1. Circumstances Making the Collection of Information Necessary

Domestic violence (DV) is a leading cause of homelessness for women and children, and, in turn, the lack of stable housing increases women’s risk of victimization. Unfortunately, little evidence exists about effective strategies to assist DV survivors as they work to avoid homelessness while freeing themselves and their children from the abuse of partners and ex-partners. The current DVHF evaluation study will build on prior empirical and practice evidence suggesting that mobile advocacy, in which an advocate works with clients when and where they need it to secure housing and other services, has multiple positive impacts on survivors and their children. Principal Investigator Sullivan’s prior experimental research (funded by the National Institute of Mental Health, 1989-1997) first demonstrated the positive impact of this type of intervention on survivors’ quality of life, social support, and ability to access community resources. Survivors who received the intervention were also more than twice as likely to remain free of further physical abuse during the two-year post-intervention follow-up. Positive effects have been found for the children as well, with their self-competence increasing and their internalizing problems decreasing. That longitudinal study, however, is now 20 years old and did not focus on housing and homelessness to the extent that the current study will.

Further evidence supporting the importance of mobile advocacy, flexible funding, and housing supports for domestic violence survivors comes from the Domestic Violence Housing First (DVHF) pilot project. The pilot was the result of an investment by the Bill & Melinda Gates Foundation, which funded mobile advocacy and flexible financial assistance for the participating agencies. The Washington State Coalition Against Domestic Violence (WSCADV) oversaw this five-year project, through which advocates provided flexible, survivor-driven advocacy supports to domestic violence survivors at 13 diverse programs across the state of Washington. The majority of families, in both rural and urban communities, reported successfully accessing and retaining housing at six, twelve, and eighteen months after program entry. Unfortunately, the project did not have permission to interview families over time and relied on agency service providers to collect the data. The pilot also did not systematically examine the types of services survivors received.

In 2015, the Bill & Melinda Gates Foundation funded WSCADV to implement DVHF in two regions within Washington state (one urban and one rural). This funding goes through 2019 and is being used to support participating organizations to offer this model. The Bill & Melinda Gates Foundation has been working with WSCADV to identify federal funds to rigorously evaluate this effort. This large-scale demonstration project provides an unparalleled opportunity to significantly increase the level of evidence documenting the complex interrelationships among domestic violence advocacy, housing stability, and improved well-being for survivors and their children.


Legal or Administrative Requirements that Necessitate the Collection


The funding for this project comes from the U.S. Department of Justice, with ASPE managing the contract, as specified through an interagency agreement. Authority to fund the DVHF evaluation comes from 42 U.S.C. 10603(c)(1)(A), 42 U.S.C. 10603(c)(4), and 28 U.S.C. 530C.

    1. 42 U.S.C. 10603(c)(1)(A) authorizes the Office for Victims of Crime (OVC) Director to make grants for demonstration projects, program evaluation, compliance efforts, and training and technical assistance services.

    2. 42 U.S.C. 10603(c)(4) authorizes the OVC Director to reimburse other instrumentalities of the Federal Government and to contract for the performance of functions authorized under this subsection.

    3. 28 U.S.C. 530C specifically authorizes activities of the Department of Justice to be carried out through any means, including, but not limited to, use of details of personnel and use of reimbursable agreements.


To accomplish the objectives of the legislative authority, ASPE seeks OMB approval of the revised DVHF Evaluation data collection activities to add a 24-month follow-up in order to examine the longer-term impacts of the DVHF intervention.

  2. Purpose and Use of Information Collection


The requested revision will add a fourth follow-up data collection at 24 months post-baseline. The justification for this request is based on an earlier longitudinal evaluation of an advocacy intervention for domestic violence survivors, in which researchers found that noteworthy changes were still occurring for survivors between the 18- and 24-month time points.

Evidence provided through this study will have important policy and practice implications for both the domestic violence and housing arenas. Numerous federal agencies (including the Office for Victims of Crime, the Office on Violence Against Women, the Family Violence Prevention and Services Administration, and the U.S. Department of Housing & Urban Development) are seeking evidence that can guide their policy and funding decisions. They recognize the relationship between domestic violence and homelessness, and are seeking answers regarding which community practices lead to safety and housing stability for DV survivors. There has been considerable national interest in the pilot project preceding the current demonstration project. The anecdotal data from that pilot were promising. More rigorous evidence can assist these federal agencies in making evidence-informed policy and funding decisions.

The current demonstration project, funded by the Bill & Melinda Gates Foundation, began in 2015 and ends in late 2019. If data are not collected in a timely manner, we will lose this unprecedented opportunity to examine whether and how Domestic Violence Housing First impacts the safety, housing stability, and well-being of survivors and their children.

The study includes four domestic violence agencies funded through the Gates Foundation. All eligible clients receiving services at any of the four participating agencies are invited to participate in the study. Eligibility criteria include (1) being a recent survivor of intimate partner violence, (2) being homeless or at immediate risk of becoming homeless, (3) having entered services within the prior three weeks, and (4) speaking English, Spanish, another language into which the interviews have been translated, or a language for which an interviewer is available. Careful procedures are being followed, under the guidance of the Project Coordinators, to ensure that all eligible clients are offered the opportunity to participate in the study. The expected sample is 320 participants (an anticipated 80 from each of the four agencies) over the course of 15 months of participant recruitment.



  3. Use of Improved Information Technology and Burden Reduction


Technology is being used in a variety of ways within this evaluation project to reduce burden and maximize data quality. Data about service provision (gathered from the service providers) will be collected through online surveys. All of the data from the domestic violence survivors will be collected through face-to-face or telephone interviews, using computer-assisted technology. Only data that are necessary to answer the evaluation questions are being collected. Data about programs (e.g., how much money they provide through flexible funds, length of service provision to clients) are already being collected by programs so this is not an added burden.


  4. Efforts to Identify Duplication and Use of Similar Information


The principal investigator, Dr. Cris Sullivan, a faculty member at Michigan State University, has been conducting research and evaluation in this area for over 25 years and keeps abreast of the state of the evidence in the field. Dr. Sullivan has conducted literature and database searches as part of a systematic review conducted for the National Resource Center on Domestic Violence, and she has consulted with individuals within the Office for Victims of Crime, the Office on Violence Against Women, the Family Violence Prevention and Services Administration, and the U.S. Department of Housing & Urban Development to ensure these data are not being collected elsewhere. In addition, the evaluation contract includes periodic convening of a federal technical expert panel to share information and obtain input on the study. The first convening of federal experts occurred on April 25, 2017 and included experts from multiple agencies within HHS (e.g., ASPE, ACF, NIH, Office on Women’s Health, SAMHSA, and HRSA), the Department of Justice, the Department of Housing and Urban Development, the U.S. Interagency Council on Homelessness, and contractor staff. Dr. Sullivan also attends national professional meetings as a means of learning what data are being collected that might pertain to this project. The proposed evaluation is not duplicative of any other efforts and will provide significant new information to the field.


  5. Impact on Small Businesses or Other Small Entities


The four domestic violence service organizations that have agreed to participate in this evaluation project might be considered “small entities.” We have minimized their burden by limiting their role in data collection. The evaluation contractor is hiring data collectors who will be responsible for collecting data from domestic violence survivors. We are gathering minimal information from the community organizations and are minimizing the amount of time their staff members spend being surveyed (and such participation is voluntary).


  6. Consequences of Collecting the Information Less Frequently


The domestic violence survivors participating in this evaluation will be interviewed every six months through 24 months post-baseline (an extension beyond the originally approved project, which interviewed survivors through 19 months post-baseline). The six-month time frame was chosen to be long enough for change to occur but short enough that participants can recall events accurately. If data were collected less frequently, we would lose valuable information about event timing and causality. The six-month time frame has been successfully used to document change over time in the following studies that included low-income domestic violence survivors:


Anderson, D.K., Saunders, D.G., Yoshihama, M., Bybee, D.I., & Sullivan, C.M. (2003). Long-term trends in depression among women separated from abusive partners. Violence Against Women, 9(7), 807-838.

Beeble, M.L., Bybee, D., Sullivan, C.M., & Adams, A. (2009). Main, mediating, and moderating effects of social support on the well-being of survivors of intimate partner violence across two years. Journal of Consulting and Clinical Psychology, 77, 718-729.

Bybee, D.I., & Sullivan, C.M. (2002). The process through which a strengths-based intervention resulted in positive change for battered women over time. American Journal of Community Psychology, 30(1), 103-132.

Bybee, D.I., & Sullivan, C.M. (2005). Predicting re-victimization of battered women three years after exiting a shelter program. American Journal of Community Psychology, 36(1/2), 85-96.

Clough, A., Wagman, J., Rollins, C., Barnes, J., Connor-Smith, J., Holditch-Niolon, P., ... & Glass, N. (2011). The SHARE project: Maximizing participant retention in a longitudinal study with victims of intimate partner violence. Field Methods, 23, 86-101.

Greeson, M., Kennedy, A.C., Bybee, D.I., Beeble, M., Adams, A.E., & Sullivan, C.M. (2014). Beyond deficits: Intimate partner violence, maternal parenting, and child behavior over time. American Journal of Community Psychology, 54, 46-58.

Kennedy, A.C., Bybee, D., Sullivan, C.M., & Greeson, M. (2009). The effects of community and family violence exposure on anxiety trajectories during middle childhood: The role of family social support as a moderator. Journal of Clinical Child and Adolescent Psychology, 38, 365-379.

Sullivan, C.M., & Bybee, D.I. (1999). Reducing violence using community-based advocacy for women with abusive partners. Journal of Consulting and Clinical Psychology, 67(1), 43-53.

Sullivan, C.M., Bybee, D.I., & Allen, N.E. (2002). Findings from a community-based program for battered women and their children. Journal of Interpersonal Violence, 17(9), 915-936.


  7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5


This request fully complies with the guidelines of 5 CFR 1320.5.


  8. Comments in Response to the Federal Register Notice/Outside Consultation


A 60-day notice was published in the Federal Register on March 5, 2019 (Vol. 84, pp. 7909-7910; see attachment 60-Day FRN Template- DVHF Revision 1.2019). There were no public comments.


We consulted with numerous persons outside the agency to obtain their views on the availability of data, frequency of collection, clarity of instructions and the data elements to be reported.


From 2015 through 2018, we consulted with representatives from the four agencies that agreed to participate in this evaluation. They offered their views on all aspects of the data collection, including how to minimize burden. No major problems arose, and all parties agreed on all aspects of data collection. Specifically, we consulted with:


Amy Flynn, Executive Director

Yakima YWCA

818 W Yakima Ave

Yakima, WA 98902

[email protected]

(509) 248-7796

 

Leticia Garcia, Executive Director

LVCSS

600 North Ave

Sunnyside, WA 98944

[email protected]

(509) 837-6689

 

Susan Segall, Executive Director

New Beginnings

8760 Greenwood Ave N

Seattle, WA 98103

[email protected]

(206) 926-3035

 

Maria Williams, Director of Programs

LifeWire

1401 140th Pl NE

Bellevue, WA 98007

[email protected]

(425) 562-8840


The evaluation team also discussed their plans with key individuals from various federal agencies interested in the nexus of domestic violence and homelessness. These consultations focused on program design, data collection strategies, dissemination efforts, and whether any other data collection efforts were underway or planned elsewhere. Specifically, we consulted with:


Carrie Bettinger-López, Esq.

White House Advisor on Violence Against Women

Office of the Vice President

1600 Pennsylvania Avenue NW

Washington, DC 20500

[email protected]

202-456-3268


Lisa Coffman

SNAP Specialist

U.S. Dept of Housing and Urban Development

Office of Community Planning & Development

451 7th Street SW

Washington, DC 20410

[email protected]

202-402-5908


Rosie Hidalgo

Deputy Director for Policy

Office on Violence Against Women

145 N Street NE, Suite 10W.122

Washington, DC 20530

[email protected]

202-307-6026


Marylouise Kelley, PhD

Director, Family Violence Prevention Division

Family Violence Prevention & Services Program

Administration on Children and Families

U.S. Department of Health and Human Services

1250 Maryland Ave., SW Suite 8412

Washington, DC 20024

[email protected]

202-401-5756


Ann Oliva

Deputy Asst Secretary for Special Needs

U.S. Dept of Housing and Urban Development

Office of Community Planning & Development

451 7th Street SW

Washington, DC 20410

[email protected]

202-402-4497


Susan Williams

Associate Director, Office for Victims of Crime

US Department of Justice

810 Seventh Street NW

Washington, DC 20531

[email protected]

(202) 307-5290


Kristin Weschler

Office for Victims of Crime

US Department of Justice

810 Seventh Street NW

Washington, DC 20531

[email protected]

(202) 616-5127



We also consulted with key researchers and evaluators who have prior experience and expertise in conducting community-based evaluation. Specifically, we consulted with:


Lisa Goodman, PhD

Professor, Counseling, Developmental & Educational Psychology Department

Campion Hall, Room 310

Boston College

140 Commonwealth Avenue

Chestnut Hill, MA 02467

[email protected]

617-552-1725


Taryn Lindhorst, PhD

Professor, School of Social Work

University of Washington

15th Avenue NE

Seattle, WA 98105

[email protected]

206-616-2152


Rubén Parra-Cardona, PhD

Associate Director, MSU Research Consortium on Gender-based Violence

Associate Professor, Human Development & Family Studies

3D Human Ecology Building

Michigan State University

E. Lansing, MI 48824

[email protected]

517-432-2269


Josephine Serrata, PhD

Director of Research and Evaluation

National Latin@ Network for Healthy Families and Communities

PO Box 5010

Atlanta, GA 30302

[email protected]

512-298-3719


Marybeth Shinn, PhD

Professor, Dept of Human and Organizational Development

Vanderbilt University

Peabody #329

230 Appleton Place

Nashville, TN 37203-5721

[email protected]

616-322-8735


Finally, we have consulted with national leaders with expertise in the areas of domestic violence and homelessness or housing. The following individuals are representatives from the entities comprising the Domestic Violence and Housing Technical Assistance Consortium, funded and supported by an unprecedented partnership among the U.S. Department of Health and Human Services, Department of Justice, and Department of Housing and Urban Development:


Christie Bevis, Program Manager

Collaborative Solutions, Inc.

PO Box 130159

Birmingham, AL 35213

[email protected]

205-939-0411 x 207


Peg Hacskaylo, CEO

District Alliance for Safe Housing

PO Box 91730

Washington, DC 20090

[email protected]

202-462-3274 x 110


Larisa Kofman, JD

Director, National Alliance for Safe Housing

PO Box 91730

Washington, DC 20090

[email protected]

202-462-3274 x 114


Anne Menard, CEO

National Resource Center on Domestic Violence

3605 Vartan Way, Suite 101

Harrisburg, PA 17110

[email protected]

800-537-2238


Monica McLaughlin, Deputy Director of Public Policy

National Network to End Domestic Violence

1325 Massachusetts Avenue NW, Floor 7

Washington, DC 20005

[email protected]

202-540-9985


  9. Explanation of any Payment/Gift to Respondents


The domestic violence survivors who agree to participate in this evaluation will receive a token of appreciation for their time. Participants received $50 for the initial interview and are receiving $50 for each subsequent interview, including the newly proposed 24-month follow-up. This amount is consistent with OMB’s suggested cap of $40 for a one-time, one-hour in-person interview or focus group: the interviews last about 75 minutes for most survivors, and $50 corresponds to that longer interview length. The amount is also in line with other longitudinal studies involving low-income survivors of domestic violence. Prior similar studies with domestic violence survivors include:


Clough, A., Wagman, J., Rollins, C., Barnes, J., Connor-Smith, J., Holditch-Niolon, P., ... & Glass, N. (2011). The SHARE project: Maximizing participant retention in a longitudinal study with victims of intimate partner violence. Field Methods, 23, 86-101.

Sullivan, C.M., Rumptz, M.H., Campbell, R., Eby, K.K., & Davidson, W.S. (1996). Retaining participants in longitudinal community research: A comprehensive protocol. Journal of Applied Behavioral Science, 32(3), 262-276.


There are no empirical studies that have examined the optimal level of incentives for this particular vulnerable population of domestic violence survivors. However, we examined the literature connecting incentives to research participation and retention, and believe that our amounts are in line with its findings. We identified studies providing quantitative evidence that incentives at these levels are necessary to reduce non-response bias and to improve access to difficult-to-reach respondents. Examples from the literature include:


Festinger, D.S., Marlowe, D.B., Dugosh, K.L., Croft, J.R., & Arabia, P.L. (2009). Higher magnitude cash payments improve research follow-up rates without increasing drug use or perceived coercion. Drug and Alcohol Dependence, 96(1-2), 128-135.


This randomized controlled trial examined the differential impact on retention of incentives of $70, $100, $130, and $160 among clients of an urban substance abuse outpatient treatment program. As hypothesized, higher payments and cash payments resulted in significantly higher follow-up rates and fewer tracking calls. The study population would be considered difficult to reach: all participants were substance users, 60% were African American, 58% were unemployed, and mean annual income was under $7,000.


Walter, J.K., Burke, J.F., & Davis, M.M. (2013). Research participation by low-income and racial/ethnic minority groups: How payment may change the balance. Clinical Translational Science, 6(5), 363-371.


In response to the problem of minorities being underrepresented in clinical research trials, the researchers conducted a cross-sectional study with nationally representative data to examine perceived fairness of research incentive amounts. At lower levels of payment, non-Hispanic whites were consistently overrepresented in believing payment was fair, and Hispanics and non-Hispanic blacks were typically underrepresented. At payment levels above $349, the proportions of participants perceiving payment as fair matched census distributions of racial/ethnic groups.


Booker, C. L., Harding, S., & Benzeval, M. (2011). A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health, 11(1), 1.


This review article included 28 studies published through January 2011. Eleven were randomized controlled trials of retention strategies. Incentives were associated with an increase in retention rates, which increased with the amount of the incentive.


Heinrichs, N. (2006). The effects of two different incentives on recruitment rates of families into a prevention program. Journal of Primary Prevention, 27(4), 345-365.


This experimental study is relevant to the proposed study in that it involved low-income families. Monetary incentives increased the number of low-income families interested in participating in the research.


Laurie, H., & Lynn, P. (2009). The use of respondent incentives on longitudinal surveys. Methodology of longitudinal surveys, 205-233.


This article comprehensively reviews the literature on incentives in longitudinal surveys, including the effect of incentives on recruitment, sample composition, data quality and retention.


Rodgers, W. (2011). Effects of increasing the incentive size in a longitudinal study. Journal of Official Statistics, 27, 279-299.


This study varied whether participants received $20, $30, or $50 to participate in a longitudinal study. Providing the $50 incentive resulted in the highest response rates.


Zagorsky, J.L., & Rhoton, P. (2008). The effect of promised monetary incentives on attrition in a long-term panel survey. Public Opinion Quarterly, 72, 502-513.


This study is quite relevant to the proposed study in that it involved in-person interviews with women. Incentives had a positive effect on response rates for those who had previously participated in the survey but had initially refused to participate in the current wave.


  10. Assurance of Confidentiality Provided to Respondents


The proposed study received Human Subjects approval from Michigan State University’s Institutional Review Board on August 17, 2016. The consent form for this study does not promise total and absolute confidentiality to respondents; rather, it states:

Your responses will be kept private to the extent allowed by law. Your interviews will be kept separately from any identifying information about you. The results of this study may be published or presented at professional meetings, but the identities of all research participants will remain anonymous.


A copy of the approval letter from Michigan State University’s IRB is enclosed with these materials.


We have consulted the HHS Senior Advisor for Privacy Policy and reviewed OMB’s guidance on the Privacy Act of 1974, and we do not believe that the Privacy Act applies to the proposed data collection, for two reasons. First, according to the 1975 OMB Privacy Act Guidance, which was reaffirmed in the recent issuance of Circular A-108, the Privacy Act applies only to systems of records that the agency is required to maintain; the data collection for this study is discretionary. Second, the Privacy Act is invoked only when “records” consisting of information connected to personal identifiers are retrieved using those personal identifiers; the records in this study will not be retrieved by personal identifiers. We explain both points in more detail below.


The collection is contracted and discretionary, therefore not covered by the Privacy Act.

The proposed data collection will not be carried out by the Office of Human Services Policy/ASPE, but rather by a contractor, WSCADV, and its subcontractor, MSU. OMB’s Privacy Act Implementation: Guidelines and Responsibilities (July 9, 1975) clearly describes the terms under which data collected by a contractor for the Federal Government are covered by the Privacy Act:


“Not only must the terms of the contract provide for the operation (as opposed to design) of such a system, but the operation of the system must be to accomplish an agency function. This was intended to limit the scope of the coverage to those systems actually taking the place of a Federal system which, but for the contract, would have been performed by an agency and covered by the Privacy Act.” (40 FR 28976)


The proposed data collection does not create a system to “accomplish an agency function,” and is not a system “taking the place of a Federal system which, but for the contract, would have been performed by an agency.” ASPE has discretion as to whether and how to carry out this demonstration. Thus, the proposed data collection is discretionary, not required, and the Privacy Act does not apply.


The evaluation data are not retrieved by personal identifiers.

Even though discretionary data collections carried out by a contractor are not subject to the Privacy Act regardless of whether data are retrieved by identifiers, we describe the study’s data collection process in some detail below.


How are identifiable data collected? The study will collect identifying data from three different types of study participants: agency POCs; primary service providers; and domestic violence survivors.

1. Agency POCs will be asked for basic information about the number of advocates working in the agency, advocate caseloads, whether and how much money the agency has for flexible funding, start and end dates of study participants’ receipt of services, and the agency’s current decision-making process for determining services. This is information about the agencies, not about individuals, so it is not covered by the Privacy Act.


2. Primary service providers (advocates) will be surveyed once during the first year of the study about their work history and demographics, and will then complete a survey for each survivor in their caseload who is a participant in the study.


3. The main study participants are domestic violence survivors, from whom MSU will collect the respondent’s name, address, phone number, and any other information they choose to provide that would help the research team locate them over time (e.g., social media contact). Providing any or all of this information is voluntary on the survivor’s part.


At the end of each interview, members of the research team will update this information on the Contact Forms. These forms are MS Word documents that are password-protected and saved on a secure, password-protected server managed by the Psychology Department at Michigan State University. To access the server, research team members must pass through four separate layers of password protection: first, logging on to their password-protected computers; second, logging onto the MSU virtual private network; third, logging into the server; and fourth, opening the password-protected Contact Sheet Word document.


The survey is administered in a private setting by research staff who enter the data into a secure online application. Qualtrics, an online survey system, will be used to capture initial interview data. Interviewers will be using laptops during the interviews, entering information directly into Qualtrics. On a weekly basis, a research team member at Michigan State University will download data from the password-protected Qualtrics program into a file on the password-protected secure server hosted by the Psychology Department at Michigan State University.


How are PII Data Separated and Stored? Any PII collected at the same time as survey response data (such as the contact information) is stored separately. Forms and databases used to manage the sample, which contain PII collected about the sample members, do not contain survey questionnaire data. The survey housed on Qualtrics does not include any questions that ask for identifiable information.


How are PII Data Used and Retrieved? PII will be used to re-contact sample members for follow-up interviews at six month intervals. PII data, including names and phone numbers, will also be used to collect follow-up data from local agency administrative record systems about the survivor respondents, such as whether they are still clients of the agency and how much time agency staff have spent providing services to them.  PII is also used to confirm sample members’ identity prior to follow-up data collection. Retrievals of identifiable information are performed only for the purposes of locating sample members for follow-up data collection and for gaining additional information about the services received by the sample members at the agencies. 


For follow-up surveys at 6, 12, and 18 months, cases will be released on a monthly basis according to the baseline survey date (e.g., the oldest cases, grouped by date of completed baseline survey, will be released to be called first each month; the second-oldest cases next; and so on). This means that while the contractor must have identifiable information to call a particular respondent for follow-up, cases will be retrieved by a chronological case number matched with the contact information and called by chronological cohort. During these re-contact efforts, the contractor will seek updated contact information for each respondent in the survey sample and, ultimately, will conduct follow-up surveys with the entire sample. Although individual sample members will be contacted using name and phone number (i.e., the contractor will call Jane Doe, a member of the study, to get her updated contact information and then to conduct her follow-up survey), the update is carried out by chronological cohort, not alphabetically by name or by any other personal identifier. Only the chronological case number indexes the survey data with the contact information.


In cases where the sample member’s telephone number on record is no longer valid, or the sample member does not respond to the phone call or email and is no longer at the address given, the contractor will contact either the agency or the alternate contacts (e.g., family members, friends, schools, organizations) provided by the sample member.


If sample members cannot be located using the information provided upon enrollment or from prior interviews, the contractor will work with the agency in order to locate the sample members (if permission is granted by the research participant). 


All PII data are stored separately and securely from de-identified study data. One password-protected folder on the department server will link survivors’ contact information to their ID numbers and the names of their advocates. Only the principal investigator, project coordinators, and senior study staff will have access to this folder. This information is used to track participants over time in this longitudinal study, so senior staff need it to accurately locate participants and to populate their online survey with information provided in earlier interviews (e.g., the name of the person who abused them). A separate electronic folder on MSU’s secure server will contain data linked only to participant ID numbers.


  11. Justification for Sensitive Questions


The interviews include questions about private matters such as experiences of abuse, financial barriers, housing barriers (including having a felony record), and misuse of substances. These questions are important to this study because they all relate to housing stability and are factors that the intervention being evaluated is trying to influence.


Respondents will also be asked about their race/ethnicity, gender identity, age, disability status, and sexual orientation, all of which are sensitive questions. They are important to the study because the intervention may be more or less effective for different domestic violence survivors, and it is critical to capture this information.


Respondents will be informed, through the consent form as well as verbally throughout the interview, that they may refuse to answer any question without penalty and that they can end their participation at any time. They will be told why sensitive questions are being asked, and will be aware that their answers will be kept private to the extent allowed by law.


We are not requesting respondents’ social security numbers.


  12. Estimates of Annualized Hour and Cost Burden


Exhibit 12A.1 summarizes the reporting burden on study participants. Study enrollment of domestic violence survivors for the DVHF evaluation took place over 18 months, so the annualized burden for the baseline Time 1 survey is based on 12/18 of the sample of 324 (i.e., 216 respondents). Follow-up surveys are administered 6, 12, 18, and 24 months after the baseline Time 1 interview, meaning participants complete two follow-up surveys in the first year and two in the next. Each follow-up survey is administered over an 18-month period to correspond to the study enrollment period; therefore, the annualized burden for the follow-up surveys is likewise based on 12/18 of the sample (216).
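For reference, the annualization arithmetic behind the survivor rows of Exhibit 12A.1 works out as follows:

$$324 \times \tfrac{12}{18} = 216 \text{ respondents per year}; \qquad 216 \times 1 \times 1.25 \text{ hours} = 270 \text{ annual burden hours (baseline row)}$$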


The primary service providers working with the domestic violence survivors were surveyed once during the first year of the study about their work history and demographics. They also complete a survey for each domestic violence survivor in their caseload who is a participant in the study (approximately 16 survivors per provider), six months after the survivor enrolls. Since study enrollment occurs over 18 months, the annualized burden for the service providers is likewise based on 12/18 of the expected sample (216 of 324).


The study also includes data collection from an agency point of contact (POC) who works with the agency's data. On a monthly basis for the first 24 months of data collection, the agency POC verifies the number of advocates working in the agency, advocate caseloads, whether and how much money the agency has for flexible funding, start and end dates of study participants' receipt of services, and the agency's current decision-making process for determining services.


Questionnaire response times were generated based on consultation with agency staff, previous experience, as well as through piloting both the in-person interviews (with domestic violence survivors) and the online surveys (with service providers). The cost burdens were generated based on consultation with agency staff and 25 years of conducting similar research.


The annualized burden for questionnaire response is estimated from the total number of completed questionnaires proposed and the time required to complete the questionnaires. The total annualized burden was originally 913 hours and is now expected to be 1,183 hours.



12A.1 Estimated Annualized Burden Hours

Revised Total Estimated Annualized Burden Hours (the 24-month follow-up row reflects the requested revision)

| Form Name | Type of Respondent | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Annual Burden Hours |
|---|---|---|---|---|---|
| Time 1 (Baseline) Interview¹ | Domestic violence survivors | 216 | 1 | 1.25 | 270 |
| Follow-up Interviews | Domestic violence survivors | 216 | 2 | 1.25 | 540 |
| 24-Month Follow-up Interview | Domestic violence survivors | 216 | 1 | 1.25 | 270 |
| Online survey about advocates’ work history and demographics | Victim service advocates | 20 | 1 | 15/60 | 5 |
| Online survey of advocates’ work | Victim service advocates | 20 | 13 | 20/60 | 86 |
| Form for community agency points of contact to verify agency information (monthly) | Community agency point of contact | 4 | 12 | 15/60 | 12 |
| Original Total | | | | | 913 |
| Revised Total with 24-Month Follow-up Interview | | | | | 1,183 |

12A.2 Estimated Annualized Burden Costs

Revised Total Estimated Annualized Burden Costs (the 24-month follow-up row reflects the requested revision)

| Form Name | Type of Respondent | Total Annual Burden Hours | Hourly Wage Rate | Total Respondent Cost |
|---|---|---|---|---|
| Time 1 (Baseline) Interview² | Domestic violence survivors | 270 | $15.00 | $4,050.00 |
| Follow-up Interviews | Domestic violence survivors | 540 | $15.00 | $8,100.00 |
| 24-Month Follow-up Interview | Domestic violence survivors | 270 | $15.00 | $4,050.00 |
| Online survey about advocates’ work history and demographics | Victim service advocates | 5 | $20.00 | $100.00 |
| Online survey of advocates’ work | Victim service advocates | 86 | $20.00 | $1,720.00 |
| Form for community agency points of contact to verify agency information (monthly) | Community agency point of contact | 12 | $21.00 | $252.00 |
| Original Total | | 913 | | $14,222.00 |
| Revised Total with 24-Month Follow-up Interview | | 1,183 | | $18,272.00 |
  13. Estimates of Other Total Annual Cost Burden to Respondents or Recordkeepers/Capital Costs


There are no capital costs or costs of maintaining capital for the participating agencies or study respondents.


  14. Annualized Cost to Federal Government


The original clearance request for the Domestic Violence Housing First demonstration evaluation covered data collection from intervention participants at four time points: baseline at study enrollment and 6-, 12-, and 18-month follow-ups. In addition, it included implementation data collection from service providers at one time point and monthly contact for 19 months with agency points of contact to report agency implementation data.

The total original estimated cost to the federal government for the DVHF demonstration evaluation was $1,686,110 over a three-year period, for an annualized cost of $562,036.67. The estimated cost for the new 24-month follow-up data collection activities is $843,000 over two years, for an annualized cost of $421,500. Added together, the total cost for the DVHF demonstration evaluation is $2,529,110 over a five-year period, for an annualized cost of $505,822. These costs include payments to the evaluation contractor for data collection, payments to participants and sites, management of the evaluation, and labor for data analysis and reporting.


  15. Explanation for Program Changes or Adjustments


This is a revision to an existing data collection effort that was approved by OMB in August 2017. The requested revision will add a fourth follow-up data collection at 24 months post-baseline. The justification for this request is based on an earlier longitudinal evaluation of an advocacy intervention for domestic violence survivors, in which researchers found that noteworthy changes were still occurring for survivors between the 18- and 24-month time points.


  16. Plans for Tabulation and Publication and Project Time Schedule


Impact Analyses

Overview. The impact of DV Housing First on each of the major outcomes will be assessed through the use of mixed-effects longitudinal regression, also known as longitudinal multilevel modeling or longitudinal MLM (Hedeker & Gibbons, 2006; Willett & Singer, 2003). This method will allow us to examine change over time on each outcome from pre-intervention through the 24-month follow-up and to test differences in the trajectory of change between those who received mobile advocacy and flexible funding and those who received “standard services.” Because services will be provided in a naturalistic way, without artificial research assignment of participants to specific components, we will examine and adjust for pre-existing differences between participants who receive mobile advocacy and flexible funding and those who do not. This analytic approach will allow us to generate causal average treatment effects (analogous to conditional difference-in-difference estimates), as well as examine the effects of baseline covariates and their potential interaction with type of service received (Abadie, 2005).

Data preparation and scaling. All raw data will be examined to verify quality and identify potential outliers and distributional issues. Psychometric properties of existing scales will be verified, and psychometric analyses will be conducted on modified measures. To reduce Type I errors due to comparisons on multiple outcome measures, data reduction (e.g., second-order confirmatory factor analysis) will be used where feasible to guide the combination of individual scales and variables into meaningful constructs. For example, it may be appropriate to combine measures of depression, anxiety, and PTSD to form a composite construct assessing mental health more generally. Significant findings on composite constructs will be followed up with parallel analyses on component variables. In addition, where possible we will estimate multivariate models examining change on multiple constructs in omnibus analyses.


Analyses of specific hypotheses and exploratory research questions.

Hypothesis 1: Survivors receiving mobile advocacy and flexible financial assistance will show greater improvement in housing stability, economic stability, safety, quality of life, and mental health and substance abuse compared to survivors receiving “standard services” that either do not include mobile advocacy or flexible funding, or include minimal levels.

Longitudinal MLM will be used to model outcome trajectories over five time points (pre-intervention followed by 6-, 12-, 18-, and 24-month follow-ups) for each outcome variable or construct. Repeated assessments of each survivor will be modeled at level 1 of the MLM; both linear and quadratic slope terms will be included if needed to reflect acceleration or slowing of change over time. Type of service received (mobile advocacy and flexible funding vs. standard services) will be added to the model at level 2, allowing tests of the significance of trajectory differences between the two service types (i.e., by estimating Service type x Slope interactions). To reduce the risk of Type I errors due to testing of multiple outcome variables, we will also estimate multivariate multilevel models (MMLM), with multiple construct measures at level 1, time at level 2, and service type at level 3, in order to assess the overall impact of service type on time trends across and within the multivariate constructs. For all of these analyses, type of service received will be determined through interviews with survivors and their advocates about the services that were received/provided; see Implementation Evaluation. MLM analyses will be conducted using Stata (StataCorp, 2015), HLM (Raudenbush, Bryk, Cheong, & Congdon, 2011), and/or Mplus (Muthén & Muthén, 2016).
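As an illustration, one plausible specification of the two-level growth model just described (the notation is ours, not taken from the study protocol) is:

$$\text{Level 1:}\quad Y_{ti} = \pi_{0i} + \pi_{1i}\,\text{Time}_{ti} + \pi_{2i}\,\text{Time}_{ti}^{2} + e_{ti}$$

$$\text{Level 2:}\quad \pi_{0i} = \beta_{00} + \beta_{01}\,\text{Svc}_{i} + r_{0i}; \qquad \pi_{1i} = \beta_{10} + \beta_{11}\,\text{Svc}_{i} + r_{1i}; \qquad \pi_{2i} = \beta_{20} + \beta_{21}\,\text{Svc}_{i}$$

where $\text{Svc}_{i} = 1$ if survivor $i$ received mobile advocacy and flexible funding and $0$ otherwise. The Service type x Slope interactions referenced above correspond to tests of $\beta_{11}$ (linear) and $\beta_{21}$ (quadratic).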

Although we believe that much of the variation in services provided is likely related to fluctuations in agency resources (e.g., advocates’ caseloads, availability of flexible funds), some variation in services received may be related to characteristics of survivors themselves (e.g., their history of homelessness, barriers to stable housing, family characteristics). We will use multivariate analysis of variance to assess the extent to which there are significant baseline differences between survivors who receive mobile advocacy and flexible financial assistance and those who receive standard services and to determine the amount of between-groups variance explained by personal characteristics.

Because baseline differences may affect outcome trajectories (i.e., it may not be reasonable to assume that outcome trajectories will be parallel despite survivors’ starting at different baseline levels), it may be necessary to adjust for baseline differences in order to reduce this potential source of bias (Abadie, 2005). To accomplish this, we will use propensity score weighting or adjustment on the propensity score, methods more practicable in small-sample research than propensity score matching. Propensity scores will be derived from logistic regression models predicting service type from survivors’ baseline characteristics. The propensity model will be carefully examined to ensure that it is effective in balancing the groups on baseline covariates (Austin, 2011; Austin & Stuart, 2015). Once adequacy has been verified, propensity scores or weights derived from these models will be incorporated into the level 2 models for each outcome, along with type of service received.
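A minimal sketch of this propensity-weighting step, assuming synthetic data and hypothetical covariate names (the study's actual variables and software, e.g., Stata, may differ):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the baseline data; variable names are illustrative only.
rng = np.random.default_rng(0)
n = 320
df = pd.DataFrame({
    "homeless_history": rng.integers(0, 4, n),    # hypothetical: prior homelessness episodes
    "housing_barriers": rng.integers(0, 10, n),   # hypothetical: housing-barrier index
    "svc_type": rng.integers(0, 2, n),            # 1 = mobile advocacy + flexible funding
})

# Logistic regression predicting service type from baseline characteristics.
X = sm.add_constant(df[["homeless_history", "housing_barriers"]])
ps = sm.Logit(df["svc_type"], X).fit(disp=0).predict(X)

# Inverse-probability-of-treatment weights (ATE form); these weights, or the
# propensity scores themselves, would then enter the level-2 outcome models.
df["iptw"] = np.where(df["svc_type"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# Balance check: weighted standardized mean difference on each covariate
# (a simplified diagnostic in the spirit of Austin & Stuart, 2015).
for cov in ["homeless_history", "housing_barriers"]:
    treated = df["svc_type"] == 1
    m1 = np.average(df.loc[treated, cov], weights=df.loc[treated, "iptw"])
    m0 = np.average(df.loc[~treated, cov], weights=df.loc[~treated, "iptw"])
    print(f"{cov}: weighted SMD = {(m1 - m0) / df[cov].std():.3f}")
```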


Hypothesis 2: As parents’ housing stability and well-being increase, so too will children’s outcomes. Specifically, children will demonstrate positive changes over time in school attendance and achievement, behavioral problems, and social-emotional skills.

Children’s outcomes over time will be analyzed in the same manner as outcomes for survivors, with repeated assessments of each child modeled at level 1 of the MLM, including both linear and quadratic slope terms if needed to reflect acceleration or slowing of change over time. Service type and propensity scores or weights will be incorporated at level 2, and tests of outcome change will involve the Service type x Slope interaction. Because not all survivors will have children, it is possible that a different propensity model will be needed to adjust for baseline differences in service type within the subsample with children. If so, the same procedures outlined above will be applied.

Mediation analysis will be conducted to test whether improvement in children’s outcomes can be explained by improvement in their parents’ housing stability and well-being. Specifically, a potentially mediating parent outcome will be entered into each MLM child outcome model as a time-varying covariate at level 1. For example, to test whether parent housing stability mediates the impact of service type on child school attendance, housing stability at each time point will be added at level 1 to the MLM modeling child school attendance over time. The statistical test of the indirect effect of service type (level 2) -> housing stability (level 1) -> child school attendance (level 1) will assess whether the impact of service type on change in child school attendance is mediated by change in housing stability (Bauer, Preacher, & Gil, 2006).
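In compact form (again using notation of our own choosing), the indirect effect being tested is the product of the two constituent paths:

$$\text{indirect effect} = a \times b, \qquad a:\ \text{service type} \rightarrow \text{parent housing stability}, \qquad b:\ \text{parent housing stability} \rightarrow \text{child school attendance}$$

with standard errors for the product estimated using the multilevel procedures of Bauer, Preacher, & Gil (2006).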


Exploratory Question 1: Can advocates accurately predict which survivors will be stably and safely housed over time?

Advocates’ prediction of each survivor’s likelihood of remaining safely and stably housed (see advocate interview, Implementation Evaluation) will be added at level 2 to the MLMs modeling housing stability and safety over time. The strength and significance of the advocate’s prediction in relation to the survivor’s trajectory over time will be tested, both as a main effect and in interaction with service type, to test whether advocates who have provided more intensive mobile advocacy may be more accurate predictors.


Exploratory Question 2: Does this type of intervention work better for some survivors than for others?

To test whether baseline characteristics moderate the impact of service type on outcome trajectories over time, interaction terms involving a baseline characteristic (e.g., extent of homelessness history) and service type will be added to each MLM tested under Hypotheses 1 and 2. Significant interactions will be further examined to determine which service type (mobile advocacy and flexible funding vs. standard services) is associated with more positive outcome trajectories for which survivors (e.g., those with more or less extensive histories of homelessness).


Exploratory Question 3: Are there particular agency characteristics that are associated with better outcomes (e.g., procedures for determining services, number of advocates available)?

To test whether the characteristics of a given agency are associated with better outcomes, agency identifiers will be added as level 2 covariates to each MLM tested under Hypotheses 1 and 2. In addition to examination of main effects, interactions with service type will be tested to assess whether service type differences vary across agencies.


Exploratory Question 4: Are there particular community characteristics that are associated with better outcomes (e.g., more available housing)?

To test whether community characteristics are associated with better outcomes, community (Seattle vs. Yakima) will be added as a level 2 covariate to each MLM tested under Hypotheses 1 and 2 and tested for significant effects on outcome trajectories, conditional on service type. Both main effects and interaction effects with service type will be tested.


Missing data handling. Missing data will be minimized through the use of proven methods of participant retention and careful, face-to-face interviewing. In addition, one of the advantages of MLM analytic approaches is the ability to retain in analysis all individuals with level 2 data, including those with missing or mistimed interviews. It should be possible to include all individuals who complete initial interviews in the analyses. Pattern mixture modeling (Little, 2009) will be used to determine whether missing data affect study conclusions or are “ignorable” (i.e., conditionally missing at random). Ignorable missing data will be estimated using expectation maximization and multiple imputation procedures appropriate for longitudinal data (Enders, 2010). Sensitivity analysis will be used to examine the possible impact on study conclusions involving any missing data found to be nonignorable (Daniels & Hogan, 2008).
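A minimal, simplified sketch of the multiple-imputation step, using the MICE implementation in statsmodels; the variable names and the cross-sectional OLS formula are placeholders, not the study's actual longitudinal imputation models:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Synthetic data with missingness; names are illustrative only.
rng = np.random.default_rng(1)
n = 320
df = pd.DataFrame({
    "svc_type": rng.integers(0, 2, n).astype(float),          # 1 = DVHF services
    "months": rng.choice([0.0, 6.0, 12.0, 18.0, 24.0], n),    # assessment wave
    "depression": rng.normal(10, 3, n),                       # hypothetical outcome
})
df.loc[rng.random(n) < 0.15, "depression"] = np.nan           # ~15% missing outcome

imp = mice.MICEData(df)                                       # chained-equations engine
model = mice.MICE("depression ~ svc_type + months", sm.OLS, imp)
results = model.fit(n_burnin=10, n_imputations=20)            # pools over 20 imputations
print(results.summary())
```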


Statistical power.

Hypothesis 1. (Survivors receiving mobile advocacy and flexible financial assistance will show greater improvement in housing stability, economic stability, safety, quality of life, and mental health and substance abuse compared to survivors receiving “standard services” that either do not include mobile advocacy or flexible funding, or include minimal levels.) The sample of 320 will provide greater than 80% power at 2-tailed p < .05 for a minimum detectable difference of d=.25 SD (a small effect size) on outcome trajectories (both linear and quadratic) across time (Spybrook et al., 2011), assuming approximately 50% of the sample receive mobile advocacy and flexible financial assistance. The N will provide adequate power even if the proportion receiving mobile advocacy and flexible financial assistance is as low as 30%, with the minimum detectable difference in slopes rising to d=.38 SD, which is still a small-to-medium effect size. The anticipated minimum detectable difference in slopes of d=.25 SD translates into the following differences in raw score metric, which are based on modal standard deviations from published studies of similar populations, where available: 8.50 points on the Community Composite Abuse Scale; 0.53 points on the Housing Instability Index; 1.50 points on the PHQ-9 depression scale; 1.15 points on the GAD-7 anxiety scale; 0.30 points on Quality of Life; 0.25 points on Social Support. For child outcomes, the minimum detectable difference in slopes will be larger (d=.43, assuming that 50% of their parents receive mobile advocacy and flexible financial assistance) due to the anticipated smaller sample size; this translates into a raw score difference in slopes of 1.12 points on the Strengths and Difficulties total score. These power estimates take into account the use of propensity score covariates, assuming that they account for as much as 30% of the variance in the outcome trajectory.
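The raw-score figures above follow directly from the definition of a standardized mean difference; for example, the PHQ-9 value implies an assumed standard deviation of about 6 points:

$$\Delta_{\text{raw}} = d \times SD, \qquad 0.25 \times 6.0 = 1.50 \text{ points on the PHQ-9}$$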


Hypothesis 2. (As parents’ housing stability and well-being increase, so too will children’s outcomes. Specifically, children will demonstrate positive changes over time in school attendance and achievement, behavioral problems, and social-emotional skills.) Power will be lower for tests of whether child outcomes are mediated by parent outcomes, both because these tests involve indirect effects and because the sample of survivors with children will be somewhat smaller than the total of 320. Assuming that the standardized direct effects comprising the indirect effect (i.e., service type -> parent outcome and parent outcome -> child outcome) are both at least .21 and that the sample of participants with children is at least 150, power will exceed 80% to detect these mediated effects.


Exploratory Questions. Power to test the main effect in Exploratory Question 1 (Can advocates accurately predict which survivors will be stably and safely housed over time?) should be similar to the power for Hypothesis 1 (i.e., 80% power to detect a minimum slope difference of d=.25), assuming adequate variability in advocates’ predictions and in observed housing stability. Power to test the interaction between service type and advocates’ predictions will be lower, but the sample of 320 should be adequate to detect moderate-sized interaction effects (d=.50 SD). For Exploratory Question 2 (Does this type of intervention work better for some survivors than for others?), power to test interactions between survivor characteristics and type of service received will depend largely on the distribution of the survivor characteristic in question. Characteristics that have adequate variance (e.g., survivor age) or can be grouped into a small number of categories of adequate size (e.g., extent of homelessness history) should provide adequate power to detect moderate-sized (d=.50 SD) interaction effects. For Exploratory Question 3 (Are there particular agency characteristics that are associated with better outcomes?), power to test the main effect of an agency characteristic on outcome slopes should be adequate to identify a small-to-moderate difference in slopes (i.e., d = .25 to .50), assuming adequate variance on the agency characteristic. Several meaningful differences among the agencies have already been identified, so it seems likely that there will be adequate variance on at least some important characteristics. Power to test the interaction between service type and agency characteristic will depend on the distribution of service type, conditional on agency characteristic; if, as could be expected, service type is highly correlated with agency characteristic, power to test the interaction effect may be low. For Exploratory Question 4 (Are there particular community characteristics that are associated with better outcomes (e.g., more available housing)?), power should be adequate to find moderate-to-large main effect differences across the two communities. As with Exploratory Question 3, power to test the interaction between service type and community characteristics will depend on the distribution of service types across the two communities; if it is roughly proportional, there should be adequate power to detect small-to-moderate interaction effects.


Implementation Analyses

Analysis of the extensive implementation data will serve four purposes: (1) description of the array of services provided and examination of variability, both within and across agencies; (2) aggregation of intensity of advocacy services and use of flexible funding to determine which survivors received “mobile advocacy and flexible funding” vs. standard services, for use in the impact analysis; (3) examination of agreement between survivor and advocate on the nature of survivor needs and barriers as well as services provided; and (4) incorporation into the impact analysis of variables derived from the implementation analysis, in order to examine which aspects of services received are associated with positive outcomes over time. Each of these is described more fully below:

  1. Descriptive analyses of services. All implementation data collected from survivor, advocate, and agency records will be carefully examined and summarized in order to document the services provided. Variation among survivors and across agencies and communities will be examined, as well as associations between survivor characteristics and types/amounts of service received. Variability in services across time will also be examined in relation to fluctuations in availability of flexible funds, advocate time (i.e., waiting lists), and other possible constraints on service availability.

  2. Aggregation of services into service types. Amount and intensity of advocacy services provided and flexible funds received will be aggregated to determine service type. Cluster or latent class analysis may be helpful in this process if classification differences are not clear from more straightforward methods such as cross-tabulation.

  3. Examination of agreement between survivor and advocate. Although survivors and advocates have different perspectives and access to different types of information, it will be useful to examine the extent to which they agree on basic elements such as survivor barriers and needs, whether these needs were met, amount of time spent together, and use of flexible funds. Cohen’s kappa and percent agreement will be used to assess agreement (see the sketch following this list).

  4. Incorporation of implementation data into impact analysis. For a more fine-tuned examination of elements of service that may be associated with positive change in outcome, implementation variables will be added to the MLM models described under Hypotheses 1 and 2 of the impact analysis. For example, to test whether outcome change is associated with the extent to which advocacy services were trauma-informed and culturally aware, scores on the TIPS will be added to the MLM and tested for association with outcome trajectories.
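As referenced in item 3 above, a minimal sketch of the survivor-advocate agreement computation; the paired ratings are invented for illustration:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired reports on the same binary item (e.g., "was this need met?"),
# one rating per survivor-advocate pair; values are invented for illustration.
survivor = np.array([1, 0, 1, 1, 0, 1, 0, 1])
advocate = np.array([1, 0, 0, 1, 0, 1, 1, 1])

kappa = cohen_kappa_score(survivor, advocate)   # chance-corrected agreement
pct = float((survivor == advocate).mean())      # simple percent agreement
print(f"Cohen's kappa = {kappa:.2f}; percent agreement = {pct:.0%}")
```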


Publication Plans

We intend to publish the findings from this study through various means designed to reach academics, policy makers, practitioners, and general audiences interested in the intersection of domestic violence and housing. To that end, we will offer webinars, present findings at appropriate conferences, publish findings in peer-reviewed journals, and disseminate findings through social media, infographics, and other appropriate outlets. While it is premature to provide a full list of potential publications before data are collected and analyzed, we anticipate publishing findings from the baseline data as well as longitudinal findings that focus on our outcome and implementation analyses.


Timeline of Data Collection

The baseline data collection will occur over 15 months, beginning in August 2017 and ending in November 2018. Follow-up data collections are projected to occur between February 2018 and November 2020.


  17. Reason(s) Display of OMB Expiration Date is Inappropriate


All instruments will display the OMB number and the expiration date.


  18. Exceptions to Certification for Paperwork Reduction Act Submissions


There are no exceptions to the certification.



1 Baseline data collection has been completed.

2 Baseline data collection has been completed.




