
Evaluation of Employment Coaching for TANF and Related Populations

(Second Follow-Up Survey)



OMB Information Collection Request

0970-0506




Supporting Statement

Part A

Revised March 2020

Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:

Hilary Bruck

Victoria Kabak



  • Status of study:

    • This request is part of an ongoing evaluation (0970-0506). A previous information collection request (ICR) covered data collection activities for both an impact and an implementation study that are being conducted under this evaluation. Approved data collection activities for the impact study include: (1) baseline data collection and (2) the first follow-up survey. Approved data collection activities for the implementation study include: (1) semi-structured staff interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions. This current ICR seeks approval for the second follow-up survey for the impact study (see Attachment N for the survey instrument and Attachment O for a question-by-question justification for the survey instrument) and for minor wording revisions to study notifications (Attachment I).

  • What is being evaluated (program and context) and measured:

    • The evaluation is assessing the effectiveness of employment coaching interventions in helping TANF and related populations obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve self-regulation skills and overall well-being.

  • Type of study:

    • The evaluation includes an impact study (individuals are randomly assigned to treatment and control conditions) and an implementation study.

  • Utility of the information collection:

    • Coaching may be a promising way to help low-income or at-risk people become economically secure; however, there is little evidence on the effectiveness of coaching for improving employment and self-sufficiency among TANF and other low-income populations. This evaluation will describe six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve self-regulation skills and overall well-being.

    • This information can be used by policymakers to inform funding and policy decisions and by practitioners to improve employment programs. If this information collection does not take place, policymakers and providers of coaching programs will lack high-quality information on the effects of the interventions, as well as descriptive information that can help refine the operation of coaching interventions so they can better meet participants’ employment and self-sufficiency goals.



A1. Necessity for the Data Collection

The Office of Planning, Research, and Evaluation (OPRE) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for a second follow-up survey conducted for the Evaluation of Employment Coaching for TANF and Related Populations (0970-0506) and for minor wording revisions to study notifications (Attachment I). The objective of this evaluation is to provide information on coaching interventions implemented by Temporary Assistance for Needy Families (TANF) agencies and other employment programs. The evaluation will describe up to six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve their overall well-being. The evaluation includes both an experimental impact study and an implementation study. The second follow-up survey will contribute to the experimental impact study.

A previous information collection request (ICR; 0970-0506) covered data collection activities for both an impact and an implementation study. Approved data collection activities for the impact study include: (1) baseline data collection and (2) the first follow-up survey. Approved data collection activities for the implementation study include: (1) semi-structured staff interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions. This current ICR seeks approval for the second follow-up survey for the impact study and for minor wording revisions to study notifications (Attachment I).

Study Background

Traditionally, TANF agencies and other employment programs help participants build job search skills, prescribe further education and training, and address barriers to employment, such as those caused by mental health problems or lack of transportation and child care. Despite a variety of strategies implemented over several decades, the assistance provided by these programs has been insufficient to enable many participants to achieve self-sufficiency (Hamilton 2012). In response, some researchers have suggested that employment programs seeking to help low-income populations find and keep jobs take an alternative approach in which traditional case management is replaced with or supplemented by employment coaching strategies. Long recognized as an effective approach to helping people meet career and personal goals, coaching has drawn increasing interest as a way to help low-income people gain and maintain employment and realize career and family goals (Annie E. Casey Foundation 2007).

Coaching strategies are typically informed by behavioral science and focus on the role of self-regulation skills in finding and keeping a job. Self-regulation skills allow people to intentionally control thoughts, emotions, and behavior (Blair and Raver 2012). They include executive function (the ability to process, filter, and act upon information), attention, metacognition, emotion understanding and regulation, motivation, grit, and self-efficacy. Recent research suggests that poverty can hinder the development and use of self-regulation skills (Mullainathan and Shafir 2013). Research has shown that coaching is a promising way to help low-income or at-risk people. For example, an evaluation of two financial coaching programs for low- and moderate-income people found that the programs reduced debt and financial stress, and increased savings (Theodos et al. 2015). Similarly, coaching has been found to be effective in assisting people with disabilities to obtain employment. The Individual Placement and Support (IPS) model was designed to help clients with disabilities plan for, obtain, and keep jobs consistent with their goals, preferences, and abilities (Wittenburg et al. 2013). In experimental studies, IPS has improved employment outcomes across multiple settings and populations (Davis et al. 2012; Bond et al. 2015). However, there is little evidence on the effectiveness of coaching for improving employment and self-sufficiency among TANF and other low-income populations.

Drawing on the history of coaching in other contexts, some employment programs for low-income people—administered by TANF, other public agencies, and nonprofit organizations—have begun to provide coaches as a means of improving employment and self-sufficiency (Pavetti 2014). These coaches work with participants to set individualized goals and provide support and feedback as they pursue their goals (Ruiz de Luzuriaga 2015; Pavetti 2014). The coaches may take into account self-regulation skills in three ways. First, they may teach self-regulation skills and encourage participants to practice them. This may occur by helping the participant set goals, determining with the participant the necessary steps to reach those goals, modeling self-regulation skills, and providing rewards or incentives. Second, they may help participants accommodate areas where their self-regulation skills are less developed. For example, staff may help participants choose jobs that align well with their stronger self-regulation skills or suggest participants use a cell phone app to remind them of appointments. Third, the coaches may reduce factors that hinder the use of self-regulation skills. They may do this by teaching stress-management techniques or reducing the paperwork and other burdens placed on the participant by the program itself.

To learn more about these practices, OPRE contracted with Mathematica Policy Research and Abt Associates to evaluate the following coaching interventions: MyGoals for Employment Success in Baltimore; MyGoals for Employment Success in Houston; Family Development and Self-Sufficiency program in Iowa; LIFT in New York City, Chicago, and Los Angeles; Work Success in Utah; and Goal4 It! in Jefferson County, Colorado. The second follow-up survey will contribute to the impact study, which will address the effectiveness of each coaching intervention in improving employment, self-sufficiency, and self-regulation outcomes as well as other measures of well-being.

Legal or Administrative Requirements that Necessitate the Collection

There are no legal or administrative requirements that necessitate the data collection. The collection is being undertaken at the discretion of ACF.

A2. Purpose of Survey and Data Collection Procedures

Overview of Purpose and Approach

The information collected through the second follow-up survey will be used to learn about the effectiveness of coaching interventions at improving outcomes for participants in employment programs serving TANF and related populations. This information can be used by policymakers to inform funding and policy decisions. If the information collection does not take place, policymakers and providers of coaching programs will lack high-quality, long-term information on the effects of the interventions.

Research Questions

The second follow-up survey will provide data for the impact study to answer the following research questions:

  1. Do the coaching interventions improve participants’ employment outcomes (such as employment, earnings, job quality, job retention, job satisfaction, and career advancement); self-sufficiency (income, public assistance receipt); and other measures of well-being?

  2. Do the coaching interventions improve measures of self-regulation? To what extent do impacts on self-regulation explain impacts on employment outcomes?

  3. Are the coaching interventions more effective for some groups of participants than others?

  4. How do the impacts of the coaching interventions change over time?

Study Design

The study will evaluate the following coaching interventions: MyGoals for Employment Success in Baltimore; MyGoals for Employment Success in Houston; Family Development and Self-Sufficiency program in Iowa; LIFT in New York City, Chicago, and Los Angeles; Work Success in Utah; and Goal4 It! in Jefferson County, Colorado.

MyGoals for Employment Success in Baltimore and Houston

MyGoals is targeted to unemployed or underemployed adults between the ages of 18 and 56 who are receiving housing support from the housing authority. Its objective is to improve self-regulation skills and help participants find solutions to their problems in the short term while increasing their overall economic security and decreasing their reliance on public assistance in the long term. MyGoals is a three-year program. Coaches meet with participants every three to four weeks during the first two years and are encouraged to check in between sessions. They meet with participants less frequently in the third year.

Family Development and Self-Sufficiency Program

Iowa’s Department of Human Rights implements the Family Development and Self-Sufficiency (FaDSS) program through contracts with 17 local agencies across the state. This evaluation will include a subset of these local agencies. FaDSS is funded through the TANF block grant and serves only TANF participants. The objective of the program is to help families achieve emotional and economic independence. FaDSS is targeted to TANF recipients with barriers to self-sufficiency. The coaches meet with participants in their homes at least twice in each of the first three months and then monthly starting in the fourth month, with two additional contacts with the family each month. FaDSS expects to enroll 1,000 people for the evaluation rather than the 2,000 originally anticipated.

LIFT – New York City, Chicago, and Los Angeles

LIFT is a national non-profit that provides coaching and navigation services to clients in New York City, Chicago, Los Angeles, and Washington, DC. For the purposes of our evaluation, the New York, Chicago, and Los Angeles subsites will be aggregated and considered a single LIFT site. LIFT’s goal is to help clients find a path toward goal achievement and financial security by matching them with coaches. Clients set short-term and long-term goals and the coach helps clients build an action plan to achieve those goals. The LIFT coaching approach is nondirective and allows clients to choose the goals and milestones they want to work on. LIFT clients are expected to meet with a coach on a regular basis for up to two years. During the first month of the program, clients typically have two or three in-person sessions with a coach. After the first month, clients meet with coaches monthly to discuss progress toward goals and obstacles that are impeding progress. These sessions typically last 60 to 90 minutes.

Work Success – Utah

Work Success is an employment coaching program administered by Utah’s Department of Workforce Services—an agency that oversees TANF, the Supplemental Nutrition Assistance Program, the Workforce Innovation and Opportunity Act, and other workforce programs. The program is offered statewide in about 30 employment centers (American Job Centers), with one or two coaches per center. The program served about 1,350 clients in 2016, largely concentrated in the greater Salt Lake City area. The objective of the program is to improve employment outcomes by focusing on job placement. Each participant is assigned a coach, who works with him or her to set goals and review progress toward those goals. The Work Success coach meets with clients one-on-one daily while they are in the program to discuss their individual goals, the steps they will take to achieve those goals, and any challenges they are facing. Coaching also happens in group settings, where the coach engages the group in soft skills training, identification of skills and strengths, and other group activities.

Goal4 It! Jefferson County, Colorado

Goal4 It! is an evidence-informed, customer-centered framework for setting and achieving goals developed by Mathematica Policy Research. It was designed to be a replicable and sustainable coaching approach that can be used in a TANF, workforce, or other social service environment. Using the Goal4 It! approach, trained coaches help clients set meaningful goals, break goals down into manageable steps, develop specific plans to achieve the steps, and regularly review goal progress and revise their goals and/or plans. Coaches and case managers meet at least once per month with clients who are not working and at least once every two months with clients who are working. The first meeting usually lasts one hour; ongoing meetings last 30 to 45 minutes. Each coach and case manager serves about 45 clients.

The two main criteria for selecting the coaching interventions for the evaluation were that: (1) an evaluation of the program would address ACF’s policy interests and inform the potential development of coaching interventions in the future; and (2) it was feasible to conduct a rigorous impact evaluation of the coaching intervention. To meet the first broad criterion, the program in which the intervention is embedded needed to serve a low-income population and focus on employment, and the coaching intervention had to be robust and well implemented. To meet the second broad criterion, random assignment must have been feasible, the potential number of study participants must have been large enough to detect an impact expected from the intervention, and the program’s management and staff must have been supportive of an experimental evaluation.

The second follow-up survey, which is part of the overall impact study, will provide rigorous evidence on whether the coaching interventions are effective, for whom, and under what circumstances. The study is experimental. Participants eligible for the coaching services were asked to consent to participate in the study (Attachment A) and, if consent was given, were randomly assigned to one of two groups: a treatment group offered coaching and a control group not offered coaching. Individuals who did not consent to participate in the study were not eligible to receive coaching, were not randomly assigned, and will not participate in the data collection efforts. The control group may receive other services within the program, and both groups will remain eligible for other services offered in the community. For example, the control group may receive regular case management from staff who have not been trained in coaching. With this design, the research groups are likely to have similar characteristics, so differences in outcomes too large to be attributable to chance can be attributed to the coaching intervention. Under 0970-0506, we are collecting information at baseline (before or during random assignment) from study participants and staff and again about 6 to 12 months after random assignment. This ICR seeks clearance for a second follow-up survey, which will be available via the web and telephone about 21 to 24 months after random assignment.
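To make the mechanics of this design concrete, the minimal sketch below illustrates 1:1 random assignment of consented participants to the two research groups. It is illustrative only; the function name and participant identifiers are hypothetical and do not represent the evaluation’s actual assignment system.

```python
import random

def randomly_assign(participant_ids, seed=20200301):
    """Illustrative 1:1 random assignment of consented participants.

    Returns a dict mapping each participant ID to "treatment" or
    "control". Fixing the seed makes the assignment reproducible.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("treatment" if i < half else "control")
            for i, pid in enumerate(ids)}

# Example: assign ten consented participants (hypothetical IDs).
assignments = randomly_assign([f"P{n:03d}" for n in range(10)])
print(assignments)
```

Because group membership depends only on chance, any systematic differences in later outcomes can be attributed to the offer of coaching rather than to preexisting differences between the groups.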

Universe of Data Collection Efforts



This ICR seeks clearance for a second follow-up survey for the impact study (Attachment N). The second follow-up survey will primarily collect data on outcomes of both the treatment and control group members, including outcomes related to employment, self-sufficiency, self-regulation, and service receipt. The second follow-up survey will collect a set of outcome data similar to the first. A question-by-question justification for the items included in the second follow-up survey is presented in Attachment O. The second follow-up survey will be available to participants via the web or telephone about 21 to 24 months after random assignment.

Previously approved data collection efforts are listed below. Additionally, OMB previously approved the study consent form (Attachment A) and notifications (Attachment I).

Impact Study

  • Baseline data collection (Attachment B)

  • First follow-up survey (Attachment C)

Implementation Study

  • Semi-structured management staff and supervisor interviews (Attachment D)

  • Staff survey (Attachment E)

  • In-depth participant interviews (Attachment F)

  • Staff reports of program service receipt (Attachment G)

  • Instructions for video recording coaching sessions (Attachment M)


A3. Improved Information Technology to Reduce Burden

This evaluation will use multiple applications of information technology to reduce burden. The second follow-up survey will be hosted online via a secure web link. To reduce burden, the survey will employ the following: (1) secure log-ins and passwords so that respondents can save the survey and complete it in multiple sessions; (2) drop-down response categories so that respondents can quickly select from a list; (3) dynamic questions and automated skip patterns so that respondents see only those questions that apply to them (including those based on answers provided earlier in the survey); and (4) logical rules for responses so that respondents’ answers are restricted to those intended by the question.
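As an illustration of how automated skip patterns and logical response rules work together, the sketch below encodes a few questions with validation ranges and routing rules. The question names and rules are invented for illustration and are not taken from the actual survey instrument.

```python
# Hypothetical skip-pattern and validation rules for a web survey.
# Each question has a set of valid responses and a routing rule that
# determines which question the respondent sees next.

QUESTIONS = {
    "employed": {
        "valid": {0, 1},  # 0 = no, 1 = yes
        "next": lambda ans: "hours_per_week" if ans == 1 else "job_search",
    },
    "hours_per_week": {
        "valid": range(0, 169),  # cannot exceed the 168 hours in a week
        "next": lambda ans: "job_search",
    },
    "job_search": {"valid": {0, 1}, "next": lambda ans: None},
}

def next_question(current, answer):
    """Validate an answer, then return the next applicable question."""
    q = QUESTIONS[current]
    if answer not in q["valid"]:
        raise ValueError(f"{answer!r} is not a valid response to {current!r}")
    return q["next"](answer)

# A respondent who reports not being employed skips the hours question.
assert next_question("employed", 0) == "job_search"
```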

Respondents also have the option to complete the second follow-up survey using computer-assisted telephone interviewing (CATI). CATI reduces respondent burden, relative to interviewing via telephone without a computer, by automating skip logic and question adaptations and by eliminating delays caused when interviewers must determine the next question to ask. CATI is programmed to accept only valid responses based on preprogrammed checks for logical consistency across answers.

A4. Efforts to Identify Duplication

Information that is already available from alternative data sources will not be collected again for this evaluation. We will be collecting information related to employment and earnings both through administrative records and directly from study participants. This information is not duplicative because the two sources cover different types of employment. Information on quarterly earnings from jobs covered by unemployment insurance will be obtained from National Directory of New Hires (NDNH) administrative records. The second follow-up survey will ask for earnings across all jobs, including those not covered by unemployment insurance. A number of experimental employment evaluations have found large differences in survey- and administrative-based earnings impacts (Barnow and Greenberg 2015). Therefore, collecting information from both sources is necessary for a full understanding of impacts on earnings. To further identify and avoid duplication, we will not request baseline characteristic information in the second follow-up survey from participants who already provided this information in the first follow-up survey.

A5. Involvement of Small Organizations

The data collection does not involve small businesses or other small entities.

A6. Consequences of Less Frequent Data Collection

A first follow-up survey is available to participants approximately six to 12 months after random assignment; this second follow-up survey will be available about 21 to 24 months after random assignment. The second follow-up survey will collect a set of outcome data similar to the first. This will allow an examination of whether the impacts of the program changed over time and whether changes in self-regulation skills were associated with changes in employment and self-sufficiency outcomes.

A7. Special Circumstances

There are no special circumstances for the proposed data collection efforts.

A8. Federal Register Notice and Consultation

Federal Register Notice and Comments

In accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and Office of Management and Budget (OMB) regulations at 5 CFR Part 1320 (60 FR 44978, August 29, 1995), ACF published a notice in the Federal Register announcing the agency’s intention to request an OMB review of this information collection activity. This notice was published on September 18, 2018, Volume 83, Number 181, pages 47176-47177, and provided a 60-day period for public comment. Attachment D provides a copy of this notice. During the notice and comment period, no comments were received.

Consultation with Experts Outside of the Study

Experts in their respective fields from OPRE, Mathematica Policy Research, Abt Associates, and the University of Chicago listed below were consulted in developing the design, data collection plan, and materials for which clearance is requested.

OPRE

Hilary Bruck, Senior Social Science Research Analyst

Victoria Kabak, Social Science Research Analyst


Business Strategy Consultants

Gabrielle Newell, Contract Social Science Research Analyst


Mathematica Policy Research

Dr. Sheena McConnell, Project Director

Dr. Quinn Moore, Deputy Project Director

Dr. Michelle Derr, Principal Investigator

Shawn Marsh, Survey Director


Abt Associates

Dr. Alan Werner, Principal Investigator

Dr. Bethany Borland, Senior Analyst


University of Chicago

Dr. James Heckman, Measurement Expert

A9. Incentives for Respondents

The Office of Management and Budget’s Office of Information and Regulatory Affairs (OIRA) approved a two-tiered incentive structure with an “early bird” incentive that provides survey respondents $35 if they complete the survey within four weeks of the initial notification and $25 if they complete it after four weeks. We have employed this incentive structure for participants in all six programs during the administration of both the first and second follow-up surveys to date. We now propose that the two-tiered incentive structure continue only among study participants in the two MyGoals sites in Baltimore and Houston, and that participants from the other four sites (FaDSS, LIFT, Jefferson County Colorado Works, and Work Success) be offered a $50 incentive for completing each survey, irrespective of whether they complete the survey within the four-week “early bird” period. The reason for proposing this change is that, given patterns of survey response in those four sites, there is a risk that our analysis will produce biased estimates of program impacts and will underrepresent participants in key analytic groups. The four sites for which we propose the higher monetary incentive differ from the MyGoals sites, which are run in conjunction with public housing programs where participants live; as a result, study participants in the MyGoals programs have exhibited higher levels of cooperation with the evaluation. The response rates for MyGoals participants in completed monthly cohorts are on target to reach 80 percent, with small differences in the response rates between treatment and control groups; these response rates have been achieved with shorter field periods, which limits recall issues. The response rates for FaDSS, LIFT, Jefferson County, and Work Success have been significantly lower and have required longer field periods for the first follow-up survey, with some monthly cohorts having to be closed before completion of the survey because they were due to start the second follow-up survey. The justification for the two-tiered incentive strategy is provided below. The justification for the change in incentive structure and amount for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites is included in Attachment P (“Request to change burden and incentive structure-amount”).


Background:


Estimates of program impacts may be biased if respondents differ substantially from non-respondents and those differences are correlated with assignment to the evaluation treatment or control groups. The risk of biased impact estimates increases with lower overall survey response rates or larger differences in survey response rates between the research groups (What Works Clearinghouse 2013). Thus, if low overall response rates or large differential response rates between the research groups are observed, differences between groups on key outcomes might be the result of differences in baseline characteristics among survey respondents and cannot be attributed solely to the effect of the coaching intervention (What Works Clearinghouse 2013).

Concerns about the potential for low overall response rates are particularly relevant to this study. The longitudinal nature of the study adds to the complexity of the second follow-up survey data collection. Additionally, the coaching interventions are designed for unemployed low-income people. A number of factors could complicate tracking such participants over time. These factors include:

  • Unstable housing.

  • Less use of mortgages, leases, public utility accounts, cell phone contracts, credit reports, memberships in professional associations, licenses for specialized jobs, activity on social media, and appearances in publications such as newspapers or blogs.

  • Use of an alias to get utility accounts because of poor credit and prior payment issues.

  • Use of pay-as-you-go cell phones. These phone numbers are generally not tracked in online databases. Pay-as-you-go cell phone users also switch numbers frequently, which makes contacting them across a follow-up period more difficult.

Differential response rates between the treatment and control groups could bias this study’s impact estimates. Participants assigned to the control group may be less motivated to participate than those assigned to the treatment group because they are not receiving the intervention. They may also feel that the surveys are not relevant to them.

Evidence supporting use of incentives:

Methodological research on incentives. Evidence from prior studies shows that incentives can decrease the differential response rate between the treatment and control groups, and therefore reduce nonresponse bias on impact estimates (Singer and Kulka 2002; Singer et al. 1999; Singer and Ye 2013). For example, incentives are useful in compensating for lack of motivation to participate among control group members (Shettle and Mooney 1999; Groves et al. 2000). Incentives have also been found to induce participation among sample members for whom the topic is less salient, including members of the control group (Baumgartner and Rathbun 1997), a finding that also applies with hard-to-reach populations, similar to the target population of the current study (Martinez-Ebers 1997). Other experimental research on incentives concludes that incentives significantly increase response rates, reduce the average number of contacts required to achieve completed surveys, and reduce overall survey data collection costs (Westra et al. 2015).

Research evidence from similar studies. Evidence from an incentive experiment conducted as part of the Self-Employment Training (SET) Demonstration, approved by OMB (OMB control number 1205-0505), suggests that incentives are a successful strategy for improving response rates for low-income populations. This experiment assessed the effectiveness of three incentive approaches: (1) offering a standard incentive of $25; (2) offering a two-tiered incentive, with an incentive of $50 if respondents completed an 18-month follow-up survey within the first four weeks and $25 if respondents completed the survey after four weeks; or (3) no incentive.

Results from the SET incentive experiment suggest that incentives substantially reduce both overall nonresponse rates and differential response rates between the research groups. Among sample members offered an incentive, this experiment resulted in a 73 percent overall response rate for those in the two-tiered incentive group and a 64 percent response rate for those in the standard incentive group. The response rate for sample members who were not offered an incentive was 37 percent. The differential response rate between research groups for sample members offered an incentive was 12 percentage points for the two-tiered incentive group (79 percent for the treatment group versus 67 percent in the control group) and 6 percentage points for the standard incentive group (67 percent for the treatment group versus 61 percent in the control group). The differential response rate was substantially higher for the no incentive group at 36 percentage points (55 percent for the treatment group versus 19 percent in the control group).

Based on evidence from SET, we anticipate that without incentives, the survey response rate would be unacceptably low; it is likely to be less than 50 percent. Such response rates would put the study at severe risk of biased impact estimates.

Evidence supporting use of two-tiered incentives for the follow-up survey:

In addition to determining whether the study requires use of incentives, we must determine the structure that the incentives will take. For study participants from the two MyGoals sites, we propose continuing to use the OMB-approved two-tiered incentive approach for the second follow-up survey, which is identical to the approach used for the first follow-up survey.1 We will continue to offer a $35 gift card to those who complete the survey, either online or by telephone, within the first four weeks after being first asked to complete it; respondents will receive a $25 gift card if they complete the survey after four weeks. A key aim of this “early bird” approach is reducing survey administration costs by encouraging low-cost online survey completion and reducing the need for costly location efforts. We propose using the two-tiered incentive model based on past experience from related studies, which observed lower survey costs due to reduced need for mail reminders, locating, and reminder calls. We anticipate that continuing to use the two-tiered incentive will help us achieve our response rate target of 80 percent for the two MyGoals sites. To date, the response rates in the MyGoals sites are on target to reach 80 percent, with small differences in the response rates between treatment and control groups. Using a two-tiered incentive structure will facilitate shorter data collection times and contain data collection costs. As described above, we propose changing the incentive structure and amount for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites; the justification for this change is included in Attachment P (“Request to change burden and incentive structure-amount”).

Research evidence from similar studies. Two impact evaluations conducted incentive experiments that informed our proposed two-tiered incentive structure: SET and YouthBuild.

The results of the SET incentive experiment described above showed that relative to standard incentives, the two-tiered incentive led to somewhat higher overall response rates (73 versus 64 percent) but somewhat greater differential nonresponse rates between the research groups (12 versus 6 percentage points). Thus, findings related to response rate patterns do not strongly favor one incentive approach over the other.

However, the SET incentive experiment also concluded that two-tiered incentives led to shorter response times, lower average costs, and lower total fielding costs (including the cost of the incentive payments). Specifically, the incentive experiment found that 98 percent of survey completes in the two-tiered incentive group came within four weeks of release, compared to 86 percent for the standard incentive group. Faster response time has implications for data quality because it keeps the survey’s reference period as close as possible to the intended follow-up point after study enrollment. Faster response times also have important implications for data collection cost. In the SET incentive experiment, the average cost per complete was approximately 10 percent higher for the standard incentive group than for the two-tiered incentive group, despite the fact that the incentives offered under the two-tiered model were larger than those offered under the standard model.

Please note that the SET incentive experiment cannot disentangle which aspect of the two-tiered incentive structure—two tiers or a higher overall incentive amount—led to higher overall response rates, faster response times, and lower overall costs. Thus, we do not know what the response and cost patterns would have been with a two-tiered structure that used a lower initial incentive amount. The proposed initial incentive amount for this study ($35 for response within the first four weeks) is lower than the one used in SET ($50 for response within the first four weeks). The final incentive amount is the same ($25 for response after four weeks).

The YouthBuild evaluation (OMB control number 1205-0503), sponsored by the Department of Labor, also incorporated an incentive experiment. This experiment assessed the effectiveness of two incentive approaches: (1) offering a standard incentive of $25; or (2) offering a two-tiered incentive, with an incentive of $40 if respondents completed a 12-month follow-up survey within the first four weeks and $25 if respondents completed the survey after four weeks.

Results from the YouthBuild incentive experiment are consistent with those of the SET incentive experiment in terms of effects on response rate, response time, and cost. The tiered incentive structure slightly increased the overall response rate: sample members in the two-tiered incentive group had an overall response rate of 72 percent, compared to 68 percent for the standard incentive group. We do not have data from the YouthBuild incentive experiment on the effect of incentive structure on differential response rates between the research groups.

YouthBuild sample members in the two-tiered incentive group were 38 percent more likely to respond to the survey within four weeks than those assigned to receive a standard incentive. As a result, sample members in the two-tiered incentive group were less likely to be subject to more labor-intensive and costly data collection efforts, such as contacts from telephone interviewers, extensive in-house locating, or ultimately field locating. Results from the YouthBuild incentive experiment indicate that final data collection cost estimates were approximately 17 percent lower with two-tiered incentives than with standard incentives, despite the fact that the incentives offered under the two-tiered model were larger than those offered under the standard model. As with the SET incentive experiment, we cannot disentangle which aspect of the two-tiered incentive structure (incentive value or incentive structure) is responsible for the reported effects of the incentive.

The two-tiered incentive structure for the first and second follow-up surveys was approved and has been implemented across all sites to date. We propose changing this structure and amount for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites due to significantly lower survey response rates, which put the analysis at risk of producing biased estimates of program impacts and underrepresenting participants in key analytic groups. We propose to offer sample members from these sites $50 for completing each follow-up survey, irrespective of whether they complete the survey within the four-week “early bird” period. The justification for this change is included in Attachment P (“Request to change burden and incentive structure-amount”).


Table A.1 below presents findings from the incentive experiments described above.


Table A.1 Incentive type and response rates obtained in similar studies with incentive experiments

| Study | Instrument | Duration (minutes) | Incentive type and response rate |
| --- | --- | --- | --- |
| Self-Employment Training Demonstration, incentive experiment sample (OMB control #1205-0505) | 18-month follow-up | 20 | Two-tiered incentive ($50 first four weeks, $25 after four weeks): 73 percent overall; 79 percent treatment group; 67 percent control group. Standard incentive ($25): 64 percent overall; 67 percent treatment group; 61 percent control group. No incentive: 37 percent overall; 55 percent treatment group; 19 percent control group |
| YouthBuild, incentive experiment sample (OMB control #1205-0503) | 12-month follow-up | 60 | Two-tiered incentive ($40 first four weeks, $25 after four weeks): 72 percent overall. Standard incentive ($25): 68 percent overall |

Note: Response rates separated by research group are not available for the YouthBuild incentive experiment.



Response rates for similar studies:

Table A.2 presents the type of data collection, incentive offered, and response rates obtained for similar studies cited in this section. Table A.2 includes information on the SET and YouthBuild studies. The information on these studies in Table A.1, discussed above, relates to results from the incentive experiments, which were conducted on early cohorts of sample released for data collection. Based on the results of these experiments, the SET and YouthBuild studies both implemented two-tiered incentives study-wide. Table A.2 presents results for the full data collection, before and after the conclusion of the incentive experiments.

Table A.2 Incentives and response rates obtained in similar studies

| Study | Instrument | Duration (minutes) | Incentive amount | Response rate |
| --- | --- | --- | --- | --- |
| Self-Employment Training Demonstration, full sample (OMB control #1205-0505) | 18-month follow-up | 20 | $50 first four weeks; $25 after four weeks | 80 percent overall; 83 percent treatment; 78 percent control |
| YouthBuild, full sample (OMB control #1205-0503) | 12-month follow-up | 60 | $40 first four weeks; $25 after four weeks | 81 percent overall; 82 percent treatment; 79 percent control |

Note: Treatment and control groups in this table refer to the overall evaluation (that is, the original conditions to which sample members were assigned upon enrollment), not the incentive experiment. The SET and YouthBuild figures cover the full survey sample, including sample released before and after the conclusion of the incentive experiments described in Table A.1.

A10. Privacy of Respondents

Information collected will be kept private to the extent permitted by law. As part of the consent process (Attachment A), respondents were informed of all planned uses of data, that their participation is voluntary, and that their information will be kept private to the extent permitted by law. Due to the sensitive nature of this research (see A11 for more information), the evaluation obtained a Certificate of Confidentiality. The Certificate of Confidentiality helps assure participants that their information will be kept private to the fullest extent permitted by law.

As specified in the contract, Mathematica and Abt will protect respondent privacy to the extent permitted by law and will comply with all Federal and departmental regulations for private information. Mathematica has developed a Data Safety and Monitoring Plan that assesses all protections of respondents’ personally identifiable information (PII). Mathematica and Abt will ensure that all of their employees, subcontractors (at all tiers), and employees of each subcontractor who perform work under this contract or subcontract are trained on data privacy issues and comply with the above requirements. All study staff with access to PII will receive study-specific training on (1) limitations on disclosure; (2) safeguarding the physical work environment; and (3) storing, transmitting, and destroying data securely. These procedures will be documented in training manuals. Refresher training will occur annually.

As specified in the evaluator’s contract, Mathematica and Abt will use Federal Information Processing Standard compliant encryption (Security Requirements for Cryptographic Modules, as amended) to protect all instances of sensitive information during storage and transmission. Mathematica and Abt will securely generate and manage encryption keys to prevent unauthorized decryption of information, in accordance with the Federal Information Processing Standard. Mathematica and Abt will ensure that they incorporate this standard into their property management/control system, and establish a procedure to account for all laptop computers, desktop computers, and other mobile devices and portable media that store or process sensitive information. Any data stored electronically will be secured in accordance with the most current National Institute of Standards and Technology requirements and other applicable Federal and departmental regulations.

Information will not be maintained in a paper or electronic system from which it is actually or directly retrieved by an individual’s personal identifier.

A11. Sensitive Questions

Some sensitive questions are necessary in an evaluation of programs designed to affect employment. Before starting the baseline and follow-up surveys and the in-depth interviews, all respondents are and will be informed that their identities will be kept private and that they do not have to answer any question that makes them uncomfortable. Although such questions may be sensitive for many respondents, they have been successfully asked of similar respondents in other data collection efforts, such as in the first follow-up survey of the Evaluation of Employment Coaching for TANF and Related Populations (OMB control number 0970-0506), Parents and Children Together (OMB control number 0970-0403) and the Workforce Investment Act Gold Standard Evaluation (OMB control number 1205-0504).

The sensitive questions in the second follow-up survey relevant for this ICR include:

  • Wage rates and earnings. It is necessary to ask about earnings because increasing participants’ earnings is a key goal of coaching interventions. The second follow-up survey asks about each job worked since random assignment, the wage rate, and the number of hours worked per week.

  • Challenges to employment. It is important to ask about challenges to employment both at baseline and at follow-up. The reported challenges at baseline can be used to define subgroups for whom the program may be particularly effective or ineffective. It is important to ask about challenges to employment in the follow-up survey because the coaching intervention may have addressed these challenges. Challenges measured through the surveys include problems with transportation, needing to take care of a family member, lack of clothes or tools, not having the right education or skills, and having a criminal record.

  • Convictions. Prior involvement in the criminal justice system makes it harder to find employment. Hence, it is important to ask about convictions that occurred before random assignment as baseline information (if participants did not already provide this information in the first follow-up) and convictions that occurred after random assignment or since the first follow-up survey as an outcome that may be affected by coaching.

  • Economic hardships. The follow-up survey asks about economic hardships, such as missing meals or needing to borrow money from friends. These outcomes reflect a lack of self-sufficiency and may be affected by coaching.

A12. Estimation of Information Collection Burden

Previously Approved Information Collections

Total Burden Previously Approved

As of the last change to 0970-0506, 6,188 annual burden hours were approved. This includes burden for data collection at six sites and covers the following information collections:


  • Baseline data collection (Attachment B)

  • First follow-up survey (Attachment C)

  • Semi-structured staff interviews (Attachment D)

  • Staff survey (Attachment E)

  • In-depth participant interviews (Attachment F)

  • Staff reports of program service receipt (Attachment G)

  • Video recordings of coaching sessions (Attachment M)

Table A.3 presents the 5,710 annual burden hours remaining at the time of this request from the previously approved information collections.


Table A.3 Burden remaining from previously approved information collections

| Instrument | Total number of respondents remaining | Annual number of respondents remaining | Number of responses per respondent | Average burden hours per response | Annual burden hours | Average hourly wage | Total annual cost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline data collection – study participants | 5,931 | 1,977 | 1 | 0.33 | 653 | $7.25 | $4,734.25 |
| Baseline data collection – staff | 60 | 20 | 99 | 0.33 | 653 | $33.38 | $21,797.14 |
| First follow-up survey | 4,544 | 1,515 | 1 | 0.75 | 1,136 | $7.25 | $8,236.00 |
| Semi-structured staff interviews | 132 | 44 | 1 | 1.5 | 66 | $33.38 | $2,203.08 |
| Staff survey | 96 | 32 | 1 | 0.75 | 24 | $33.38 | $801.12 |
| In-depth participant interviews | 48 | 16 | 1 | 2.5 | 40 | $7.25 | $290.00 |
| Staff reports of program service receipt | 60 | 20 | 5,200 | 0.03 | 3,120 | $33.38 | $104,145.60 |
| Video recordings of coaching sessions | 54 | 18 | 10 | 0.1 | 18 | $33.38 | $600.84 |
| Estimated annual burden total | | | | | 5,710 | | $142,808.03 |




Newly Requested Information Collections

The estimated reporting burden and cost for the second follow-up survey are presented in Table A.4. We expect to survey 6,000 study participants (1,000 participants per program). If the study includes more than 1,000 participants per program, the survey will be administered to a random sample of 1,000 study participants per program. We anticipate an 80 percent response rate, or 4,800 respondents.2 Annualizing 4,800 respondents over three years yields about 1,600 respondents per year. We expect each survey to last 45 minutes, for a total of about 1,200 annualized burden hours; the arithmetic is shown in the short calculation following this paragraph. We initially expected the survey to last 60 minutes; however, the average length of the interviews to date is 45 minutes. Therefore, we propose to change the burden estimate used in communication with study participants across all sites from 60 to 45 minutes. Additional information is provided in Attachment P (“Request to change burden and incentive structure-amount”).
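The burden figures in Table A.4 follow directly from these assumptions. The worked calculation below simply reproduces them from the figures stated above.

```python
# Reproduce the Table A.4 burden estimates from the stated assumptions.
sample_size = 6_000          # participants fielded (1,000 per program)
response_rate = 0.80         # anticipated response rate
years = 3                    # annualization period
minutes_per_survey = 45      # observed average survey length
hourly_wage = 7.25           # federal minimum wage

respondents = int(sample_size * response_rate)                       # 4,800
annual_respondents = respondents // years                            # 1,600
annual_burden_hours = annual_respondents * minutes_per_survey / 60   # 1,200
annual_cost = annual_burden_hours * hourly_wage                      # $8,700

print(respondents, annual_respondents, annual_burden_hours, annual_cost)
```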

Table A.4 Total new burden requested under this information collection

| Instrument | Total number of respondents | Annual number of respondents | Number of responses per respondent | Average burden hours per response | Annual burden hours | Average hourly wage | Total annual cost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Second follow-up survey | 4,800 | 1,600 | 1 | 0.75 | 1,200 | $7.25 | $8,700 |
| Estimated annual burden total | | | | | 1,200 | | $8,700 |


Total Annual Cost

The total annual cost related to the second follow-up survey is $8,700. The total estimated cost figures are computed from the total annual burden hours and an average hourly wage for program applicants. The average hourly wage of study participants is estimated to be $7.25, the federal minimum wage.



Total Burden Under 0970-0506

The total annual burden under 0970-0506, including hours remaining from previously approved information collections plus this new request, is 6,910 hours.

A13. Cost Burden to Respondents or Record Keepers

There are no additional costs to respondents or record keepers.

A14. Estimate of Cost to the Federal Government

The total cost for the second follow-up survey data collection activity under this current request will be $5,084,317. Annual costs to the Federal government will be $1,694,772 for the proposed data collection. These costs are inclusive of survey administration, analysis, and reporting.

A15. Change in Burden

This submission involves a new data collection request under OMB #0970-0506 and updates burden to reflect data collection to date for previously approved instruments.

A16. Plan and Time Schedule for Information Collection, Tabulation, and Publication

Plans for Tabulation

The impact analysis will estimate the effectiveness of each coaching intervention in the evaluation. The goal of the impact analysis is to compare observed outcomes for study participants who were offered the coaching intervention with outcomes for members of a control group who were not offered coaching. We will use the experience of the control group as a measure of what would have happened to the treatment group participants in the absence of the intervention. Random assignment makes it likely that the two groups of study participants do not initially differ in any systematic way on any characteristic. Any observed differences in outcomes between the treatment and control group members can therefore be attributed to the intervention.

We will use the baseline data collected under 0970-0506 to describe the study participants in each coaching intervention. We will use t-tests to assess whether random assignment successfully generated treatment and control groups with similar baseline characteristics and to confirm that survey respondents in the two groups are similar.
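A minimal sketch of such a baseline-equivalence check appears below, using two-sample t-tests from SciPy; the data frame and column names are hypothetical, not the evaluation’s actual analysis file.

```python
import pandas as pd
from scipy import stats

def baseline_equivalence(df, characteristics, group_col="treatment"):
    """Compare each baseline characteristic across research groups
    (group_col == 1 for treatment, 0 for control) with a t-test."""
    results = {}
    for col in characteristics:
        treat = df.loc[df[group_col] == 1, col].dropna()
        control = df.loc[df[group_col] == 0, col].dropna()
        t_stat, p_value = stats.ttest_ind(treat, control)
        results[col] = {"t": t_stat, "p": p_value}
    return pd.DataFrame(results).T  # one row per characteristic

# Usage (hypothetical analysis file and column names):
# balance = baseline_equivalence(baseline_df, ["age", "baseline_earnings"])
```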

Differences in means or proportions of follow-up outcomes between the treatment and control group will provide unbiased estimates of the impacts of the intervention. More precise estimates will be obtained using regression models to control for random differences in the baseline characteristics of treatment and control group members. In their simplest form, these models can be expressed by the following equation: $y_i = \alpha + \beta X_i + \delta T_i + \varepsilon_i$, where $y_i$ is an outcome for person $i$ (such as earnings); $\alpha$ is a constant; $X_i$ is a vector of baseline characteristics (such as gender, age, race/ethnicity); $\beta$ is a vector representing the relationship between each baseline characteristic and the outcome; $T_i$ is an indicator for whether person $i$ received treatment; and $\varepsilon_i$ is an error term. The coefficient $\delta$ represents the estimated impact of the intervention. We will estimate these models separately for each coaching intervention.
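As an illustration, the following sketch estimates this model with ordinary least squares using the statsmodels formula interface. The data are synthetic and the variable names hypothetical; the sketch shows only that the coefficient on the treatment indicator recovers the impact estimate $\delta$.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic analysis file: one row per survey respondent (hypothetical).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # T_i: random assignment indicator
    "age": rng.integers(18, 57, n),       # baseline characteristics X_i
    "female": rng.integers(0, 2, n),
})
# Build an outcome with a true treatment effect of 50 (e.g., dollars).
df["earnings"] = (1_000 + 50 * df["treatment"] + 5 * df["age"]
                  + rng.normal(0, 200, n))

# Regress the outcome on treatment and baseline covariates; the
# coefficient on `treatment` is the estimated impact (delta).
impact = smf.ols("earnings ~ treatment + age + female", data=df).fit()
print(impact.params["treatment"], impact.bse["treatment"])
```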

If the sample is large enough, we will conduct a subgroup analysis to examine who benefits most from the intervention. We will estimate subgroup effects using the following equation: $y_i = \alpha + \beta X_i + \delta T_i + \gamma S_i + \lambda (T_i \times S_i) + \varepsilon_i$, where $S_i$ is an indicator for whether person $i$ is part of a subgroup; $\gamma$ represents the relationship between subgroup status and the outcome; and $\lambda$ represents the additional effect of treatment for those in the subgroup. We will consider subgroups that are appropriate for the intervention’s target population, such as those defined by work readiness, employment challenges, or TANF history.
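Continuing the sketch above (and reusing its df, rng, and n), the subgroup model adds an indicator and its interaction with treatment; the interaction coefficient corresponds to $\lambda$. The subgroup flag is hypothetical.

```python
# Add a hypothetical subgroup indicator S_i (e.g., prior TANF receipt).
df["tanf_history"] = rng.integers(0, 2, n)

# `treatment * tanf_history` expands to both main effects plus their
# interaction; the interaction coefficient estimates lambda, the
# additional effect of treatment for subgroup members.
subgroup = smf.ols("earnings ~ treatment * tanf_history + age + female",
                   data=df).fit()
print(subgroup.params["treatment:tanf_history"])
```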

Time Schedule and Publication

Study enrollment and baseline data collection began in summer 2018 under a previous ICR approved by OMB (0970-0506). Over the duration of the evaluation, a series of reports will be generated, the timing for which is highlighted in Table A.5. Two reports will be produced on the impact findings, based on the first and second follow-up surveys, respectively. Reports on the implementation study include a detailed report describing each program and a report examining the implementation findings across all six programs (a cross-site implementation study report). In addition to these reports, this evaluation may provide opportunities for analyzing and disseminating additional information through special topics reports and research or issue briefs. We will also provide a public or restricted-use data file for others to replicate and extend our analyses.

Table A.5. Study schedule

| Activity | Timing* |
| --- | --- |
| Data collection: Second follow-up survey | Spring 2019 through Fall 2021 |
| Reporting: Second follow-up findings report | Fall 2022 |
| Reporting: Special topics reports | To be determined |

*All dates are dependent on the date of OMB approval of this information collection request.

A17. Reasons Not to Display OMB Expiration Date

All instruments will display the expiration date for OMB approval.

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.


References

Annie E. Casey Foundation. “Financial Coaching: A New Approach for Asset Building?” Baltimore, MD: Annie E. Casey Foundation, 2007. Retrieved from www.aecf.org.

Barnow, B.S., and D. Greenberg. “Do Estimated Impacts on Earnings Depend on the Source of the Data Used to Measure Them? Evidence from Previous Social Experiments.” Evaluation Review, vol. 39, no. 2, April 2015.

Baumgartner, R., and P. Rathbun. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the Annual Conference of the American Association of Public Opinion Research, Norfolk, VA, 1997.

Blair, C., and C. Raver. “Improving Young Adults’ Odds of Successfully Navigating Work and Parenting: Implications of the Science of Self-Regulation for Dual-Generation Programs.” Draft report submitted to Jack Shonkoff, Center on the Developing Child, Harvard University, January 2015.

Bond, Gary R., S. J. Kim, D. R. Becker, S. J. Swanson, R. E. Drake, I. M. Krzos, V.V. Fraser, S. O'Neill, and R. L. Frounfelker. "A Controlled Trial of Supported Employment for People with Severe Mental Illness and Justice Involvement." Psychiatric Services, vol. 66, no. 10, 2015.

Davis, Lori L., A.C. Leon, R. Toscano, C.E. Drebing, L.C. Ward, P.E. Parker, T.M. Kashner, and R.E. Drake. “A Randomized Controlled Trial of Supported Employment Among Veterans with Posttraumatic Stress Disorder.” Psychiatric Services, vol. 63, no. 5, May 2012, pp. 464–470.

Groves, R.M., E. Singer, and A.D. Corning. “A Leverage-Saliency Theory of Survey Participation: Description and Illustration.” Public Opinion Quarterly, vol. 64, 2000, pp. 299–308.

Hamilton, Gayle. “Improving Employment and Earnings for TANF Recipients.” Washington, DC: The Urban Institute, Office of Planning, Research, and Evaluation, March 2012.

Lambert, E.Y., and W.W. Wiebel (eds.). The Collection and Interpretation of Data from Hidden Populations. Washington, DC: National Institute on Drug Abuse, 1990.

Martinez-Ebers, V. “Using Monetary Incentives with Hard-to-Reach Populations in Panel Surveys.” International Journal of Public Opinion Research, vol. 9, 1997, pp. 77–86.

Mullainathan, S., and E. Shafir. Scarcity: Why Having Too Little Means So Much. New York: Henry Holt & Company, 2013.

National Research Council. Studies of Welfare Populations: Data Collection and Research Issues. Washington, DC: The National Academies Press, 2001.

Office of Information and Regulatory Affairs. “Questions and Answers When Designing Surveys for Information Collections.” Office of Management and Budget, October 2016.

Pavetti, LaDonna. “Using Executive Function and Related Principles to Improve the Design and Delivery of Assistance Programs for Disadvantaged Families.” Washington, DC: Center on Budget and Policy Priorities, May 2014.

Ruiz de Luzuriaga, Nicki. Coaching for Economic Mobility. Boston, MA: Crittenton Women’s Union, 2015.

Shettle, C., and G. Mooney. “Monetary Incentives in Government Surveys.” Journal of Official Statistics, vol. 15, 1999, pp. 231–250.

Singer, E., and R. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues, edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro, pp. 105–128. Washington, DC: National Academy Press, 2002.

Singer, E., J. Van Hoewyk, N. Gebler, T. Raghunathan, and K. McGonagle. “The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys.” Journal of Official Statistics, vol. 15, no. 2, 1999, pp. 217–230.

Singer, Eleanor, and C. Ye. “The Use and Effects of Incentives in Surveys.” Annals of the American Academy of Political and Social Science, vol. 645, no. 1, 2013, pp. 112–141.

Theodos, Brett, Margaret Simms, Mark Treskon, Christina Stacy, Rachel Brash, Dina Emam, Rebecca Daniels, and Juan Collazos. “An Evaluation of the Impacts and Implementation Approaches of Financial Coaching Programs.” Washington, DC: Urban Institute, October 2015. Available at www.urban.org/sites/default/files/alfresco/publication-pdfs/2000448-An-Evaluation-of-the-Impacts-and-Implementation-Approaches-of-Financial-Coaching-Programs.pdf.

What Works Clearinghouse. “Assessing Attrition Bias.” Available at: https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_attrition_v2.1.pdf. 2013.

Wittenburg, D., D. Mann, and A. Thompkins. "The Disability System and Programs to Promote Employment for People with Disabilities." IZA Journal of Labor Policy, vol. 2, no. 4, 2013. doi:10.1186/2193-9004-2-4.


1 We decided against an incentive approach that begins with a lower incentive offer and then graduates to a higher offer for resistant cases, because we wanted to avoid training sample members to hold out for higher incentive offers in the second follow-up.

2 After achieving the anticipated response rate, we will cease active pursuit of additional responses through locating or outgoing calls. We will allow additional interested sample members to respond by keeping the system open to accept incoming online surveys or phone calls.


