
Evaluation of Employment Coaching for TANF and Related Populations



OMB Information Collection Request

0970-0506




Supporting Statement

Part A

Revised July 2018

Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:

Hilary Forster

Victoria Kabak


A1. Necessity for the Data Collection

The Office of Planning, Research and Evaluation (OPRE) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for data collection activities conducted for the Evaluation of Employment Coaching for TANF and Related Populations. The objective of this evaluation is to provide information on coaching interventions implemented by Temporary Assistance for Needy Families (TANF) agencies and other employment programs. The evaluation will describe six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve their overall well-being. The evaluation will include both an experimental impact study and an implementation study.

This information collection request (ICR) covers data collection activities for both an impact and an implementation study. Data collection activities for the impact study include: (1) baseline data collection and (2) two follow-up surveys (the first of which is covered in this ICR). Data collection activities for the implementation study include: (1) semi-structured staff interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions. A subsequent ICR will request approval for the second follow-up survey of the impact study.

Study Background

Traditionally, TANF agencies and other employment programs have helped participants build job search skills, prescribed further education and training, and addressed barriers to employment, such as those caused by mental health problems or lack of transportation and child care. Despite a variety of strategies implemented over several decades, the assistance provided by these programs has been insufficient to enable many participants to achieve self-sufficiency (Hamilton 2012). In response, some researchers have suggested that employment programs seeking to help low-income populations find and keep jobs take an alternative approach in which traditional case management is replaced with or supplemented by employment coaching strategies. Long recognized as an effective approach to helping people meet career and personal goals, coaching has drawn increasing interest as a way to help low-income people gain and maintain employment and realize career and family goals (Annie E. Casey Foundation 2007).

Coaching strategies are typically informed by behavioral science and focus on the role of self-regulation skills in finding and keeping a job. Self-regulation skills allow people to intentionally control thoughts, emotions, and behavior (Blair and Raver 2012). They include executive function (the ability to process, filter, and act upon information), attention, metacognition, emotion understanding and regulation, motivation, grit, and self-efficacy. Recent research suggests that poverty can hinder the development and use of self-regulation skills (Mullainathan and Shafir 2013). Research has also shown that coaching is a promising way to help low-income or at-risk people. For example, an evaluation of two financial coaching programs for low- and moderate-income people found that the programs reduced debt and financial stress and increased savings (Theodos et al. 2015). Similarly, coaching has been found to be effective in assisting people with disabilities to obtain employment. The Individual Placement and Support (IPS) model was designed to help clients with disabilities plan for, obtain, and keep jobs consistent with their goals, preferences, and abilities (Wittenburg et al. 2013). In experimental studies, IPS has improved employment outcomes across multiple settings and populations (Davis et al. 2012; Bond 2015). However, there is little evidence on the effectiveness of coaching for improving employment and self-sufficiency among TANF and other low-income populations.

Drawing on the history of coaching in other contexts, some employment programs for low-income people—administered by TANF, other public agencies, and nonprofit organizations—have begun to provide coaches as a means of improving employment and self-sufficiency (Pavetti 2014). These coaches work with participants to set individualized goals and provide support and feedback as they pursue their goals (Ruiz de Luzuriaga 2015; Pavetti 2014). The coaches may take into account self-regulation skills in three ways. First, they may teach self-regulation skills and encourage participants to practice them. This may occur by helping the participant set goals, determining with the participant the necessary steps to reach those goals, modeling self-regulation skills, and providing rewards or incentives. Second, they may help participants accommodate areas where their self-regulation skills are less developed. For example, staff may help participants choose jobs that align well with their stronger self-regulation skills or suggest participants use a cell phone app to remind them of appointments. Third, the coaches may reduce factors that hinder the use of self-regulation skills. They may do this by teaching stress-management techniques or reducing the paperwork and other burdens placed on the participant by the program itself.

To learn more about these practices, OPRE contracted with Mathematica Policy Research and Abt Associates to evaluate coaching interventions. The evaluation will include impact and implementation studies for the following six coaching interventions: MyGoals for Employment Success in Baltimore; MyGoals for Employment Success in Houston; Family Development and Self-Sufficiency program in Iowa; LIFT in New York City, Chicago, and Los Angeles; Work Success in Utah; and Goal4 It! in Jefferson County, Colorado. The impact study will address the effectiveness of each coaching intervention in improving employment, self-sufficiency, and self-regulation outcomes as well as other measures of well-being. The implementation study will aid in interpreting the impact study findings and generate evidence to support future replication of effective coaching interventions.

Legal or Administrative Requirements that Necessitate the Collection

There are no legal or administrative requirements that necessitate the data collection. The collection is being undertaken at the discretion of ACF.

A2. Purpose of Survey and Data Collection Procedures

Overview of Purpose and Approach

The information collected through the instruments included in this ICR will be used to learn about coaching interventions in employment programs serving TANF and other low-income populations. The data collection efforts will provide information on implementation of coaching interventions, the experiences of the program participants who are paired with a coach, and the interventions’ effectiveness at improving outcomes for program participants. They will also provide information on the reasons interventions may or may not be effective, the successes and challenges in implementing them, and potential solutions for addressing those challenges.

This information can be used by policymakers to inform funding and policy decisions and by practitioners to improve employment programs. If the information collection requested in this ICR does not take place, policymakers and providers of coaching programs will lack high-quality information on the effects of the interventions, as well as descriptive information that can help refine the operation of coaching interventions so they can better meet participants’ employment and self-sufficiency goals.

Research Questions

The questions this evaluation will answer are the following:

  1. Do the coaching interventions improve participants’ employment outcomes (such as employment, earnings, job quality, job retention, job satisfaction, and career advancement); self-sufficiency (income, public assistance receipt); and other measures of well-being?

  2. Do the coaching interventions improve measures of self-regulation? To what extent do impacts on self-regulation explain impacts on employment outcomes?

  3. Are the coaching interventions more effective for some groups of participants than others?

  4. How do the impacts of the coaching interventions change over time?

  5. How were the coaching interventions designed, how were they implemented, and what factors appear to have impeded or facilitated implementation of the program as designed?

  6. What were the participants’ experiences with coaching, what services did they receive, and what types of coaching and other services did those who did not participate in the coaching interventions receive?

  7. Which services or implementation features of the coaching interventions appear to be related to program impacts? Which components or services do participants and staff perceive to be helpful?

Study Design

The study will evaluate six coaching interventions: MyGoals for Employment Success in Baltimore; MyGoals for Employment Success in Houston; Family Development and Self-Sufficiency program in Iowa; LIFT in New York City, Chicago, and Los Angeles; Work Success in Utah; and Goal4 It! in Jefferson County, Colorado.



MyGoals for Employment Success in Baltimore and Houston

MyGoals is targeted to unemployed or underemployed adults between the ages of 18 and 56 who are receiving housing support from the housing authority. Its objective is to improve self-regulation skills and help participants find solutions to their problems in the short term while increasing their overall economic security and decreasing their reliance on public assistance in the long term. MyGoals is a three-year program. Coaches meet with participants every three to four weeks during the first two years and are encouraged to check in between sessions. They meet with participants less frequently in the third year.

Family Development and Self-Sufficiency Program

Iowa’s Department of Human Rights implements the Family Development and Self-Sufficiency (FaDSS) program through contracts with 17 local agencies across the state. This evaluation will include a subset of these local agencies. FaDSS is funded through the TANF block grant and serves only TANF participants. The objective of the program is to help families achieve emotional and economic independence. FaDSS is targeted to TANF recipients with barriers to self-sufficiency. The coaches meet with participants in their homes at least twice in each of the first three months and then monthly starting in the fourth month, with two additional contacts with the family each month. For the evaluation, FaDSS expects to enroll 1,000 people rather than 2,000.

LIFT – New York City, Chicago, and Los Angeles

LIFT is a national nonprofit that provides coaching and navigation services to clients in New York City, Chicago, Los Angeles, and Washington, DC. For the purposes of our evaluation, the New York, Chicago, and Los Angeles subsites will be aggregated and considered a single LIFT site. LIFT’s goal is to help clients find a path toward goal achievement and financial security by matching them with coaches. Clients set short-term and long-term goals, and the coach helps clients build an action plan to achieve those goals. The LIFT coaching approach is nondirective and allows clients to choose the goals and milestones they want to work on. LIFT clients are expected to meet with a coach on a regular basis for up to two years. During the first month of the program, clients typically have two or three in-person sessions with a coach. After the first month, clients meet with coaches monthly to discuss progress toward goals and obstacles that are impeding progress. These sessions typically last 60 to 90 minutes.

Work Success – Utah

Work Success is an employment coaching program administered by Utah’s Department of Workforce Services—an agency that oversees TANF, the Supplemental Nutrition Assistance Program, Workforce Innovation and Opportunity Act programs, and other workforce programs. The program is offered statewide in about 30 employment centers (American Job Centers), with one or two coaches per center. The program served about 1,350 clients in 2016, largely concentrated in the greater Salt Lake City area. The objective of the program is to improve employment outcomes by focusing on job placement. Each participant is assigned a coach, who works with them to set goals and review their progress toward those goals. Work Success coaches meet one-on-one with clients daily while they are in the program to discuss their individual goals, the steps they will take to achieve those goals, and any challenges they are facing. Coaching also happens in group settings, where the coach engages the group in soft skills training, identification of skills and strengths, and other group activities.

Goal4 It! – Jefferson County, Colorado

Goal4 It! is an evidence-informed, customer-centered framework for setting and achieving goals developed by Mathematica Policy Research. It was designed to be a replicable and sustainable coaching approach that can be used in TANF, workforce, or other social service settings. Using the Goal4 It! approach, trained coaches help clients set meaningful goals, break goals down into manageable steps, develop specific plans to achieve those steps, and regularly review goal progress and revise their goals and/or plans. Coaches and case managers meet with clients who are not working at least once per month and with clients who are working at least once every two months. They typically meet more often with clients who are in crisis or actively looking for a job. The first meeting usually lasts one hour; ongoing meetings last 30 or 45 minutes. Each coach and case manager serves about 45 clients.

The two main criteria for selecting the coaching interventions for the evaluation were that (1) an evaluation of the program would address ACF’s policy interests and inform the potential development of coaching interventions in the future, and (2) it would be feasible to conduct a rigorous impact evaluation of the coaching intervention. To meet the first criterion, the program in which the intervention is embedded needed to serve a low-income population and focus on employment, and the coaching intervention needed to be robust and well implemented. To meet the second criterion, random assignment had to be feasible, the potential number of study participants had to be large enough to detect the impacts expected from the intervention, and the program’s management and staff had to be supportive of an experimental evaluation.

The impact study will provide rigorous evidence on whether the coaching interventions are effective, for whom, and under what circumstances. The study will be experimental. Participants eligible for the coaching services will be asked to consent to participate in the study (Attachment A) and, if consent is given, will be randomly assigned to one of two groups: a treatment group offered coaching and a control group not offered coaching. Individuals who do not consent to participate in the study will not be eligible to receive coaching, will not be randomly assigned, and will not participate in the data collection efforts. The control group may receive other services within the program; for example, the control group may receive regular case management from staff who have not been trained in coaching. Both groups will remain eligible for other services offered in the community. With this design, the research groups are likely to have similar characteristics, so differences in outcomes too large to be attributable to chance can be attributed to the coaching intervention. We will collect information at baseline (before random assignment occurs) from study participants and staff. Follow-up surveys will be available via the web and telephone about 6 to 12 months after random assignment and then about 21 months after random assignment. This ICR seeks clearance for the baseline data collection and the first follow-up survey. The second follow-up survey will be covered by a future ICR.
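To make the experimental design concrete, the following is a minimal illustrative sketch (in Python) of random assignment and an unadjusted treatment-control comparison. All names and values are hypothetical; the evaluation itself will conduct random assignment through RAPTER and estimate impacts using regression adjustment.

```python
# Illustrative sketch only: random assignment of consenting participants and an
# unadjusted impact estimate (treatment mean minus control mean). Hypothetical data.
import random
from statistics import mean

def randomly_assign(participant_ids, seed=12345):
    """Randomly split consenting participants evenly into treatment and control."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("treatment" if i < half else "control") for i, pid in enumerate(ids)}

def unadjusted_impact(outcomes, assignments):
    """Mean outcome for the treatment group minus mean outcome for the control group."""
    treated = [outcomes[pid] for pid, arm in assignments.items() if arm == "treatment"]
    control = [outcomes[pid] for pid, arm in assignments.items() if arm == "control"]
    return mean(treated) - mean(control)

# Hypothetical quarterly earnings (in dollars) for six consenting participants.
earnings = {"P0001": 4200, "P0002": 3900, "P0003": 5100,
            "P0004": 4700, "P0005": 3600, "P0006": 4000}
assignments = randomly_assign(earnings.keys())
print(unadjusted_impact(earnings, assignments))
```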

The implementation study will describe the coaching interventions and how they operated, document changes in the implementation relative to plans, provide information on the contrast between the treatment and control groups, and detail challenges to implementing the interventions and solutions to addressing those challenges. The implementation study will include semi-structured interviews with program staff, a staff survey, in-depth interviews with participants who have been paired with coaches, staff reports of program service receipt, and video recordings of coaching sessions.

Universe of Data Collection Efforts



Impact study. This ICR includes two instruments associated with the following data collection efforts for the impact study:

  1. Baseline data collection (Attachment B). Data collected at baseline will provide information on all study participants. These data will be used for the following purposes: (1) to describe the characteristics of study participants and check that random assignment has created treatment and control groups with similar characteristics, (2) to define subgroups, (3) to provide control variables for regression models that will increase statistical precision, (4) to construct weights to adjust for survey nonresponse, (5) to support analysis of the mediating factors driving program impacts, and (6) to locate study participants for the follow-up surveys. A question-by-question justification for the items included in baseline data collection is presented in Attachment J. Program staff will read the consent form (Attachment A) to participants; if consent is given, they will then administer the baseline survey and enter the data into a web-based information system developed for the evaluation, the Random Assignment, Participant Tracking Enrollment, and Reporting system (RAPTER). The burden associated with baseline data collection is represented as two rows in the burden table, which correspond to the participants responding to the data collection and to the staff administering it. Some programs might ask that the evaluation allow participants to complete the baseline data collection on their own as part of the evaluation intake process. In this case, some participants may complete a self-administered baseline survey using the same web-based system. This would not be expected to affect participant burden and might reduce staff burden. The baseline survey includes alternate language tailored to self-administration. For example, text transitioning between sections of the survey may read, “Now I would like to ask you some questions about the people who live with you” for the staff-administered version and “The next questions are about people who live with you” for the self-administered version.

As part of a separate evaluation conducted by MDRC, the MyGoals programs began enrolling and randomly assigning sample members in February 2017, collecting baseline data using their own instruments. As a result, those activities will not be conducted under the Evaluation of Employment Coaching project. Instead, through our partnership with MDRC, we will build on the work already completed and avoid duplicating those activities.

  2. First follow-up survey (Attachment C). The follow-up surveys will primarily collect data on outcomes of both the treatment and control group members, including outcomes related to employment, self-sufficiency, self-regulation, and service receipt. The first follow-up survey will also collect data on some baseline characteristics, such as criminal history and place of birth, along with updated contact information. A question-by-question justification for the items included in the first follow-up survey is presented in Attachment K. The first follow-up survey will be available to participants via the web or telephone about 6 to 12 months after random assignment. A second follow-up survey will be available approximately 21 months after random assignment. A request for clearance for the second follow-up survey will be submitted under a separate ICR.


In addition, administrative data on outcomes will be collected for all study participants. Data from the National Directory of New Hires (NDNH), operated by the Office of Child Support Enforcement at HHS, includes quarterly earnings, unemployment insurance benefits, and start dates of new jobs. Administrative data will also be collected on TANF benefits and information on whether study participants are exempt from work requirements and participate in specific types of work activities. These data are already being collected and do not represent additional burden for respondents.

Implementation study. This ICR includes five instruments associated with the following data collection efforts for the implementation study:

  1. Semi-structured staff interviews (Attachment D). Semi-structured interviews with staff will provide the study with a nuanced, qualitative description of the coaching intervention’s design and implementation. The sample population for this effort includes coaches; other direct service staff (for example, case managers, workshop instructors, and job developers); supervisors; and program administrators. Three different interview guides, all contained in Attachment D, were developed that pertain to different types of staff (frontline workers, supervisors, and managers/program administrators). The interviews will be conducted in person during site visits, either individually or in small groups. A site visit to each program studied will occur about six months after study enrollment begins in that program.

  2. Staff survey (Attachment E). The staff survey will collect information on staff members’ professional backgrounds, training, coaching practices, and attitudes. A question-by-question justification for the items included in the staff survey is presented in Attachment L. Compared with the semi-structured interviews, this survey will enable the collection of information (1) in a more structured format, (2) on topics that staff may be uncomfortable talking about in a group setting (such as the work of management and other staff), and (3) from a broader set of staff. The staff survey will be administered to managers, supervisors, coaches, and case managers. It will be administered via the web approximately six months after study enrollment begins in each program.

  3. In-depth participant interviews (Attachment F). In-person interviews with participants who have received coaching services will provide detailed, contextual information about the participants’ experiences with coaching. They will provide insights into the participants’ lives, details of their goals, their perceptions of factors that might impede them from reaching their goals, their relationship with their coaches, and their perceptions of how the coach and the program have helped them progress toward their goals. For participants who have become disengaged from the program, the interviews will provide information on why the participants became disengaged. These interviews will inform the understanding of whether the coaching intervention was implemented as planned and suggest possible refinements. In addition, these interviews will provide the “stories” that will make the findings from the implementation and impact studies more meaningful.

  4. Staff reports of program service receipt (Attachment G). Program staff will record information about the treatment group members’ participation in coaching. They will also record information on case management and other program services that both the treatment group and the control group members receive, if the design allows control group members to receive these other program services. This information will be used to describe the coaching and employment services the treatment group receives through the program. Where relevant, it will also be used to compare service receipt for the treatment and control group members. This information will also be used to monitor the extent to which the treatment group is participating in coaching. The staff will record the information in RAPTER or through their own management information system.

  5. Video recordings of coaching sessions. Video recordings will capture the interaction between the coaches and participants. These recordings will provide information on what happens during a coaching session, whether the coaching is consistent with the coaches’ training, and the reactions of the participants. A subset of coaching sessions at each program will be recorded. This subset will be chosen to include all sessions occurring during a specific period of time with each coach and will capture multiple participants for each coach. We anticipate recording up to 90 sessions per program (approximately 540 sessions across all six programs). These recordings will occur after the site visit to conduct the semi-structured interviews, which is also when staff will be trained on setting up the recordings.

A3. Improved Information Technology to Reduce Burden

This evaluation will use multiple applications of information technology to reduce burden. As described below, information technology will be used to collect baseline data, conduct the first follow-up and staff surveys, collect staff reports on program service receipt, and video-record coaching sessions. The semi-structured staff interviews and in-depth participant interviews will not involve information technology.

Baseline data collection. RAPTER is a secure, web-based system that program staff will use to administer the consent process to participants, collect baseline data, and conduct random assignment. The use of check boxes, drop-down menus, and predefined response categories will minimize data entry burden. Participants completing the baseline survey on their own will also use this web-based system.

First follow-up survey. The follow-up survey will be hosted on the Internet via a live secure web-link. To reduce burden, the surveys will employ the following: (1) secure log-ins and passwords so that respondents can save and complete the survey in multiple sessions, (2) drop-down response categories so that respondents can quickly select from a list, (3) dynamic questions and automated skip patterns so that respondents only see those questions that apply to them (including those based on answers provided previously in the survey), and (4) logical rules for responses so that respondents’ answers are restricted to those intended by the question.

Respondents also have the option to complete the follow-up surveys using computer-assisted telephone interviewing (CATI). CATI reduces respondent burden, relative to interviewing via telephone without a computer, by automating skip logic and question adaptations and by eliminating delays caused when interviewers must determine the next question to ask. CATI is programmed to accept only valid responses based on preprogrammed checks for logical consistency across answers.
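To illustrate the kind of automated skip-pattern and response-validation logic described above, the following is a minimal sketch in Python. The question names and rules are hypothetical and are not drawn from the actual instruments.

```python
# Illustrative sketch of skip-pattern and response-validation logic; question
# names and validation rules are hypothetical.
def ask_employment_questions(answers):
    """Apply a simple skip pattern: wage items are shown only to respondents who
    report currently working, and reported hours are restricted to a valid range."""
    if answers.get("currently_working") == "yes":
        hours = answers.get("hours_per_week")
        # Logical rule: restrict responses to the range intended by the question.
        if hours is None or not (0 < hours <= 168):
            raise ValueError("Hours per week must be between 1 and 168.")
        return ["hourly_wage", "job_start_date"]   # follow-up items to display
    # Respondents who are not working skip directly to the job-search section.
    return ["looked_for_work_past_4_weeks"]

# Example: a working respondent sees the wage items; a non-working one skips them.
print(ask_employment_questions({"currently_working": "yes", "hours_per_week": 32}))
print(ask_employment_questions({"currently_working": "no"}))
```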

Staff survey. As with the follow-up survey, the staff survey will be hosted on the Internet via a live secure web-link and will employ: (1) secure log-ins and passwords, (2) drop-down response categories, (3) dynamic questions and automated skip patterns, and (4) logical rules for responses.

Staff reports of program service receipt. Staff will use RAPTER to enter data on program service receipt. The system will employ drop-down menus and response categories to minimize burden and accept only valid responses.

Video recordings of coaching sessions. Program staff will be provided with tablets and will be trained on how to use them to record the coaching sessions. Relative to in-person observations, video recording by tablet is a less obtrusive method for understanding the interaction between the coaches and participants.

A4. Efforts to Identify Duplication

Information that is already available from alternative data sources will not be collected again for this evaluation. For example, if a coaching program has an existing management information system that collects information needed for this evaluation and the data are exportable and of sufficient quality, we will accept data from that system and ask program staff to enter into RAPTER only data that they are not already collecting.

We will be collecting information related to employment and earnings both through administrative records and directly from study participants. This information is not duplicative because the two sources cover different types of employment. Information on quarterly earnings from jobs covered by unemployment insurance will be obtained from NDNH administrative records. The baseline data collection and follow-up surveys will ask for earnings across all jobs, including those not covered by unemployment insurance. A number of experimental employment evaluations have found large differences in survey- and administrative-based earnings impacts (Barnow and Greenberg 2015). Therefore, collecting information from both sources is necessary for a full understanding of impacts on earnings.

A5. Involvement of Small Organizations

The data collection does not involve small businesses or other small entities.

A6. Consequences of Less Frequent Data Collection

The baseline data collection, the semi-structured staff interviews, the staff survey, and the in-depth participant interviews are one-time data collections.

Follow-up survey. About 21 months after random assignment, a second follow-up survey will be made available to sample members. This second follow-up survey will collect a similar set of outcome data as the first. This will allow an examination of whether the impacts of the program changed over time and whether changes in self-regulation skills were associated with changes in employment and self-sufficiency outcomes. A request for clearance for the second follow-up survey will be submitted under a separate ICR.

Staff reports of program service receipt. Staff members will need to enter data into RAPTER on participants’ service receipt throughout the study period. To avoid recall error, they will be asked to enter the information into RAPTER immediately after the service is provided. These repeated entries will provide complete information on the participants’ service receipt.

Video recordings of coaching sessions. Some coaches and participants will be video-recorded multiple times. Multiple recordings of each coach will provide more information on how his or her coaching reflects training received over time.

A7. Special Circumstances

There are no special circumstances for the proposed data collection efforts.

A8. Federal Register Notice and Consultation

Federal Register Notice and Comments

In accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and Office of Management and Budget (OMB) regulations at 5 CFR Part 1320 (60 FR 44978, August 29, 1995), ACF published a notice in the Federal Register announcing the agency’s intention to request an OMB review of this information collection activity. This notice was published on June 26, 2017, Volume 82, Number 121, pages 28856-28857, and provided a 60-day period for public comment. Attachment H provides a copy of this notice. During the notice and comment period, no comments were received.

To provide the opportunity for public comment on the addition of three programs to the evaluation, ACF published a Federal Register Notice allowing for 30 days of comment on July 11, 2018.

Consultation with Experts Outside of the Study

The experts listed below, from OPRE, Mathematica Policy Research, Abt Associates, and the University of Chicago, were consulted in developing the design, data collection plan, and materials for which clearance is requested.

OPRE

Hilary Forster, Senior Social Science Research Analyst

Victoria Kabak, Social Science Research Analyst

Gabrielle Newell, Contract Social Science Research Analyst


Mathematica Policy Research

Dr. Sheena McConnell, Project Director

Dr. Quinn Moore, Deputy Project Director

Dr. Michelle Derr, Principal Investigator

Shawn Marsh, Survey Director


Abt Associates

Dr. Alan Werner, Principal Investigator

Dr. Bethany Borland, Senior Analyst


University of Chicago

Dr. James Heckman, Measurement Expert

A9. Incentives for Respondents

The Office of Management and Budget’s Office of Information and Regulatory Affairs has approved incentives for participants in past ACF studies for a mix of reasons, including to increase survey response rates, reduce differential nonresponse between the study research groups, reduce survey costs, and increase ongoing participation of respondents across multiple years of follow-up. In this study, we propose to offer respondents incentives for only two of the data collection activities discussed above: the first follow-up survey and the in-depth participant interviews. We also plan to offer an incentive for completing the second follow-up survey, which will be described in a subsequent ICR. Specifically, we propose to offer respondents who participate in the in-depth interviews, which are estimated to take 2.5 hours on average, a $50 gift card. We propose to offer respondents who complete the 60-minute follow-up survey 6 to 12 months after baseline an incentive of $35 if they complete the survey within the first four weeks and $25 if they complete it later. The justification for these incentives is provided below.


Background:


Estimates of program impacts may be biased if respondents differ substantially from non-respondents and those differences are correlated with assignment to the evaluation treatment or control groups. The risk of biased impact estimates increases with lower overall survey response rates or larger differences in survey response rates between the research groups (What Works Clearinghouse 2013). Thus, if low overall response rates or large differential response rates between the research groups are observed, differences between groups on key outcomes might be the result of differences in baseline characteristics among survey respondents and cannot be attributed solely to the effect of the coaching intervention (What Works Clearinghouse 2013).

Concerns about the potential for low overall response rates are particularly relevant to this study because the coaching interventions are designed for unemployed low-income people. A number of factors could complicate tracking such participants over time. These factors include:

  • Unstable housing.

  • Less use of mortgages, leases, public utility accounts, cell phone contracts, credit reports, memberships in professional associations, licenses for specialized jobs, activity on social media, and appearances in publications such as newspapers or blogs.

  • Use of an alias to get utility accounts because of poor credit and prior payment issues.

  • Use of pay-as-you-go cell phones. These phone numbers are generally not tracked in online databases. Pay-as-you-go cell phone users also switch numbers frequently, which makes contacting them across a follow-up period more difficult.

Differential response rates between the treatment and control groups could bias this study’s impact estimates. Participants assigned to the control group may be less motivated to participate than those assigned to the treatment group because they are not receiving the intervention. They may also feel that the surveys are not relevant to them.

Evidence supporting use of incentives:

Methodological research on incentives. Evidence from prior studies shows that incentives can decrease the differential response rate between the treatment and control groups and therefore reduce nonresponse bias in impact estimates (Singer and Kulka 2002; Singer et al. 1999; Singer and Ye 2013). For example, incentives are useful in compensating for lack of motivation to participate among control group members (Shettle and Mooney 1999; Groves et al. 2000). Incentives have also been found to induce participation among sample members for whom the topic is less salient, including members of the control group (Baumgartner and Rathbun 1997), a finding that also applies to hard-to-reach populations similar to the target population of the current study (Martinez-Ebers 1997). Other experimental research on incentives concludes that incentives significantly increase response rates, reduce the average number of contacts required to achieve completed surveys, and reduce overall survey data collection costs (Westra et al. 2015).

Research evidence from ACF studies. Evidence from an incentive experiment conducted as part of the Self-Employment Training (SET) Demonstration, approved by OMB (OMB control number 1205-0505), suggests that incentives are a successful strategy for improving response rates for low-income populations. This experiment assessed the effectiveness of three incentive approaches: (1) offering a standard incentive of $25; (2) offering a two-tiered incentive, with an incentive of $50 if respondents completed an 18-month follow-up survey within the first four weeks and $25 if respondents completed the survey after four weeks; or (3) no incentive.

Results from the SET incentive experiment suggest that incentives substantially reduce both overall nonresponse rates and differential response rates between the research groups. Among sample members offered an incentive, this experiment resulted in a 73 percent overall response rate for those in the two-tiered incentive group and a 64 percent response rate for those in the standard incentive group. The response rate for sample members who were not offered an incentive was 37 percent. The differential response rate between research groups for sample members offered an incentive was 12 percentage points for the two-tiered incentive group (79 percent for the treatment group versus 67 percent in the control group) and 6 percentage points for the standard incentive group (67 percent for the treatment group versus 61 percent in the control group). The differential response rate was substantially higher for the no incentive group at 36 percentage points (55 percent for the treatment group versus 19 percent in the control group).

Based on evidence from SET, we anticipate that without incentives, the survey response rate would be unacceptably low; it is likely to be less than 50 percent. Such response rates would put the study at severe risk of biased impact estimates.

Evidence supporting use of two-tiered incentives for the follow-up survey:

In addition to determining whether the study requires use of incentives, we must determine the structure that the incentives will take. We propose using a two-tiered incentive approach for the follow-up surveys.1 We would offer a $35 gift card to those who complete the survey, either online or by telephone, within the first four weeks after being first asked to complete the survey; respondents will receive a $25 gift card if they complete the survey after four weeks. A key aim of this “early bird” approach is reducing survey administration costs by encouraging low-cost online survey completion and reducing the need for costly location efforts. We propose using the two-tiered incentive model based on past experience from related studies, which observed lower survey costs due to reduced need for mail reminders, locating, and reminder calls. We anticipate that using an incentive will help us achieve our response rate target of 80 percent. Using a two-tiered incentive structure will facilitate shorter data collection times and contain data collection costs.

Research evidence from ACF studies. Two impact evaluations conducted incentive experiments that informed our proposed two-tiered incentive structure: SET and YouthBuild.

The results of the SET incentive experiment described above showed that relative to standard incentives, the two-tiered incentive led to somewhat higher overall response rates (73 versus 64 percent) but somewhat greater differential nonresponse rates between the research groups (12 versus 6 percentage points). Thus, findings related to response rate patterns do not strongly favor one incentive approach over the other.

However, the SET incentive experiment also concluded that two-tiered incentives led to shorter response times, lower average costs, and lower total fielding costs (including for the cost of the incentive payments). Specifically, the incentive experiment found that 98 percent of survey completes in the two-tiered incentive group came within four weeks of release, compared to 86 percent for the standard incentive group. Faster response time has implications for data quality because it ensures that the reference period for the one-year follow-up survey is as close to one year after study enrollment as possible. Faster response times also have important implications for data collection cost. In the SET incentive experiment, the average cost per complete was approximately 10 percent higher for the standard incentive group than for the two-tiered incentive group, despite the fact that the incentives offered under the two-tiered model were larger than those offered under the standard model.

Please note that the SET incentive experiment cannot disentangle which aspect of the two-tiered incentive structure—two tiers or higher overall incentive amount—led to higher overall response rates, faster response times and lower overall costs. Thus we do not know what the response and cost patterns would have been with a two-tiered structure that used a lower initial incentive amount. The proposed initial incentive amount for this study ($35 for response within the first four weeks) is lower than the one used in SET ($50 for response within the first four weeks). The final incentive amount is the same ($25 for response after four weeks).

The YouthBuild evaluation (OMB control number 1205-0503), sponsored by the Department of Labor, also incorporated an incentive experiment. This experiment assessed the effectiveness of two incentive approaches: (1) offering a standard incentive of $25; or (2) offering a two-tiered incentive, with an incentive of $40 if respondents completed a 12-month follow-up survey within the first four weeks and $25 if respondents completed the survey after four weeks.

Results from the YouthBuild incentive experiment are consistent with those of the SET incentive experiment in terms of effects on response rates, response times, and costs. The two-tiered incentive structure slightly increased the overall response rate: sample members in the two-tiered incentive group had an overall response rate of 72 percent, compared to 68 percent for the standard incentive group. We do not have data from the YouthBuild incentive experiment on the effect of incentive structure on differential response rates between the research groups.

YouthBuild sample members in the two-tiered incentive group were 38 percent more likely to respond to the survey within four weeks than those assigned to receive a standard incentive. As a result, sample members in the two-tiered incentive group were less likely to be subject to more labor-intensive and costly data collection efforts such as contacts from telephone interviewers, extensive in-house locating, or ultimately field locating. Results from the YouthBuild incentive experiment indicate that final data collection cost estimates were approximately 17 percent lower with two-tiered incentives than with standard incentives, despite the fact that the incentives offered under the two-tiered model were larger than those offered under the standard model. As with the SET incentive experiment, we cannot disentangle which aspect of the two-tiered incentive structure (the higher incentive amount or the tiered payment structure) is responsible for the reported effects.

If the two-tiered incentive structure is approved, we will collect paradata as part of the proposed study on the prevalence of survey response within 4 weeks (with receipt of the larger, initial incentive amount), the prevalence of response after 4 weeks (with receipt of the smaller incentive amount), the average time to survey response, and the average incentive payment amount. We will examine response rates and compare the characteristics of respondents and nonrespondents. We will conduct all analyses for the full sample and separately for the treatment and control groups. These data will help ACF and OMB understand how sample members responded to the two-tiered incentives and could help inform decisions on incentives for future studies.
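As an illustration of this planned paradata tabulation, the sketch below shows one way the response and incentive summaries could be computed. The record layout and field names are hypothetical and do not represent the evaluation's actual data systems.

```python
# Illustrative sketch of the planned paradata summaries (hypothetical field names):
# share responding within 4 weeks (larger incentive), share responding later
# (smaller incentive), and average incentive paid, overall and by research group.
def summarize_incentive_paradata(records):
    """records: list of dicts with 'group' ('treatment'/'control'), 'responded'
    (bool), and 'weeks_to_response' (a number for respondents, None otherwise)."""
    def rates(subset):
        n = len(subset)
        early = sum(1 for r in subset if r["responded"] and r["weeks_to_response"] <= 4)
        late = sum(1 for r in subset if r["responded"] and r["weeks_to_response"] > 4)
        paid = early * 35 + late * 25        # proposed $35/$25 two-tiered amounts
        return {"n": n,
                "early_response_rate": early / n,
                "late_response_rate": late / n,
                "overall_response_rate": (early + late) / n,
                "average_incentive": paid / max(early + late, 1)}
    out = {"full_sample": rates(records)}
    for g in ("treatment", "control"):
        out[g] = rates([r for r in records if r["group"] == g])
    return out

# Example with made-up records.
example = [
    {"group": "treatment", "responded": True, "weeks_to_response": 2.0},
    {"group": "control", "responded": True, "weeks_to_response": 6.0},
    {"group": "control", "responded": False, "weeks_to_response": None},
]
print(summarize_incentive_paradata(example))
```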


Table A.1 below presents findings from the incentive experiments described above.


Table A.1 Incentive type and response rates obtained in similar studies with incentive experiments

Self-Employment Training Demonstration, incentive experiment sample (OMB control #1205-0505)

Instrument: 18-month follow-up survey; duration: 20 minutes

Response rates:

Two-tiered incentive ($50 first four weeks, $25 after four weeks):

  • 73 percent overall

  • 79 percent treatment group

  • 67 percent control group

Standard incentive ($25):

  • 64 percent overall

  • 67 percent treatment group

  • 61 percent control group

No incentive:

  • 37 percent overall

  • 55 percent treatment group

  • 19 percent control group

YouthBuild, incentive experiment sample (OMB control #1205-0503)

Instrument: 12-month follow-up survey; duration: 60 minutes

Response rates:

Two-tiered incentive ($40 first four weeks, $15 after four weeks):

  • 72 percent overall

Standard incentive ($25):

  • 68 percent overall

Note: Response rates separated by research group are not available for the YouthBuild incentive experiment.



Incentive for the in-depth interview:

We propose giving respondents who participate in the in-depth interviews, which are estimated to take 2.5 hours on average, a $50 gift card. This incentive is modeled on that used in another ACF study, Parents and Children Together (PACT), in which respondents (low-income couples and fathers) received a $60 gift card for an in-depth interview (OMB control number 0970-0403). The PACT study observed overall response rates of 88 and 72 percent for its healthy marriage and responsible fatherhood programs, respectively. As with the current study, PACT targeted low-income populations; thus respondents faced demands on their time and constraints similar to those of the target population in this study. Incentives can make it easier for respondents to participate in the in-depth interviews by helping offset costs of transportation, child care, and cell phone data and minute plans. The in-depth interviews will take place in person during scheduled visits to the coaching programs. Because the timing of the in-depth interviews cannot vary, a two-tiered structure was not considered for this incentive.

Response rates for similar studies:

Table A.2 presents the type of data collection, incentive offered, and response rates obtained for the similar studies cited in this section. Table A.2 includes information on the SET and YouthBuild studies. The information on these studies in Table A.1, discussed above, relates to results from the incentive experiments, which were conducted on early cohorts of the sample released for data collection. Based on the results of these experiments, the SET and YouthBuild studies both implemented two-tiered incentives study-wide. Table A.2 presents results for the full data collection, before and after the conclusion of the incentive experiments.

Table A.2 Incentives and response rates obtained in similar studies

Self-Employment Training Demonstration, full sample (OMB control #1205-0505)

Instrument: 18-month follow-up survey; duration: 20 minutes

Incentive amount: $50 first four weeks, $25 after four weeks

Response rates:

  • 80 percent overall

  • 83 percent treatment group

  • 78 percent control group

YouthBuild, full sample (OMB control #1205-0503)

Instrument: 12-month follow-up survey; duration: 60 minutes

Incentive amount: $40 first four weeks, $25 after four weeks

Response rates:

  • 81 percent overall

  • 82 percent treatment group

  • 79 percent control group

Parents and Children Together (OMB control #0970-0403)

Instrument: In-depth interview of treatment group members; duration: 120 minutes

Incentive amount: $60

Response rates:

  • 88 percent overall, healthy marriage programs

  • 72 percent overall, responsible fatherhood programs

Note: Treatment and control groups in this table refer to the overall evaluation (that is, the original conditions to which sample members were assigned upon enrollment) and not the incentive experiment. The SET and YouthBuild figures reflect the full survey sample, including sample members released before and after the conclusion of the incentive experiments described in Table A.1.

A10. Privacy of Respondents

Information collected will be kept private to the extent permitted by law. As part of the consent process (Attachment A), respondents will be informed of all planned uses of data, that their participation is voluntary, and that their information will be kept private to the extent permitted by law. As described in Section A11, the evaluation team will request Social Security numbers to gather information on respondents’ employment outcomes from the NDNH. Respondents will still be eligible for the study and for program services if they choose not to provide their Social Security number.

Due to the sensitive nature of this research (see A11 for more information), the evaluation will obtain a Certificate of Confidentiality. The study team will apply for this Certificate and will provide it to OMB after it is received. The Certificate of Confidentiality helps assure participants that their information will be kept private to the fullest extent permitted by law.

As specified in the contract, Mathematica and Abt will protect respondent privacy to the extent permitted by law and will comply with all Federal and departmental regulations for private information. Mathematica has developed a Data Safety and Monitoring Plan that assesses all protections of respondents’ personally identifiable information (PII). Mathematica and Abt will ensure that all of their employees, subcontractors (at all tiers), and employees of each subcontractor who perform work under this contract/subcontract are trained on data privacy issues and comply with the above requirements. All study staff with access to PII will receive study-specific training on (1) limitations on disclosure; (2) safeguarding the physical work environment; and (3) storing, transmitting, and destroying data securely. These procedures will be documented in training manuals. Refresher training will occur annually.

As specified in the evaluator’s contract, Mathematica and Abt will use Federal Information Processing Standard compliant encryption (Security Requirements for Cryptographic Modules, as amended) to protect all instances of sensitive information during storage and transmission. Mathematica and Abt will securely generate and manage encryption keys to prevent unauthorized decryption of information, in accordance with the Federal Information Processing Standard. Mathematica and Abt will incorporate this standard into their property management/control systems and establish a procedure to account for all laptop computers, desktop computers, and other mobile devices and portable media that store or process sensitive information. Any data stored electronically will be secured in accordance with the most current National Institute of Standards and Technology requirements and other applicable Federal and departmental regulations. In addition, Mathematica will submit a plan for minimizing, to the extent possible, the inclusion of PII and other sensitive information on paper records, and for protecting any paper records, field notes, or other documents that contain PII or other sensitive information, ensuring secure storage and limits on access.

Information will not be maintained in a paper or electronic system from which it is actually or directly retrieved by an individual’s personal identifier.



We will work with the ACF and HHS Offices of the Chief Information Officer (OCIO) to ensure that the RAPTER system is covered by an Authority to Operate and a Privacy Impact Assessment (PIA). This will ensure that information handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; determine the risks of collecting and maintaining PII; assist in identifying protections and alternative processes for handling PII to mitigate potential privacy risks; and communicate all relevant privacy practices to the public. The PIA will be available online through HHS at https://www.hhs.gov/pia.

A11. Sensitive Questions

Some sensitive questions are necessary in an evaluation of programs designed to affect employment. Before starting the baseline and follow-up surveys and the in-depth interviews, all respondents will be informed that their identities will be kept private and that they do not have to answer any question that makes them uncomfortable. Although such questions may be sensitive for many respondents, they have been successfully asked of similar respondents in other data collection efforts, such as Parents and Children Together (OMB control number 0970-0403) and the Workforce Investment Act Gold Standard Evaluation (OMB control number 1205-0504).

The sensitive questions in the data collection instruments relevant for this ICR include:

  • Respondents’ Social Security numbers. Respondents’ Social Security numbers are necessary to collect administrative data on respondents from NDNH and TANF administrative databases. Respondents will be informed that the study may contact federal and state agencies for information about their employment and earnings and receipt of benefits. Social Security numbers will be used with an online locating database to locate study participants for the follow-up survey data collection. Social Security numbers, along with names and birthdates, will also be used to verify respondents’ identities. Social Security numbers will be collected at baseline and verified during the follow-up survey.

  • Wage rates and earnings. It is necessary to ask about earnings because increasing participants’ earnings is a key goal of coaching interventions. The follow-up survey asks about each job worked since random assignment, the wage rate, and the number of hours worked per week. This information will be collected on the first follow-up survey and discussed during the in-depth participant interviews.

  • Challenges to employment. It is important to ask about challenges to employment both at baseline and at follow-up. The reported challenges at baseline can be used to define subgroups for whom the program may be particularly effective or ineffective. It is important to ask about challenges to employment in the follow-up survey because the coaching intervention may have addressed these challenges. Challenges measured through the surveys include problems with transportation, needing to take care of a family member, lack of clothes or tools, not having the right education or skills, and having a criminal record. These challenges may also be discussed during the in-depth participant interviews.

  • Convictions. Prior involvement in the criminal justice system makes it harder to find employment. Hence, it is important to ask about convictions that occurred before random assignment as baseline information and convictions that occurred after random assignment as an outcome that may be affected by coaching. This information will be collected on the first follow-up survey. Criminal history may also be discussed during the in-depth participant interviews.

  • Economic hardships. The follow-up survey asks about economic hardships, such as missing meals or needing to borrow money from friends. These outcomes reflect a lack of self-sufficiency and may be affected by coaching. Economic hardships may also be discussed as part of the in-depth participant interviews.

A12. Estimation of Information Collection Burden

Newly Requested Information Collections

The estimated reporting burden and cost for the data collection instruments and efforts included in this ICR are presented in Table A.3.

Details of the estimates are as follows:

  • Baseline data collection. Baseline data collection involves both study participants and program staff. The burden estimates below for both participants and staff include the time spent administering the consent process.

  • We expect that about 6,000 study participants (1,000 in each of the six programs) will complete baseline data collection. Annualizing 6,000 participants over three years yields 2,000 per year. We expect each survey to last 0.33 hours, for a total of 660 burden hours per year for study participants.

  • We assume that 60 program staff across all six programs (approximately 10 per program) will perform the baseline data collection. Annualizing 60 staff over three years yields 20 staff members per year. Each staff member will administer 100 surveys, and each survey is expected to last 0.33 hours, for a total of 660 burden hours per year for staff.

  • First follow-up survey. We expect to survey 6,000 study participants (1,000 participants per program). If the study includes more than 1,000 participants per program, then the survey will be administered to a random sample of 1,000 study participants. We anticipate an 80 percent response rate or 4,800 respondents.2 Annualizing 4,800 respondents over three years yields 1,600 respondents per year. We expect each survey to last one hour, for a total of about 1,600 annualized burden hours.

  • Semi-structured staff interviews. We expect to interview 132 program staff across all six programs (approximately 22 per program). Annualizing 132 respondents over three years yields 44 respondents per year. We expect each interview to last 1.5 hours on average, for a total of 66 annualized burden hours.

  • Staff survey. We expect to survey 96 program staff who directly interact with participants. Annualizing 96 respondents over three years yields 32 respondents per year. The survey is expected to last 0.75 hours, for a total of 24 annualized burden hours.

  • In-depth participant interviews. We expect to interview 48 participants (eight in each of the six programs). Annualizing 48 respondents over three years yields 16 respondents per year. These interviews are expected to last 2.5 hours on average, for a total of 40 annualized burden hours.

  • Staff reports of program service receipt. We anticipate that 60 staff members (10 in each of the six programs) will enter data on program service receipt into RAPTER. Annualizing the 60 staff members over three years yields 20 staff members per year. We expect 5,200 entries per staff member per year and expect each entry to take just under 2 minutes (0.03 hours), for a total of 3,120 annualized burden hours.

  • Video recordings of coaching sessions. We anticipate that nine staff from each of the six programs will collect these video recordings, for a total of 54 staff. Annualizing over three years yields 18 staff per year. Each staff member will record 10 sessions, and we expect that each session will take 6 minutes (0.1 hours) to set up the video camera and upload the video to a secure transfer site, for a total of 18 annualized burden hours.

Table A.3. Total burden requested under this information collection

| Instrument | Total number of respondents | Annual number of respondents | Number of responses per respondent | Average burden hours per response | Annual burden hours | Average hourly wage | Total annual cost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline data collection – study participants | 6,000 | 2,000 | 1 | 0.33 | 660 | $7.25 | $4,785.00 |
| Baseline data collection – staff | 60 | 20 | 100 | 0.33 | 660 | $33.38 | $22,030.80 |
| First follow-up survey | 4,800 | 1,600 | 1 | 1 | 1,600 | $7.25 | $11,600.00 |
| Semi-structured staff interviews | 132 | 44 | 1 | 1.5 | 66 | $33.38 | $2,203.08 |
| Staff survey | 96 | 32 | 1 | 0.75 | 24 | $33.38 | $801.12 |
| In-depth participant interviews | 48 | 16 | 1 | 2.5 | 40 | $7.25 | $290.00 |
| Staff reports of program service receipt | 60 | 20 | 5,200 | 0.03 | 3,120 | $33.38 | $104,145.60 |
| Video recordings of coaching sessions | 54 | 18 | 10 | 0.1 | 18 | $33.38 | $600.84 |
| Estimated annual burden total | | | | | 6,188 | | $146,456.44 |


Total Annual Cost

The total annual cost is $146,456.44. The total estimated cost figures are computed from the annual burden hours and an average hourly wage for program staff and study participants. We estimate the average hourly wage for program staff to be the average hourly wage of Social and Community Service Managers (SOC 11-9151) taken from the U.S. Bureau of Labor Statistics National Compensation Survey, 2015 ($33.38). The average hourly wage of study participants is estimated to be $7.25, the federal minimum wage.
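For readers who wish to verify the arithmetic, the short Python sketch below recomputes each row of Table A.3 from the annualized respondent counts, responses per respondent, hours per response, and hourly wages shown in the table. It is offered only as an illustrative check of the calculation, not as part of the data collection.

```python
# Illustrative check of Table A.3: for each instrument,
# annual burden hours = annual respondents x responses per respondent x hours per response,
# and annual cost = annual burden hours x average hourly wage.

rows = [
    # (instrument, annual respondents, responses per respondent, hours per response, hourly wage)
    ("Baseline data collection - participants", 2000, 1, 0.33, 7.25),
    ("Baseline data collection - staff",        20, 100, 0.33, 33.38),
    ("First follow-up survey",                  1600, 1, 1.00, 7.25),
    ("Semi-structured staff interviews",        44, 1, 1.50, 33.38),
    ("Staff survey",                            32, 1, 0.75, 33.38),
    ("In-depth participant interviews",         16, 1, 2.50, 7.25),
    ("Staff reports of service receipt",        20, 5200, 0.03, 33.38),
    ("Video recordings of coaching sessions",   18, 10, 0.10, 33.38),
]

total_hours = 0.0
total_cost = 0.0
for name, respondents, responses, hours_per_response, wage in rows:
    hours = respondents * responses * hours_per_response
    cost = hours * wage
    total_hours += hours
    total_cost += cost
    print(f"{name}: {hours:,.0f} hours, ${cost:,.2f}")

print(f"Estimated annual burden total: {total_hours:,.0f} hours, ${total_cost:,.2f}")
# Expected totals: 6,188 hours and $146,456.44, matching Table A.3.
```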

A13. Cost Burden to Respondents or Record Keepers

There are no additional costs to respondents or record keepers.

A14. Estimate of Cost to the Federal Government

The total cost for the data collection activities under this current request will be $12,078,065. Annual costs to the Federal government will be $4,026,022 for the proposed data collection. These costs are inclusive of design, implementation, monitoring of random assignment, survey administration, and survey analysis and reporting. The costs associated with the second follow-up survey will be included in a subsequent submission to OMB.

A15. Change in Burden

This change request increases the burden associated with all data collection activities due to the increase from three to six programs (from 3,094 hours to 6,188 hours). This increased burden is reflected in A.12 and Table A.3.

A16. Plan and Time Schedule for Information Collection, Tabulation, and Publication

Plans for Tabulation

Impact Study

The impact analysis will estimate the effectiveness of each coaching intervention in the evaluation. The goal of the impact analysis is to compare observed outcomes for study participants who were offered the coaching intervention with outcomes for members of a control group who were not offered coaching. We will use the experience of the control group as a measure of what would have happened to the treatment group participants in the absence of the intervention. Random assignment ensures that, at baseline, the two groups of study participants are unlikely to differ systematically on any characteristic. Observed differences in outcomes between treatment and control group members can therefore be attributed to the intervention rather than to preexisting differences between the groups.

We will use the baseline data to describe the study participants in each coaching intervention. We will use t-tests to assess whether random assignment successfully generated treatment and control groups with similar baseline characteristics and whether survey respondents in the two groups are similar.
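A minimal sketch of how such baseline balance tests could be run is shown below. It assumes a pandas DataFrame with one row per study participant, a 0/1 treatment indicator, and illustrative covariate names; the actual analysis files and variable names may differ.

```python
# Minimal sketch, assuming a pandas DataFrame `baseline` with one row per study
# participant, a 0/1 `treatment` column, and baseline characteristics such as
# `age` and `earnings_prior_year` (illustrative column names only).
import pandas as pd
from scipy import stats

def balance_tests(baseline: pd.DataFrame, characteristics: list[str]) -> pd.DataFrame:
    """Two-sample t-tests comparing treatment and control means at baseline."""
    results = []
    for var in characteristics:
        treat = baseline.loc[baseline["treatment"] == 1, var].dropna()
        control = baseline.loc[baseline["treatment"] == 0, var].dropna()
        t_stat, p_value = stats.ttest_ind(treat, control, equal_var=False)
        results.append({"characteristic": var,
                        "treatment_mean": treat.mean(),
                        "control_mean": control.mean(),
                        "p_value": p_value})
    return pd.DataFrame(results)

# Example call (column names are placeholders):
# print(balance_tests(baseline, ["age", "earnings_prior_year"]))
```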

Differences in means or proportions of follow-up outcomes between the treatment and control groups will provide unbiased estimates of the impacts of the intervention. More precise estimates will be obtained using regression models to control for random differences in the baseline characteristics of treatment and control group members. In their simplest form, these models can be expressed by the following equation:

$y_i = \alpha + X_i \beta + \delta T_i + \varepsilon_i$,

where $y_i$ is an outcome for person $i$ (such as earnings); $\alpha$ is a constant; $X_i$ is a vector of baseline characteristics (such as gender, age, and race/ethnicity); $\beta$ is a vector representing the relationship between each baseline characteristic and the outcome; $T_i$ is an indicator for whether person $i$ received treatment; and $\varepsilon_i$ is an error term. The coefficient $\delta$ represents the estimated impact of the intervention. We will estimate these models separately for each coaching intervention.
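As an illustration, the sketch below shows one way this regression could be estimated in Python with statsmodels; the DataFrame and covariate names (earnings, treatment, age, female, race_ethnicity) are placeholders rather than the evaluation's actual variable names.

```python
# Minimal sketch, assuming a pandas DataFrame with a 0/1 `treatment` indicator,
# an outcome column, and illustrative baseline covariates.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_impact(df: pd.DataFrame, outcome: str = "earnings") -> tuple[float, float]:
    """Regress the outcome on treatment status and baseline covariates;
    the coefficient on `treatment` corresponds to delta in the equation above."""
    model = smf.ols(f"{outcome} ~ treatment + age + female + C(race_ethnicity)",
                    data=df).fit()
    return model.params["treatment"], model.bse["treatment"]  # impact estimate and its standard error
```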

If the sample is large enough, we will conduct a subgroup analysis to examine who benefits most from the intervention. We will estimate subgroup effects using the following equation:

$y_i = \alpha + X_i \beta + \delta T_i + \gamma S_i + \lambda (T_i \times S_i) + \varepsilon_i$,

where $S_i$ is an indicator for whether person $i$ is part of a subgroup; $\gamma$ represents the relationship between subgroup status and the outcome; and $\lambda$ represents the additional effect of treatment for those in the subgroup. We will consider subgroups that are appropriate for the intervention’s target population, such as those defined by work readiness, employment challenges, or TANF history.
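The interaction model can be sketched in the same way; again, the column names (including the 0/1 subgroup indicator) are illustrative assumptions.

```python
# Minimal sketch extending the impact model with a treatment-by-subgroup interaction.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_subgroup_effect(df: pd.DataFrame, outcome: str = "earnings") -> dict:
    """`treatment * subgroup` adds both main effects and their interaction; the
    interaction coefficient corresponds to lambda in the equation above."""
    model = smf.ols(f"{outcome} ~ treatment * subgroup + age + female + C(race_ethnicity)",
                    data=df).fit()
    return {"impact_outside_subgroup": model.params["treatment"],                # delta
            "additional_impact_in_subgroup": model.params["treatment:subgroup"]}  # lambda
```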

Implementation Study

The implementation study has three main objectives. First, to identify the features and conditions necessary to replicate each coaching intervention, the study will document in detail the interventions and the contexts in which they are implemented. Second, because interpreting the impact estimates requires a clear understanding of the planned intervention and how it was actually delivered, the study will describe participants’ experiences with coaching and how these experiences differed from the counterfactual experiences of the control group. Third, by documenting implementation challenges and solutions, as well as the intervention features that staff and participants view as effective, the study may suggest possible intervention refinements.

Researchers will reduce the qualitative data—write-ups from staff interviews, transcriptions of participant interviews, and analyses of video recordings—to a manageable number of topics and themes for analysis. They will develop a coding scheme organized according to the three objectives and key research questions and aligned with each program’s logic model. A small, trained team will code field notes from site visits and in-depth interview transcriptions using qualitative analysis software. To ensure that codes are applied consistently across team members, all team members will code an initial set of documents, after which differences in their coding will be identified and resolved.

Information collected from the multiple data sources will be summarized in tables. For the qualitative data, theme tables will be developed that identify common themes across respondents for specific topics or research questions and examine the similarities and differences across the six programs (Yin 1994). The extent to which the programs were implemented with fidelity will be examined by completing a fidelity checklist for each program. The checklist will include five elements of fidelity referenced by Carroll et al. (2007): (1) whether the core or essential intervention components were implemented, (2) adherence to other aspects of the service model, (3) service quality, (4) dosage offered, and (5) participant engagement. Key challenges for replicating the coaching interventions, and promising practices to overcome them, will be identified.
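As one possible way to record the checklist in analysis files, the sketch below defines a simple record with the five Carroll et al. (2007) elements; the field names and rating scales are hypothetical, not the evaluation's actual instrument.

```python
# Minimal sketch, assuming the five fidelity elements from Carroll et al. (2007)
# are recorded for each program as simple ratings; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class FidelityChecklist:
    program: str
    core_components_implemented: bool   # (1) core or essential intervention components delivered
    adherence_rating: int               # (2) adherence to other aspects of the service model, e.g., 1-5
    quality_rating: int                 # (3) service quality, e.g., 1-5
    dosage_offered: float               # (4) dosage offered, e.g., average sessions per participant
    engagement_rating: int              # (5) participant engagement, e.g., 1-5

# example = FidelityChecklist("Program A", True, 4, 4, 12.0, 3)
```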

Time Schedule and Publication

Study enrollment and baseline data collection began in Spring 2018 for FaDSS and are expected to begin in Fall 2018 for the additional sites (LIFT, Work Success, and Goal4 It!), pending OMB approval. Over the duration of the evaluation, a series of reports will be generated; their timing is shown in Table A.4. Two reports will be produced on the impact findings, based on the first and second follow-up surveys, respectively. Reports on the implementation study include a detailed report describing each program and a report examining the implementation findings across all six programs (a cross-site implementation study report). In addition to these reports, the evaluation may provide opportunities for analyzing and disseminating additional information through special topics reports and research or issue briefs. We will also provide a public- or restricted-use data file so that others can replicate and extend our analyses.

Table A.4. Study schedule

| Activity | Timing* |
| --- | --- |
| Data collection | |
| Sample enrollment and baseline data collection | Spring 2018 through Spring 2019 for FaDSS; Summer 2018 through Summer 2019 for LIFT, Work Success, and Goal4 It!; not applicable for the two MyGoals sites |
| Implementation study data collection | Summer 2018 through Summer 2020 |
| First follow-up survey | Spring 2018 through Fall 2019 |
| Second follow-up survey | Fall 2019 through Fall 2020 |
| Reporting | |
| Implementation study report(s) | Winter 2019–2020 |
| First follow-up findings report | December 2020 |
| Second follow-up findings report | June 2021 |
| Special topics reports | To be determined |

*All dates dependent on date of OMB approval of non-substantive change.

A17. Reasons Not to Display OMB Expiration Date

All instruments will display the expiration date for OMB approval.

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.


References

Annie E. Casey Foundation. “Financial Coaching: A New Approach for Asset Building?” Baltimore, MD: Annie E. Casey Foundation, 2007. Retrieved from www.aecf.org.

Barnow, B. S., and D. Greenberg. “Do Estimated Impacts on Earnings Depend on the Source of the Data Used to Measure Them? Evidence from Previous Social Experiments.” Evaluation Review, vol. 39, no. 2, April 2015.

Baumgartner, R., and P. Rathbun. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the Annual Conference of the American Association of Public Opinion Research, Norfolk, VA, 1997.

Blair, C., and C. Raver. “Improving Young Adults’ Odds of Successfully Navigating Work and Parenting: Implications of the Science of Self-Regulation for Dual-Generation Programs.” Draft report submitted to Jack Shonkoff, Center on the Developing Child, Harvard University, January 2015.

Bond, Gary R., S. J. Kim, D. R. Becker, S. J. Swanson, R. E. Drake, I. M. Krzos, V.V. Fraser, S. O'Neill, and R. L. Frounfelker. "A Controlled Trial of Supported Employment for People with Severe Mental Illness and Justice Involvement." Psychiatric Services, vol. 66, no. 10, 2015.

Carroll, Christopher, M. Patterson, S. Wood, A. Booth, J. Rick, and S. Balain. “A Conceptual Framework for Implementation Fidelity.” Implementation Science, vol. 2, no. 1, November 2007, pp. 40–48.

Davis, Lori L., A. C. Leon, R. Toscano, C. E. Drebing, L. C. Ward, P. E. Parker, T. M. Kashner, and R. E. Drake. “A Randomized Controlled Trial of Supported Employment Among Veterans with Posttraumatic Stress Disorder.” Psychiatric Services, vol. 63, no. 5, May 2012, pp. 464–470.

Groves, R.M., E. Singer, and A.D. Corning. “A Leverage-Saliency Theory of Survey Participation: Description and Illustration.” Public Opinion Quarterly, vol. 64, 2000, pp. 299–308.

Hamilton, Gayle. “Improving Employment and Earnings for TANF Recipients.” Washington, DC: The Urban Institute, Office of Planning, Research and Evaluation, March 2012.

Lambert, E. Y., and W. W. Wiebel (eds.). The Collection and Interpretation of Data from Hidden Populations. Washington, DC: National Institute on Drug Abuse, 1990.

Martinez-Ebers, V. “Using Monetary Incentives with Hard-to-Reach Populations in Panel Surveys.” International Journal of Public Opinion Research, vol. 9, 1997, pp. 77–86.

Mullainathan, S., and E. Shafir. Scarcity: Why Having Too Little Means So Much. New York: Henry Holt & Company, 2013.

National Research Council. Studies of Welfare Populations: Data Collection and Research Issues. Washington, DC: The National Academies Press, 2001.

Office of Information and Regulatory Affairs. “Questions and Answers When Designing Surveys for Information Collections.” Office of Management and Budget, October 2016.

Pavetti, LaDonna. “Using Executive Function and Related Principles to Improve the Design and Delivery of Assistance Programs for Disadvantaged Families.” Washington, DC: Center on Budget and Policy Priorities, May 2014.

Ruiz de Luzuriaga, Nicki. Coaching for Economic Mobility. Boston, MA: Crittenton Women’s Union, 2015.

Shettle, C., and G. Mooney. “Monetary Incentives in Government Surveys.” Journal of Official Statistics, vol. 15, 1999, pp. 231–250.

Singer, E., and R. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues, edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro, pp. 105–128. Washington, DC: National Academy Press, 2002.

Singer, E., J. Van Hoewyk, N. Gebler, T. Raghunathan, and K. McGonagle. “The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys.” Journal of Official Statistics, vol. 15, no. 2, 1999, pp. 217–230.

Singer, Eleanor, and C. Ye. “The Use and Effects of Incentives in Surveys.” Annals of the American Academy of Political and Social Science, vol. 645, no. 1, 2013, pp. 112–141.

Theodos, Brett, Margaret Simms, Mark Treskon, Christina Stacy, Rachel Brash, Dina Emam, Rebecca Daniels, and Juan Collazos. “An Evaluation of the Impacts and Implementation Approaches of Financial Coaching Programs.” Washington, DC: Urban Institute, October 2015. Available at www.urban.org/sites/default/files/alfresco/publication-pdfs/2000448-An-Evaluation-of-the-Impacts-and-Implementation-Approaches-of-Financial-Coaching-Programs.pdf.

What Works Clearinghouse. “Assessing Attrition Bias.” Available at: https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_attrition_v2.1.pdf. 2013.

Wittenburg, D., D. Mann, and A. Thompkins. "The Disability System and Programs to Promote Employment for People with Disabilities." IZA Journal of Labor Policy, vol. 2, no. 4, 2013. doi:10.1186/2193-9004-2-4.

Yin, R. Case Study Research: Design and Methods. 2nd edition. Beverly Hills, CA: Sage Publishing, 1994.


1 We decided against an incentive approach which begins with a lower incentive offer and then graduates to a higher offer for the resistant cases, because we wanted to avoid training sample members to hold out for higher incentive offers in the second follow-up.

2 After achieving the anticipated response rate, we will cease active pursuit of additional responses through locating or outgoing calls. We will allow additional interested sample members to respond by keeping the system open to accept incoming online surveys or phone calls.
