
Evaluation of Employment Coaching for TANF and Low-Income Populations




OMB Information Collection Request

0970-0506




Supporting Statement

Part B

Revised October 2020


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:

Hilary Bruck

Victoria Kabak


B1. Respondent Universe and Sampling Methods

The Office of Planning, Research, and Evaluation (OPRE) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for non-substantive changes to previously approved data collection instruments and incentives for the Evaluation of Employment Coaching for TANF and Related Populations (OMB #0970-0506). The objective of this evaluation is to provide information on coaching interventions implemented by Temporary Assistance for Needy Families (TANF) agencies and other employment programs. The evaluation will describe up to six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve their overall well-being. The evaluation includes both an experimental impact study and an implementation study.

Programs selected for the evaluation, which are described in Supporting Statement A, are already participating in the study under previous information collection requests (ICRs) approved by OMB (OMB #0970-0506). Each program is expected to recruit 1,000 eligible people, for a total of 6,000 participants across all six programs. After participants consent to participate in the study (Attachment A), half are randomly assigned to the treatment group and are offered coaching services; the other half are randomly assigned to the control group and are not offered these coaching services.

The previous ICR (OMB #0970-0506) covered data collection activities for both an impact and an implementation study. Approved data collection activities for the impact study include: (1) baseline data collection and (2) the first follow-up survey. Approved data collection activities for the implementation study include: (1) semi-structured management, staff, and supervisor interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions. Approved data collection activities also include a second follow-up survey for the impact study (Attachment N). The follow-up survey is being administered to 1,000 participants per program. If the study includes more than 1,000 participants per program, then the survey will be administered to a random sample of 1,000 study participants. We expect that 80 percent will complete the survey for a total of 800 respondents per program (approximately 4,800 across all six programs).

This request seeks approval of non-substantive changes to two previously approved implementation study data collection instruments to systematically capture descriptive information related to the 2019 novel coronavirus disease (COVID-19) pandemic. We propose to conduct additional management, staff, and supervisor interviews and in-depth participant interviews in order to collect descriptive information regarding program responses to, and participants’ experiences with, the COVID-19 pandemic. We also request a slight increase in the incentive amount for the additional in-depth participant interviews and an increase in the estimated burden to account for this additional data collection. Finally, we request changes to the structure and amount of the survey incentive offered to respondents from two of the study sites who complete either the first or second follow-up survey as part of the impact evaluation, as well as minor revisions to the survey instruments and notifications to reflect the requested changes. The justification for these non-substantive change requests is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.

B2. Procedures for Collection of Information

The Office of Management and Budget’s Office of Information and Regulatory Affairs (OIRA) approved the conduct of the second follow-up survey in a previous ICR (OMB #0970-0506). This survey is being made available to treatment and control group members approximately 21 to 24 months after random assignment. Study participants are contacted approximately one week before the start of data collection by mail, to notify them of the upcoming survey request (Attachment I).

Table B.1 reports program-level minimum detectable impacts on outcomes obtained from survey data. We assume a study sample of 1,000 people per program (500 each in the treatment and control groups). With an 80 percent response rate, the sample of survey respondents would include 800 people per program (400 in the treatment group and 400 in the control group).

Table B.1. Minimum detectable effects on survey-based outcomes, by size of survey sample

Sample size (treatment and control)     Minimum detectable effect (standard deviations)
500                                     0.25
1,000                                   0.18
2,000                                   0.13

Assumptions: People are assigned with equal probability to the treatment and control groups. We assume that covariates in the regression model will explain 20 percent of the variation in the outcome measures. All power calculations are based on the following formula: \( MDE = \left(t_{\alpha/2,\,df} + t_{1-\beta,\,df}\right)\sqrt{\frac{1 - R^2}{n\,p(1-p)}} \), where \( t \) is the inverse t distribution with \( df \) degrees of freedom, \( \alpha \) is the significance level of the test, \( \beta \) is the level of Type II error, \( R^2 \) is the variance in outcomes explained by baseline characteristics, \( n \) is the number of participants after attrition, and \( p \) is the fraction of study participants in the treatment group. We assume \( \alpha = 0.05 \) and power is 80 percent (\( \beta = 0.20 \)). We assume 20 percent attrition in the survey data.

These samples are large enough to detect the expected impacts of the programs, even accounting for attrition in the survey sample. With a survey sample of 1,000 study participants (which implies an analysis sample of 800 people based on an 80 percent response rate), we will be able to detect an impact of 0.18 standard deviations. Standardized evidence reviews, such as the What Works Clearinghouse, consider effect sizes of 0.25 standard deviations or larger as substantively important (U.S. Department of Education 2014).
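To illustrate the calculation, the sketch below (not part of the approved study materials) reproduces the minimum detectable effects in Table B.1 from the formula above, under the stated assumptions: a two-sided test at the 5 percent significance level, 80 percent power, a regression R-squared of 0.20, equal allocation to the treatment and control groups, and 20 percent survey attrition.

```python
# Minimal sketch reproducing the MDE calculations behind Table B.1.
# Assumptions mirror those stated in the table notes; this is illustrative
# only and not the study's production code.
from scipy import stats

def mde(released_sample, attrition=0.20, r_squared=0.20, p_treat=0.50,
        alpha=0.05, power=0.80):
    """Minimum detectable effect (in standard deviations) for a released sample."""
    n = released_sample * (1 - attrition)      # analysis sample after attrition
    df = n - 2                                 # approximate degrees of freedom
    t_alpha = stats.t.ppf(1 - alpha / 2, df)   # critical value for the two-sided test
    t_beta = stats.t.ppf(power, df)            # value associated with Type II error
    return (t_alpha + t_beta) * ((1 - r_squared) / (n * p_treat * (1 - p_treat))) ** 0.5

for size in (500, 1000, 2000):
    print(size, round(mde(size), 2))           # prints 0.25, 0.18, 0.13
```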

For this non-substantive change request, we propose conducting additional interviews with program staff and with participants to learn how the programs have changed and how participants’ program engagement and needs have changed as a result of the COVID-19 pandemic. The interview guides in Attachment D. Semi-structured management staff and supervisor interviews_rev and Attachment F. In-depth participant interviews_rev have been revised to reframe some questions and add questions related to COVID-19; questions we do not intend to ask again have been deleted in order to keep the interview length the same as in the previous interviews. The additional interviews will be conducted either by video or by phone, according to each respondent’s preference. This is intended both to reduce burden on respondents and to eliminate the need for in-person data collection, given restrictions related to COVID-19. We request approval to conduct these additional interviews in five of the six sites participating in the evaluation. Work Success has not continued serving participants during COVID-19, so we are not requesting approval to conduct additional interviews in that site. The justification for these additional interviews is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Expected Response Rates

At the time of our March 2020 non-substantive change request, the response rates for MyGoals participants in completed monthly cohorts were on target to reach 80 percent, with small differences in response rates between the treatment and control groups; these response rates had been achieved with shorter field periods, which limit recall issues. Response rates in the Family Development and Self-Sufficiency (FaDSS), LIFT, Jefferson County Colorado Works, and Work Success sites were consistently lower throughout the data collection period, and the early cohorts from these sites had required longer field periods. For this reason, we proposed changing the incentive structure and amount for these sites for both the first and second follow-up surveys, from the approved two-tiered structure to a $50 incentive for completing each follow-up survey, irrespective of whether participants complete the survey within the four-week “early bird” period. This change was approved by OIRA in March 2020 via a non-substantive change request (OMB #0970-0506).

Since that time, the COVID-19 pandemic has necessitated changes in data collection methods, specifically the elimination of in-person data collection techniques, including field locating, for the impact evaluation’s follow-up surveys. There is no firm date for in-person data collection operations to resume, but it will not be before January 2021 at the earliest, contingent on improvement in conditions related to the pandemic. The lack of in-person field locating has depressed response rates across all six sites, and we anticipate that it will continue to do so for the foreseeable future. At the same time, early results from the previous non-substantive change request to increase the survey incentive to $50 for four sites demonstrate that the incentive increase was effective at increasing web and telephone response rates for survey cohorts released after the increase, compared with the MyGoals sites, where no incentive increase was implemented. For these reasons, we request an increase in the first and second follow-up survey incentive for MyGoals participants to $50, irrespective of whether participants complete the survey within the four-week “early bird” period. We believe increasing the survey incentives will help increase response rates, decrease the program–control group response rate differential in the MyGoals sites, and avoid bias in the estimates of the programs’ effectiveness. Additional information regarding the justification for this change is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.

We have obtained response rates of 80 percent or higher when conducting follow-up surveys with similar populations. In our evaluation of the Building Nebraska Families program (OMB #0970-0246), we achieved an 87 percent response rate on the 18-month follow-up survey and an 83 percent response rate on the 30-month follow-up survey. This program, which served a population similar to that of the current study, was designed to help TANF recipients and other low-income people enter, maintain, and advance in employment. For the Personal Responsibility Education Program (PREP) evaluation (OMB #0970-0398), we are on track to achieve response rates above 80 percent for the Healthy Families San Angelo program, a home-visitation program that targets a low-income population similar to that of the current study. At this site, the cohorts for whom data collection is complete have a response rate of 85 percent on the one-year follow-up survey and 83 percent on the two-year follow-up survey. For the Parents and Children Together follow-up surveys, using the strategies outlined below, we achieved an 88 percent response rate for the low-income mothers and fathers in the healthy marriage program study (OMB #0970-0403). All of these examples demonstrate the usefulness of our responsive design strategies for achieving high response rates with low-income, at-risk populations. The combination of sound planning, use of paradata and adaptive design, and our experience with at-risk populations produces balanced, high-quality data.

Dealing with Nonresponse

All analysis of the follow-up survey data will account for survey nonresponse using nonresponse weights. Weights will be calculated using standard techniques to estimate the probability of nonresponse as a function of baseline characteristics. The evaluation team does not anticipate significant item nonresponse based on prior experience asking similar questions with similar populations, as described in the studies above.

Some survey nonresponse is inevitable, although it will be minimized by providing incentives. The evaluation team will analyze nonresponse to assess whether the sample of follow-up survey respondents is representative of the full study sample. Using the data on participants’ characteristics collected at baseline, Mathematica will conduct statistical tests (chi-square and t-tests) to gauge whether the treatment group members who participated in data collection are representative of all treatment group members, whether the control group members who participated in data collection are representative of all control group members, and whether there are systematic differences between the treatment and control group members who responded to the survey.
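The sketch below illustrates one common way to operationalize these checks; the data frame and column names (responded, age, education_level) are hypothetical stand-ins for the baseline file, not variables from the study’s actual data system.

```python
# Illustrative only: compares respondents with nonrespondents within a research
# group using a t-test for a continuous baseline measure and a chi-square test
# for a categorical one, mirroring the tests described above.
import pandas as pd
from scipy import stats

def nonresponse_balance(group: pd.DataFrame) -> dict:
    resp = group[group["responded"] == 1]
    nonresp = group[group["responded"] == 0]
    results = {}
    # t-test on a continuous baseline characteristic (e.g., age)
    results["age"] = stats.ttest_ind(resp["age"], nonresp["age"], equal_var=False)
    # chi-square test on a categorical baseline characteristic (e.g., education)
    table = pd.crosstab(group["responded"], group["education_level"])
    results["education"] = stats.chi2_contingency(table)
    return results

# Run separately for the treatment and control groups; analogous tests comparing
# treatment and control respondents can flag differential nonresponse.
```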

The evaluation team will use two approaches to correct for potential nonresponse bias in the estimation of program impacts. First, the regression models described in A16 will adjust for observed differences between the characteristics of treatment and control group respondents. Second, because this regression procedure will not correct for differences between respondents and nonrespondents in each research group, sample weights will be constructed so that the weighted baseline characteristics of respondents in the treatment and control group in each program are similar to those of the full sample (respondents and nonrespondents). These weights will be constructed using data from the baseline surveys.
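As an illustration of the weighting step, the sketch below constructs inverse-probability-of-response weights from baseline characteristics; the variable names and the use of a logistic model are assumptions made for the example, not a description of the study’s production weighting procedure.

```python
# Illustrative sketch: weights respondents by the inverse of their predicted
# probability of responding, estimated from baseline characteristics, so that
# weighted respondents resemble the full randomized sample.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def nonresponse_weights(sample: pd.DataFrame, covariates: list) -> pd.Series:
    model = LogisticRegression(max_iter=1000)
    model.fit(sample[covariates], sample["responded"])
    p_respond = model.predict_proba(sample[covariates])[:, 1]
    weights = pd.Series(1.0 / p_respond, index=sample.index)
    # Weights apply only to respondents; nonrespondents drop out of the analysis.
    return weights.where(sample["responded"] == 1)
```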

Maximizing Response Rates

Impact Study

Methods for maximizing response rates for the second follow-up survey are discussed below. These are consistent with the procedures proposed and approved for the first follow-up survey.


  • Use a tested questionnaire. As with the first follow-up survey, the second follow-up survey has been tailored to the specific circumstances of this evaluation but is based closely on the Evaluation of the Supplemental Nutrition Assistance Program (SNAP) Employment and Training Pilots baseline survey (OMB #0584-0604), a U.S. Department of Agriculture-funded initiative that received OMB approval, was extensively tested, and was successfully fielded. The goal of the SNAP Employment and Training evaluation was to rigorously test innovative strategies for increasing employment and earnings among SNAP participants and reducing their dependence on SNAP and other public assistance programs. Thus, the population and goal of the SNAP Employment and Training evaluation were similar to those of the current study. A question-by-question justification for the items included in the second follow-up survey is presented in Attachment O. The second follow-up survey was also pretested with nine people.

  • Use a straightforward, undemanding questionnaire. The second follow-up survey is designed to be easy to complete. The questions use clear and straightforward language.

  • Use incentives. OIRA initially approved a two-tiered incentive structure with an “early bird” incentive that provides survey respondents $35 if they complete the survey within four weeks of the initial notification and $25 if they complete it after four weeks. The study team employed this incentive structure for participants in all six programs during the administration of both the first and second follow-up surveys until early spring 2020. In March 2020, we proposed that the two-tiered incentive structure continue only among study participants in the two MyGoals sites in Baltimore and Houston, and that participants from the other four sites (FaDSS, LIFT, Jefferson County Colorado Works, and Work Success) be offered a $50 incentive for completing each survey, irrespective of whether they complete the survey within the four-week “early bird” period. The reason for proposing this change was that, given patterns of survey response in those four sites, there was a risk that our analysis would result in biased estimates of program impacts and would underrepresent participants in key groups. OIRA approved the change to a $50 incentive for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites in March 2020 (OMB #0970-0506). We are now requesting that the incentive offered to respondents from the MyGoals sites who complete the first and second follow-up surveys as part of the impact evaluation be raised from the current differential incentive to $50, in alignment with the other study sites. The response rates for the follow-up surveys are at risk of being much lower than anticipated because in-person locating has stopped due to COVID-19. We believe a higher survey incentive will help increase response rates, decrease the program–control group response rate differential in the MyGoals sites, and avoid bias in the estimates of the programs’ effectiveness. We also request minor revisions to the survey instruments (Attachment C. First follow-up survey_rev and Attachment N. Second follow-up survey_rev) and notifications (Attachment I. Notifications_rev) to reflect the requested changes to the incentive structure and amount. Additional information regarding the justification for this request is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.

  • Allow respondents to complete the survey in different ways. The participants can complete the survey either online (using a computer, tablet, or smartphone) or by telephone.

  • Send reminder notifications. The evaluation team is using a combination of letters, emails, texts, and telephone calls to encourage sample members to complete the survey. These notifications are included in Attachment I. For example, the advance letter (and insert) is mailed to participants at the start of data collection. An email notification is sent to participants who have not yet completed the survey about three weeks after the start of data collection. The refusal avoidance letter is mailed to participants who have not yet completed the survey and who we believe will respond but are avoiding or delaying a response. A locating letter is sent to participants who have not completed the survey after all available contact information has gone through a locating process (described below). The advance materials for the survey originally informed study participants that the survey would take an average of 60 minutes to complete; however, because the average length of the interviews to date is 45 minutes, ACF proposed to change the burden estimate used in communication with study participants across all sites from 60 to 45 minutes. Changing the burden estimate to the lower, more accurate number could increase the likelihood that sample members agree to complete the survey. OIRA approved this change in the burden estimate in March 2020.

  • Obtain accurate, up-to-date contact information. Detailed contact information, including telephone numbers, addresses, and email addresses, is collected at baseline (Attachment B) to aid in locating participants to complete the follow-up surveys. Detailed contact information is also collected for three relatives, friends, neighbors, and/or past employers whom the participant selects and who may be able to help locate the participants if they move. The evaluation team also requests updates from project staff, if they have any. Before the start of the second follow-up survey, participant contact information is updated through online database searches. The study team also works with study sites to obtain participant contact information from the programs, with a focus on updating contact information for nonresponding sample members.

  • Use intensive locating methods, as needed. Participants are initially notified about the survey by mail and email and asked to complete it via the web, though they can also complete it via telephone at that time (Attachment I). At that point, they are offered the approved higher incentive to increase response rates and minimize differential response rates between treatment and control groups. After four weeks, the evaluation team will attempt to contact the participants via telephone at the numbers provided in the baseline data, in order to have them complete the survey via telephone. If participants cannot be reached by telephone, the evaluation team will contact the friends, family, neighbors, and/or past employers identified by the participant during the baseline data collection, for help in locating them. Customized, individual searches for contact information using specialized databases will be conducted next. Finally, before the COVID-19 pandemic, if study participants still could not be located, trained field locators would go in person to the study participant’s home and neighborhood. If they located the study participant, the field locators would lend him or her a smartphone to complete the survey. However, as of mid-March 2020, field locating operations have ceased due to COVID-19. No timetable for resumption of field locating has been established at this time.

  • Use paradata. Data are being collected on each attempt to contact a respondent, including the mode, time, date, interviewer, and result of the contact. Examining these paradata helps identify the most effective calling times and interviewers. Paradata are also used to determine which methods of contact (letters, emails, texts, or telephone calls) are proving the most successful in this study, so that the frequency and type of contacts can be adjusted to achieve high response rates (see the illustrative sketch following this list).

  • Monitor response rates closely by group. Response rates are being monitored closely throughout the fielding period, with an eye to any treatment–control differences that may emerge. If treatment–control differences are observed, then the locating efforts will be intensified for the group with the lower response rate to minimize differential nonresponse.

  • Other mitigations to address emerging issues in response rates. As it became apparent that survey production in four sites would likely be insufficient to support unbiased estimates of program impacts, the contractor took additional steps to identify the causes of nonresponse in these sites and to mitigate them.
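The sketch below illustrates the kind of paradata review described in the “Use paradata” item above; the field names (mode, hour, completed) are hypothetical stand-ins for the contact-attempt records, not the study’s actual case management variables.

```python
# Illustrative only: tabulates survey completion rates by contact mode and
# time of day to flag the most productive outreach strategies.
import pandas as pd

def contact_success_rates(paradata: pd.DataFrame) -> pd.DataFrame:
    # Each row is one contact attempt; "completed" is 1 if the attempt led to
    # a completed survey, 0 otherwise.
    return (paradata
            .groupby(["mode", "hour"])["completed"]
            .agg(attempts="count", completes="sum", rate="mean")
            .reset_index()
            .sort_values("rate", ascending=False))
```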

Implementation Study

In a previous ICR, OIRA also approved offering respondents who participate in the in-depth interviews for the implementation study, which are estimated to take 2.5 hours on average, a $50 gift card. As part of this non-substantive change request, we propose to offer participants a $60 gift card to complete interviews about receiving coaching services during COVID-19, $10 more than for the interviews conducted earlier. It will be more difficult to recruit study participants who are still actively engaged in the coaching programs for these interviews, because fewer people are still participating in the programs. We believe the increased incentive amount will help us recruit sufficient numbers of people to be interviewed. A $60 incentive was recently approved for in-depth interviews for the Next Generation of Enhanced Employment Strategies Project (OMB #0970-0545). The justification for the change in incentive amount for the in-depth participant interviews is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.

B4. Tests of Procedures or Methods to be Undertaken

The second follow-up survey was pretested on nine people similar to the survey’s target population to estimate survey length, assess respondents’ understanding of the survey questions, and identify improvements to the flow and structure of the instruments. We used cognitive interviewing and respondent and interviewer debriefings during these pretests.

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The individuals listed below consulted on the statistical aspects of the study to ensure the technical soundness of the research, or will be collecting and/or analyzing the data:

        1. OPRE

Hilary Bruck, Senior Social Science Research Analyst

Victoria Kabak, Social Science Research Analyst

Gabrielle Newell, Social Science Research Analyst


        2. Mathematica Policy Research

Dr. Sheena McConnell, Project Director

Dr. Quinn Moore, Deputy Project Director

Dr. Michelle Derr, Principal Investigator

Shawn Marsh, Survey Director

        3. Abt Associates

Dr. Alan Werner, Principal Investigator

Dr. Bethany Boland, Senior Analyst


        4. University of Chicago

Dr. James Heckman, Measurement Expert

References

U.S. Department of Education. WWC Procedures and Standards Handbook. Washington, DC: Institute of Education Sciences, March 2014. Available at http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=19. Accessed July 14, 2016.




