
Evaluation of Employment Coaching for TANF and Low-Income Populations

(Second Follow-Up Survey)



OMB Information Collection Request

0970-0506




Supporting Statement

Part B

Revised March 2020


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:

Hilary Bruck

Victoria Kabak


B1. Respondent Universe and Sampling Methods

The Office of Planning, Research, and Evaluation (OPRE) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for a second follow-up survey conducted for the Evaluation of Employment Coaching for TANF and Related Populations (0970-0506). The objective of this evaluation is to provide information on coaching interventions implemented by Temporary Assistance for Needy Families (TANF) agencies and other employment programs. The evaluation will describe up to six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve their overall well-being. The evaluation will include both an experimental impact study and an implementation study. The second follow-up survey will contribute to the experimental impact study.

Programs selected for the evaluation, which are described in Supporting Statement A, are already participating in the study under previous information collection requests (ICRs) approved by OMB (0970-0506). Each program is expected to recruit 1,000 eligible people, for a total of 6,000 participants across all six programs. After participants consent to participate in the study (see Attachment A), half are randomly assigned to the treatment group and will be offered coaching services; the other half are randomly assigned to the control group and will not be offered these coaching services.

The previous ICR (0970-0506) covered data collection activities for both an impact and an implementation study. Approved data collection activities for the impact study include: (1) baseline data collection and (2) the first follow-up survey. Approved data collection activities for the implementation study include: (1) semi-structured staff interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions.

This ICR seeks clearance for a second follow-up survey for the impact study (Attachment N). The follow-up survey will be administered to 1,000 participants per program; if a program enrolls more than 1,000 participants, the survey will be administered to a random sample of 1,000 of that program's study participants. We expect that 80 percent will complete the survey, for a total of 800 respondents per program (approximately 4,800 across all six programs).

B2. Procedures for Collection of Information

The second follow-up survey will be made available to treatment and control group members approximately 21 to 24 months after random assignment. Study participants will be contacted by mail approximately one week before the start of data collection to notify them of the upcoming survey request (Attachment I).

Table B.1 reports program-level minimum detectable impacts on outcomes obtained from survey data. We assume a study sample of 1,000 people per program (500 each in the treatment and control groups). With an 80 percent response rate, the sample of survey respondents would include 800 people per program (400 in the treatment group and 400 in the control group).

Table B.1. Minimum detectable effects on survey-based outcomes, by size of survey sample

Sample size (treatment and control)    Minimum detectable effect
500                                    0.25
1,000                                  0.18
2,000                                  0.13

Assumptions: People are assigned with equal probability to the treatment and control groups. We assume that covariates in the regression model will explain 20 percent of the variation in the outcome measures. All power calculations are based on the following formula:

MDE = (t(1 − α/2, df) + t(1 − β, df)) × √[(1 − R²) / (n · p · (1 − p))],

where t(·, df) is the inverse t distribution with df degrees of freedom, α is the significance level of the test, β is the level of Type II error, R² is the variance in outcomes explained by baseline characteristics, n is the number of participants after attrition, and p is the fraction of study participants in the treatment group. We assume a two-tailed test with α = 0.05 and power of 80 percent (β = 0.20). We assume 20 percent attrition in the survey data.

These samples are large enough to detect the expected impacts of the programs, even accounting for attrition in the survey sample. With a survey sample of 1,000 study participants (which implies an analysis sample of 800 people based on an 80 percent response rate), we will be able to detect an impact of 0.18 standard deviations. Standardized evidence reviews, such as the What Works Clearinghouse, consider effect sizes of 0.25 standard deviations or larger to be substantively important (U.S. Department of Education 2014).
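To illustrate the calculation, the sketch below reproduces the minimum detectable effects in Table B.1 under the stated assumptions (equal allocation to treatment and control, R² = 0.20, a two-tailed test with α = 0.05, 80 percent power, and 20 percent attrition). It is an illustrative check only; the function name and structure are not part of the approved study materials.

```python
import math
from scipy import stats

def mde(n_respondents, r_squared=0.20, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect in standard deviation units."""
    df = n_respondents - 2
    t_alpha = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value
    t_beta = stats.t.ppf(power, df)            # value corresponding to 80 percent power
    return (t_alpha + t_beta) * math.sqrt(
        (1 - r_squared) / (n_respondents * p_treat * (1 - p_treat))
    )

# Survey samples of 500, 1,000, and 2,000 with a 20 percent attrition (80 percent response) rate
for n_sample in (500, 1000, 2000):
    n_resp = int(n_sample * 0.80)
    print(f"{n_sample:>5} sampled -> {n_resp:>5} respondents -> MDE = {mde(n_resp):.2f}")
# Prints approximately 0.25, 0.18, and 0.13, matching Table B.1
```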

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Expected Response Rates

The response rates for MyGoals participants in completed monthly cohorts are on target to reach 80 percent, with small differences in response rates between the treatment and control groups; these response rates have been achieved with shorter field periods, which limit recall issues. Response rates in the Family Development and Self-Sufficiency (FaDSS), LIFT, Jefferson County Colorado Works, and Work Success sites have been significantly lower, and the early cohorts from these sites have required longer field periods. Given the patterns of survey response in these four sites overall and by research group, there is a risk that our analysis will result in biased estimates of program impacts and will underrepresent participants in key groups. For this reason, we propose changing the incentive structure and amount for these sites for both the first and second follow-up surveys, from the approved two-tiered structure to a $50 incentive for completing each follow-up survey, irrespective of whether participants complete the survey within the four-week “early bird” period. The justification for this change is included in Attachment P (“Request to change burden and incentive structure-amount”).

We have obtained response rates of 80 percent or higher when conducting follow-up surveys with similar populations. In our evaluation of the Building Nebraska Families program (OMB control number 0970-0246), we achieved an 87 percent response rate on the 18-month follow-up survey and an 83 percent response rate on the 30-month follow-up survey. That program, which served a population similar to the current study’s, was designed to help TANF recipients and other low-income people enter, maintain, and advance in employment. For the Personal Responsibility Education Program (PREP) evaluation (OMB control number 0970-0398), we are on track to achieve response rates above 80 percent for the Healthy Families San Angelo program, a home-visitation program that, like the current study, targets a low-income population. At this site, the cohorts for whom data collection is complete have a response rate of 85 percent on the one-year follow-up survey and 83 percent on the two-year follow-up survey. For the Parents and Children Together follow-up surveys, using the strategies outlined below, we achieved an 88 percent response rate among the low-income mothers and fathers in the healthy marriage program study (OMB control number 0970-0403). These examples demonstrate the usefulness of our responsive design strategies for achieving high response rates with low-income, at-risk populations. The combination of sound planning, the use of paradata and adaptive design, and our experience with at-risk populations produces balanced, high-quality data.

Dealing with Nonresponse

All analysis of the second follow-up survey data will account for survey nonresponse using nonresponse weights. Weights will be calculated using standard techniques to estimate the probability of nonresponse as a function of baseline characteristics. The evaluation team does not anticipate significant item nonresponse based on prior experience asking similar questions with similar populations, as described in the studies above.

Some survey nonresponse is inevitable, although it will be minimized by providing incentives. The evaluation team will analyze nonresponse to assess whether the sample of second follow-up survey respondents is representative of the full study sample. Using the data on participants’ characteristics collected at baseline, Mathematica will conduct statistical tests (chi-square and t-tests) to gauge whether the treatment group members who participated in data collection are representative of all the treatment group members, whether the control group members who participated in data collection are representative of all the control group members, and whether there are systematic differences in the treatment and control group members who responded to the survey.
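As one illustration of how these checks could be carried out, the sketch below compares respondents and nonrespondents on baseline characteristics, using chi-square tests for categorical variables and t-tests for continuous variables, run separately within each research group. It assumes a pandas DataFrame with a 0/1 response indicator; the column names and the threshold for treating a variable as categorical are illustrative assumptions, not the study’s specification.

```python
import pandas as pd
from scipy import stats

def representativeness_tests(df, covariates, respondent_flag="responded"):
    """Test whether respondents differ from nonrespondents on baseline characteristics.

    Intended to be run separately within the treatment group and within the
    control group for each program.
    """
    p_values = {}
    for var in covariates:
        if df[var].nunique() <= 10:
            # Categorical baseline characteristic: chi-square test of independence
            table = pd.crosstab(df[respondent_flag], df[var])
            _, pval, _, _ = stats.chi2_contingency(table)
        else:
            # Continuous baseline characteristic: two-sample t-test
            resp = df.loc[df[respondent_flag] == 1, var].dropna()
            nonresp = df.loc[df[respondent_flag] == 0, var].dropna()
            _, pval = stats.ttest_ind(resp, nonresp, equal_var=False)
        p_values[var] = pval
    return pd.Series(p_values, name="p_value")
```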

The evaluation team will use two approaches to correct for potential nonresponse bias in the estimation of program impacts. First, the regression models described in A16 will adjust for observed differences between the characteristics of treatment and control group respondents. Second, because this regression procedure will not correct for differences between respondents and nonrespondents in each research group, sample weights will be constructed so that the weighted baseline characteristics of respondents in the treatment and control group in each program are similar to those of the full sample (respondents and nonrespondents). These weights will be constructed using data from the baseline surveys.
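The weighting step described above can be implemented in several ways; one standard technique consistent with this description is an inverse-probability-of-response weight estimated from a logistic regression of survey response on baseline characteristics. The sketch below, using statsmodels, is illustrative only: the data structure and column names are assumptions, and in practice the model would be fit separately by program and research group.

```python
import pandas as pd
import statsmodels.api as sm

def nonresponse_weights(df, covariates, respondent_flag="responded"):
    """Estimate response propensities and return inverse-probability weights.

    Fit separately for each program and research group, so that the weighted
    baseline characteristics of respondents in that group resemble those of
    the full sample (respondents and nonrespondents).
    """
    X = sm.add_constant(df[covariates])
    result = sm.Logit(df[respondent_flag], X).fit(disp=0)
    p_respond = pd.Series(result.predict(X), index=df.index)  # estimated response propensity
    weights = 1.0 / p_respond                                  # inverse-probability weights
    # Only respondents carry a weight into the analysis file
    return weights.where(df[respondent_flag] == 1)
```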

Maximizing Response Rates

Impact Study

Methods for maximizing response rates for the second follow-up survey are discussed below. These are consistent with the procedures proposed and approved for the first follow-up survey.


  • Use a tested questionnaire. As with the first follow-up survey, the second follow-up survey has been tailored to the specific circumstances of this evaluation but is based closely on the baseline survey from the Evaluation of the Supplemental Nutrition Assistance Program (SNAP) Employment and Training Pilots (OMB control number 0584-0604), a U.S. Department of Agriculture-funded initiative that received OMB approval, was extensively tested, and was successfully fielded. The goal of the SNAP Employment and Training evaluation was to rigorously test innovative strategies for increasing employment and earnings among SNAP participants and reducing their dependence on SNAP and other public assistance programs; its population and goal were thus similar to those of the current study. A question-by-question justification for the items included in the second follow-up survey is presented in Attachment O. The second follow-up survey was also pretested with nine people.

  • Use a straightforward, undemanding questionnaire. The second follow-up survey is designed to be easy to complete. The questions use clear and straightforward language.

  • Use incentives. The Office of Management and Budget’s Office of Information and Regulatory Affairs (OIRA) approved a two-tiered incentive structure with an “early bird” incentive that provides survey respondents $35 if they complete the survey within four weeks of the initial notification and $25 if they complete it after four weeks. We have employed this incentive structure for participants in all six programs during the administration of both the first and second follow-up surveys to date. We now propose that the two-tiered incentive structure continue only for study participants in the two MyGoals sites in Baltimore and Houston, and that participants from the other four sites (FaDSS, LIFT, Jefferson County Colorado Works, and Work Success) be offered a $50 incentive for completing each survey, irrespective of whether they complete the survey within the four-week “early bird” period. We propose this change because, given the patterns of survey response in those four sites, there is a risk that our analysis will result in biased estimates of program impacts and will underrepresent participants in key groups. The four sites for which we propose the higher monetary incentive differ from the MyGoals sites, which are run in conjunction with the public housing programs where participants live; as a result, MyGoals study participants have exhibited higher levels of cooperation with the evaluation. The response rates for MyGoals participants in completed monthly cohorts are on target to reach 80 percent, with small differences in response rates between the treatment and control groups, and these response rates have been achieved with shorter field periods, which limit recall issues. The response rates for the other four sites have been significantly lower and have required longer field periods for the first follow-up survey, with some monthly cohorts having to be closed before all participants completed the survey because those cohorts were due to start the second follow-up survey. The justification for the $50 incentive for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites is included in Attachment P (“Request to change burden and incentive structure-amount”).

  • Allow respondents to complete the survey in different ways. The participants will be able to complete the survey either online (using a computer, tablet, or smartphone) or by telephone.

  • Send reminder notifications. The evaluation team will use a combination of letters, emails, texts, and telephone calls to encourage participation. These notifications are included in Attachment I. For example, the advance letter (and insert) will be mailed to participants at the start of data collection. An email notification will be sent to participants who have not yet completed the survey about three weeks after the start of data collection. The refusal avoidance letter will be mailed to participants who have not yet completed the survey and whom we believe will respond but are avoiding or delaying doing so. A locating letter will be sent to participants who have not completed the survey after all available contact information has gone through a locating process (described below). The advance materials for the survey currently inform study participants that the survey will take an average of 60 minutes to complete. Because the average length of the interviews to date is 45 minutes, ACF proposes to change the burden estimate used in communications with study participants across all sites from 60 to 45 minutes. Changing the burden estimate to the lower, more accurate number could increase the likelihood that sample members agree to complete the survey. Additional information is provided in Attachment P (“Request to change burden and incentive structure-amount”).

  • Obtain accurate, up-to-date contact information. Detailed contact information will be collected at baseline (Attachment B) that includes telephone numbers, addresses, and email addresses to aid in locating participants to complete the follow-up surveys. Detailed contact information will also be collected for three relatives, friends, neighbors, and/or past employers whom the participant selects and who may be able to help locate the participants if they move. The evaluation team will also request updates from project staff, if they have any. Before the start of the second follow-up survey, participant contact information will be updated through online database searches. The study team also works with study sites to obtain participant contact information from the programs with a focus on updating contact information for nonresponding sample members.

  • Use intensive locating methods, as needed. Participants will initially be notified about the survey by mail and email and asked to complete it via the web, though they will also be able to complete it via telephone at that time (Attachment I). At that point, they will be offered a higher incentive to increase response rates and minimize differential response rates between treatment and control groups. After four weeks, the evaluation team will attempt to contact the participants via telephone at the numbers provided in the baseline data, in order to have them complete the survey via telephone. If participants cannot be reached by telephone, the evaluation team will contact the friends, family, neighbors, and/or past employers identified by the participant during the baseline data collection, for help in locating them. Customized, individual searches for contact information using specialized databases will be conducted next. Finally, if study participants still cannot be located, trained field locators will go in person to the study participant’s home and neighborhood. If they locate the study participant, the field locators will lend him or her a smartphone to complete the survey.

  • Use paradata. Data will be collected on each attempt to contact a respondent including the mode, time, date, interviewer, and contact results. Examining these paradata will help to identify the most effective calling times and interviewers. Paradata will also be used to determine which methods of contact (letters, emails, texts, or telephone calls) are proving to be the most successful in this study, so that the frequency and type of contacts can be adjusted to achieve high response rates.

  • Monitor response rates closely by group. Response rates will be monitored closely throughout the fielding period, with an eye to any treatment–control differences that may emerge. If treatment–control differences are observed, then the locating efforts will be intensified for the group with the lower response rate to minimize differential nonresponse.

  • Other mitigations to address emerging issues in response rates. As it became apparent that survey production in four sites would likely be insufficient to support unbiased estimates of program impacts, the contractor took additional steps to identify causes of non-response in these sites and to mitigate them. These steps are described in Attachment P (“Request to change burden and incentive structure-amount”).

B4. Tests of Procedures or Methods to be Undertaken

The second follow-up survey was pretested on nine people similar to the survey’s target population to estimate survey length, assess respondents’ understanding of the survey questions, and identify improvements to the flow and structure of the instruments. We used cognitive interviewing and respondent and interviewer debriefings during these pretests.

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The individuals listed below consulted on the statistical aspects of the study to ensure the technical soundness of the research, or will be collecting and/or analyzing the data:

        1. OPRE

Hilary Bruck, Senior Social Science Research Analyst

Victoria Kabak, Social Science Research Analyst


        2. Business Strategy Consultants

Gabrielle Newell, Contract Social Science Research Analyst


        3. Mathematica Policy Research

Dr. Sheena McConnell, Project Director

Dr. Quinn Moore, Deputy Project Director

Dr. Michelle Derr, Principal Investigator

Shawn Marsh, Survey Director

        4. Abt Associates

Dr. Alan Werner, Principal Investigator

Dr. Bethany Boland, Senior Analyst


        5. University of Chicago

Dr. James Heckman, Measurement Expert

References

U.S. Department of Education. WWC Procedures and Standards Handbook. Washington, DC: Institute of Education Sciences, March 2014. Available at http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=19. Accessed July 14, 2016.

