Enhanced Services for the Hard-to-Employ Demonstration and Evaluation: Philadelphia 36-Month Follow-Up Data Collection

OMB: 0970-0336









Contract No.: HHS 233-01-0012

Contract Amount: $23.78 million



SUPPORTING STATEMENT

FOR OMB CLEARANCE: PART A


DHHS/ACF/ASPE/DOL

ENHANCED SERVICES FOR THE HARD-TO-EMPLOY (HtE)

DEMONSTRATION AND EVALUATION PROJECT




PHILADELPHIA 36-MONTH DATA COLLECTION INSTRUMENT

September 26, 2007




Prepared for:

U.S. Department of Health and Human Services

Administration for Children and Families
370 L’Enfant Promenade, SW
Washington, DC 20447
Phone: 202-401-5070
Project Officer: Girley A. Wright

Office of the Assistant Secretary for Planning and Evaluation
200 Independence Avenue, SW
Washington, DC 20201
Phone: 202-260-0384
Project Officers: Peggy Halpern and Nicole Gardner-Neblett

U.S. Department of Labor
Employment and Training Administration
200 Constitution Avenue, NW
Washington, DC 20210
Phone: 202-693-3654
Project Officer: Roxie Nicholson

Prepared by:

MDRC
16 East 34th Street, 19th Floor
New York, NY 10016
Phone: 212-532-3200
Project Director: David Butler


TABLE OF CONTENTS


A. JUSTIFICATION

A1. Circumstances Necessitating Data Collection
  A1.1 Background
  A1.2 Description of the Philadelphia Site in the Hard-to-Employ Evaluation
  A1.3 Research Contribution of the 36-Month Survey

A2. How, by Whom, and for What Purpose Are Data to be Used

A3. Use of Information Technology for Data Collection to Reduce Respondent Burden

A4. Efforts to Identify Duplication
  A4.1 Reasons Why Available Information Cannot Be Used

A5. Burden on Small Business

A6. Consequences if Data Collection is Not Conducted

A7. Special Data Collection Circumstances

A8. Form 5 CFR 1320.8(d) and Consultations Prior to OMB Submission

A9. Justification for Respondent Payments
  A9.1 The Use of Incentives

A10. Privacy

A11. Questions of a Sensitive Nature

A12. Estimates of the Hour Burden of Data Collection to Respondents

A13. Estimates of Capital, Operating, and Start-Up Costs to Respondents

A14. Estimates of Costs to Federal Government

A15. Changes in Burden

A16. Tabulation, Analysis, and Publication Plans and Schedule
  A16.1a Assessment of Data Quality and File Construction
  A16.1b Analysis Plans
  A16.2 Publication Plans and Schedule

A17. Reasons for Not Displaying the OMB Approval Expiration Date

A18. Exceptions to Certification Statement


B. COLLECTION OF INFORMATION USING STATISTICAL METHODS

B1. Sampling
  B1.1 Minimum Detectable Effects for Key Outcomes in Effect Size Units

B2. Procedures for Collection of Information
  B2.1 Procedures for the Administration of the Survey

B3. Maximizing Response Rates

B4. Pilot Testing

B5. Consultants on Statistical Aspects of the Design


LIST OF APPENDICES


A: 36-Month Follow-Up Survey


B: Federal Register Published 60-Day Notice


C: Federal Register Draft 30-Day Notice


D: Statute/Regulation Authorizing Evaluation and Data Collection: Social Security Act, Section 1110


E: References

A. Justification

The Enhanced Services for the Hard-to-Employ Demonstration and Evaluation Project (HtE) seeks to learn what services improve the employment prospects of low-income persons who face serious obstacles to steady work. The project is sponsored by the Office of Planning, Research and Evaluation (OPRE) of the Administration for Children and Families (ACF), the Office of the Assistant Secretary for Planning and Evaluation (ASPE) in the U.S. Department of Health and Human Services (HHS), and the U.S. Department of Labor (DOL).

The HtE project is a multi-year, multi-site evaluation that employs an experimental longitudinal research design to test four strategies aimed at promoting employment among hard-to-employ populations. The four strategies are: 1) an intensive care management and job services program for Rhode Island Medicaid recipients with serious depression; 2) job readiness training, worksite placements, job coaching, job development, and other training opportunities for recent parolees in New York City; 3) pre-employment services and transitional employment for long-term recipients of Temporary Assistance for Needy Families (TANF) in Philadelphia; and 4) two-generational Early Head Start (EHS) services providing enhanced self-sufficiency supports for parents, parent skills training, and high-quality child care for children in low-income families in Kansas and Missouri.

This document requests OMB approval for the 36-month follow-up survey in the Philadelphia site of the HtE evaluation. The survey is intended to expand our understanding of the longer-term effects of employment services provided to hard-to-employ TANF recipients. This wave of data collection builds upon the 15-month follow-up effort, for which the survey instrument was previously approved by OMB (OMB Control Number: 0970-0276).

A1. Circumstances Necessitating Data Collection

This section briefly summarizes the literature on past evaluations of employment programs targeted at hard-to-employ TANF recipients and on the need for such programs in a post-welfare-reform system. We then briefly describe the Philadelphia site of the Hard-to-Employ Evaluation and discuss key components of the evaluation, sources of data, and constructs of interest. The section concludes by highlighting the research contribution of the 36-month data collection effort and instrument.

A1.1. Background

As welfare caseloads nationwide have declined, policymakers, program administrators, and researchers have increasingly focused attention on long-term and hard-to-serve recipients who have not made a stable transition off welfare. While many TANF recipients receive welfare grants for a short period due to a crisis situation or brief unemployment, a substantial proportion of the caseload is composed of hard-to-serve recipients, who often remain on TANF for longer periods. Many of these recipients face significant barriers to employment, such as physical health problems, mental health conditions, substance abuse issues, and limited employment and educational backgrounds.1


Until the 1990s, recipients with serious barriers were often exempt from requirements to participate in employment-related activities. During that decade, partly as a result of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, many states began to extend work requirements to a broader share of the TANF population.2 TANF reauthorization, passed in January 2006, further strengthens the participation mandate, making it crucial that welfare agencies focus on working with hard-to-employ recipients.3 Welfare time limits and economic fluctuations – including the economic downturn from 2001 to 2003 – also increase the need to offer these recipients effective services to help them move from welfare to work.4


Over the past thirty years, numerous studies have provided insight into which programs are most effective in helping recipients move from welfare to work; however, fewer studies have targeted the more disadvantaged recipients receiving welfare. An analysis of the results from 20 welfare-to-work programs targeted at the general welfare population concluded that the programs generally increased earnings about as much for the more disadvantaged groups (defined in this study as long-term welfare recipients with no high school diploma and no recent work history) as for the less disadvantaged groups. However, the more disadvantaged groups earned considerably less than the others. This outcome suggests that it may be necessary to target resources and develop specific programs to meet the needs of the most disadvantaged TANF recipients.5


The National Supported Work Demonstration, implemented in the 1970s, remains one of the most comprehensive evaluations to date of programs targeted at recipients who are harder to serve. The program offered subsidized employment to long-term welfare recipients, and showed particularly large impacts for the most disadvantaged participants within the sample (e.g., very long-term recipients and those without a high school diploma).6


As the welfare system evolved to provide cash assistance only temporarily, the subsidized employment model evolved as well. Facing time-limited welfare and an emphasis on meeting participation rates through employment-related services, administrators shortened the period of subsidized employment and increased the focus on the transition to permanent work. The modified model came to be known as the transitional employment model. Policymakers and practitioners have recently turned to this restructured model as a promising approach to help hard-to-employ TANF recipients leave the welfare rolls. However, experimental research has not yet been conducted to assess the effectiveness of this model and to understand for which subgroups it is most effective.7


Another model often used with hard-to-employ TANF recipients is an intensive case management model focusing on barrier assessment and removal prior to work. While the transitional work model places participants immediately into work, on the assumption that barriers will surface and be resolved through the working process, this second model attempts to identify and treat barriers upfront in order to prepare participants to enter work. However, this model has also not yet been rigorously tested.8


The Philadelphia Hard-to-Employ site tests both the transitional employment model and the upfront barrier-removal model with TANF recipients who have been identified as hard to serve – those who have received TANF for at least a year and/or do not have a high school diploma.9 The evaluation compares each program group with a control group that is not required to participate in any program. It seeks to understand whether the programs improve recipients’ employment, income, earnings, and welfare receipt outcomes, as compared with recipients in the control group. The study will also examine which program model works best for which subgroups of recipients.


A1.2. Description of the Philadelphia Site in the Hard-to-Employ Evaluation

The Philadelphia site of the Hard-to-Employ Evaluation tests two employment models designed to improve the employment and economic outcomes of hard-to-employ TANF recipients. Both models grew out of programs that were already being implemented in Philadelphia and that administrators felt showed promise in helping the more disadvantaged recipients transition off welfare into permanent work. The two models are as follows:

  • The transitional work model, being implemented by the Transitional Work Corporation (TWC), a long-standing provider of transitional work to TANF recipients in Philadelphia. The TWC model begins with a two-week orientation, consisting of intensive job-readiness activities. After the orientation, participants are placed into a transitional job, usually with a government or non-profit agency, for which TWC pays minimum wage for up to six months. TWC identifies on-site work-partners to provide additional guidance and act as on-the-job mentors during the transitional work period. Recipients are required to work 25 hours per week, as well as participate in 10 hours of professional development activities at TWC. These activities may include job search and job readiness instruction as well as GED preparation and other classes. During the transitional work period, TWC staff work with participants to find permanent, unsubsidized jobs. If recipients do not find a permanent job during the six-month period, staff continue to work with them after the period to obtain full-time employment. TWC then provides retention services to participants for six to nine months after placement in a permanent job. In addition, the program offers bonuses of up to $800 for recipients who retain their full-time jobs for up to six months. The services offered to participants in the Hard-to-Employ demonstration are the same as those offered to TANF recipients at TWC who are not part of the study.

  • The program focusing on pre-employment barrier-removal strategies, the Success Through Employment Preparation (STEP) program, is run by the Jewish Employment and Vocational Service (JEVS). This program was developed specifically for this study, based on another program offered to Philadelphia TANF recipients, and serves only study participants. In the STEP program, outreach staff first conduct home visits and carry out any initial barrier-removal efforts needed to bring all participants assigned to this group into the office. Once recipients are enrolled, the program begins with an extensive assessment period to identify participants’ barriers. The results of the assessments are analyzed by specialized staff who then meet with the participant and his/her primary case manager to design a plan to address the participant’s barriers. Treatment can include various life skills classes (including, for example, GED preparation, ESL classes, support groups, and professional development sessions) and counseling with the behavioral health specialists, as well as ongoing case management meetings. If barriers are considered very severe, staff may refer participants to outside organizations for further assessment and treatment. After completing the life skills courses, participants then work with job coaches and job developers to find permanent employment. The timing of this process depends on participants’ individual motivation levels and barriers to employment, but usually does not begin before participants have completed assessment and the team has designed treatment plans. To avoid overlap with the TWC model, participants in the STEP group cannot participate in subsidized employment.


The target population for the study is TANF recipients who have received cash assistance for at least 12 months in their lifetime or who do not have a high school diploma. The study excludes “U” cases10 (two-parent cases, with some exceptions), recipients who are exempt from participation or who have good cause not to participate, and recipients who are currently employed.


The evaluation is a multi-year effort consisting of several components. The three main components are:


A process and implementation analysis focusing on program operations and challenges encountered. The goals of this analysis are to: 1) describe how the Philadelphia programs operate; 2) generate data that will help explain program impacts; 3) provide feedback to HHS and to the sites on program performance; and, 4) assess the feasibility and replicability of the program model. The data sources include data from the Pennsylvania Department of Public Welfare on program group and control group participation, data from the TWC and STEP databases providing more detail on program group participation, observations of program activities, field research (formal interviews and discussions with program administrators, line staff, and other informants), and case file reviews.

An impact analysis examining net effects of the two programs on participants’ employment, education and economic outcomes, participation in employment and training services, receipt of benefits and services such as food stamps and mental health services, housing and household information, health and health care coverage, child care, and child outcomes. This analysis will compare outcomes for participants in the experimental groups (i.e., those randomly assigned to the TWC group or the STEP group) with their counterparts in the control group using data from administrative records, participant surveys collected 15 months after random assignment, and, pending OMB approval, the 36-month survey that is being proposed here.

Data for the impact analysis are collected on the following key constructs:

  • Baseline demographic and descriptive data. Baseline demographic information for the sample is drawn from common information that was collected as part of the study intake procedures. It covers data such as participants’ gender, race, employment history, education history, number of children and age of youngest child, and number of months of TANF receipt at baseline.

  • Participant employment and earnings. MDRC is collecting wage data on individual participants from the National Directory of New Hires. This is a national database maintained by the Office of Child Support Enforcement, and it therefore provides information on earnings from employment both within and outside of Pennsylvania. In addition, administrative records will be supplemented with the survey information collected 15 months after random assignment. The survey that is being proposed for the 36-month data collection effort will also include employment and earnings measures. (The advantage of administrative records is that they are available for all study participants and will provide the most accurate record of reported earnings and employment for the full study sample. Self-reported survey data, in contrast, are available only for a sub-sample of the study population and are subject to the limitations of response rates. The advantage of survey data, however, is that they will capture at least some off-the-books and informal employment and sources of income.11 Moreover, the survey includes other key information about work and employment status that is not readily available from administrative data, such as wages, benefits, work hours, work schedules, occupational complexity, and job mobility. The impact study will use both of these sources to evaluate the effects of the program.)

  • Public assistance receipt. State administrative records, maintained by the Pennsylvania Department of Public Welfare, track public assistance receipt in Pennsylvania for each sample member. This information will be supplemented with information on public assistance receipt from the 15-month follow-up survey. Similar measures are included in the proposed 36-month follow-up survey.

  • Participants’ health, health-coverage, psychological well-being, and child care use; and child outcomes. Key aspects of participants’ health, health care coverage, psychological well-being, child-care use, and child outcomes will be assessed using survey information collected 15 months after random assignment. MDRC is also proposing that this information be collected on the 36-month survey.

  • Program and services participation data. MDRC has obtained participation records from the TWC and STEP programs. These data provide information on participation in the programs, such as the number of hours participated and the types of activities. The TWC data will also include information on earnings from transitional jobs. These data will be supplemented by participation records from the Pennsylvania Department of Public Welfare indicating both program group and control group participation. In addition, the 15-month survey contained measures to obtain information on program and control group members’ services receipt. The proposed 36-month survey also includes measures of services receipt.

A cost study of the programs is also planned for the final report.

Timeline for the current evaluation. Random assignment was conducted from October 2004 to May 2006. The process and implementation analysis is ongoing. A summary of the findings from the implementation analysis will be included in the final report for the evaluation. The 15-month follow-up survey is currently being fielded and is expected to be in the field until fall 2007. The 36-month follow-up survey is scheduled to begin in January 2008 and will be fielded until 2009. An implementation and early impact report is scheduled for 2008, and the final report is expected in 2010.

A1.3. Research Contribution of the 36-Month Survey

The purpose of this document is to request OMB approval of the data collection instrument for the HtE Philadelphia 36-month survey. Data collected with this instrument will allow the study to address the following key research questions:

  • What are the longer-term effects of employment programs targeted at hard-to-employ TANF recipients across a variety of key outcomes, including participants’ employment, education and economic outcomes? How do they affect participants’ household income and receipt of benefits and services such as food stamps?

  • What service receipt differential is experienced by program and control groups over time as a function of program participation? How do these programs affect participation in employment and training services?

  • What are the impacts of these employment services on participants’ health and health care coverage?

  • What are the impacts of these employment services on child care and child outcomes?

The survey is designed to collect data on a wider range of outcome measures than is available through welfare, Medicaid, Food Stamps, and Unemployment Insurance records.

From a policy perspective, understanding the extent to which these two employment models impact the well-being of hard-to-employ TANF recipients has important implications for TANF services aimed at increasing employment and decreasing TANF receipt. To the extent that we find positive impacts on participants’ employment and economic outcomes, the findings from the HtE evaluation may argue in favor of increasing the availability of these programs to TANF recipients.



A2. How, by Whom, and for What Purpose Are Data to be Used

We plan to administer a survey to sample members in the Philadelphia Hard-to-Employ study approximately 36 months after random assignment.12 The data collected at 36 months will be linked with other sources of data already approved for collection (e.g., administrative data, the 15-month survey, and program participation data).

The data will contribute materially to the ability of the Philadelphia Hard-to-Employ evaluation to measure the effectiveness of different strategies to increase employment among hard-to-employ TANF recipients, with the long-term goal of making families better off. Specifically, data collected will enable us to determine whether or not the resources allocated to the two strategies did, in fact, lead to less welfare receipt, increased employment, and higher incomes. The 36-month survey data will also help us determine which approaches are most effective for which subgroups of recipients.

Although administrative records data (such as TANF and food stamp payment records, and earnings and employment records from the New Hires database) will play an important role in the evaluation, they leave some important gaps in knowledge about a range of outcomes that are very relevant to the study. The 36-month survey will yield important data not available through administrative records, providing information on educational attainment, characteristics of jobs held during the follow-up period (such as wage rates, hours worked, and fringe benefits), participation in employment-related services, child care use, and the receipt of child care subsidies. This type of information cannot be obtained with the administrative records that are being collected. The survey also provides information on sources of program and control members’ income—including child support payments, Earned Income Tax Credit (EITC), and disability payments—that are unavailable from the administrative data collected by MDRC. Furthermore, the survey is the only source of information on earnings and other income received by other members of respondents’ households.

While some program effects may be evident in the data collected via the 15-month survey, the 36-month survey will be important for understanding the longer-term effects of the programs on participants’ employment, economic, and other outcomes.

The survey will also provide important information for the study’s cost analysis by detailing the types of activities and work supports the individual participated in or received during the year prior to the survey interview. This information, coupled with data collected from the 15-month survey, will be helpful for establishing the cost of the program interventions. While program records may be a good source of cost data for the two program groups, there is no way to collect similar information on the control group, since in most cases TANF or other programs do not track individuals after they leave public assistance and thus have little information on them. In addition, in our past experience, sites’ program tracking data systems are often incomplete and inaccurate in recording actual activity attendance or service receipt.

Finally, because current and recent welfare recipients are a very mobile population, it is likely that some of our sample members may have moved out of state since the start of random assignment. In these cases, we may not obtain their public assistance receipt outcomes from the administrative data since we are only collecting Pennsylvania public assistance records. The 36-month survey data will help fill in this gap.

36-Month Survey Modules

The 36-month survey comprises several modules. Most of these modules have two purposes: (1) to provide a systematic description of respondents’ employment, wages, wage progression, employment trajectories, and other work experiences; and (2) to measure the differences in employment, wage progression, income, and other outcomes between the program groups and a similar group of respondents who were not eligible for the programs. What follows is a summary of the role that each module will play in the Philadelphia HtE evaluation:

  • Participation in Employment-Related and Education Activities (Section A): To measure the extent of participation in a range of activities (including job search and education and training), for both the program and control groups.

  • Educational Attainment (Section B): To measure the extent to which the program groups attained educational credentials, as compared to the control group.

  • Employment History (Section C): To measure the extent to which the program groups find employment, stay employed in either the same job or different jobs, and increase their wage rates or hours worked (i.e., earnings) or change to jobs with greater benefits or career opportunities or with more acceptable working conditions, as compared to the control group.

  • Marriage, Household Composition, and Child Care (Section D): To measure the marital status and family composition of the program groups, as compared to the control group. To understand the extent to which respondents used child care, received reimbursement and incurred out-of-pocket expenses for this care, and experienced employment instability because of child care issues.

  • Housing (Section E): To measure the housing status of the program groups, as compared to the control group.

  • Health Coverage (Section F): To measure the extent to which respondents have health coverage, funded by employers or other private sources, or funded by government programs like Medicaid and CHIP.

  • Household Income (Section G): To measure income and the primary income sources (such as child support, Supplemental Security Income (SSI), and the EITC) during a one-month period at the time of the interview for the program groups, as compared to the control group.

  • Health Status (Section I): To measure the extent to which program group respondents or their family members have any key health problems, as compared to the control group.

  • Child Outcomes (Section J): To measure school outcomes, type of child care, and problem behaviors for program group children, as compared to control group children.

A3. Use of Information Technology for Data Collection to Reduce Respondent Burden

The use of improved technology has been incorporated into the data collection design wherever possible to reduce respondent burden. When information is available from a centralized, computerized source, such information has not been included in the data collection instruments described in this submission. For example, historical cash assistance (TANF), Food Stamps, and UI data will be obtained through administrative records.

A4. Efforts to Identify Duplication

The survey will focus on information that cannot be found in administrative records or other existing sources. The survey will facilitate the collection of data on, for example, respondents’ experiences in accessing program services, their physical and emotional well-being, their children’s health and behavior problems, and other barriers to employment. These types of information are not available routinely or systematically in program or administrative records.

A4.1 Reasons Why Available Information Cannot Be Used

Comparable information from other sources does not exist for the variables covered in the 36-month survey. MDRC will use administrative data as the primary source for UI-covered earnings, TANF payments, and Food Stamp payments. However, administrative data are not available for most of the other outcomes described earlier and, even when available, present problems. Collection is very costly; many of these data sources are replete with different types of missing records and are maintained by different systems in each state. Further, some administrative records – such as program tracking data – are available only for the program groups and not for the control group. The lack of comparability would make it difficult to estimate differences among the research groups.

A5. Burden on Small Business

Not applicable. All respondents are individuals.

A6. Consequences if Data Collection is Not Conducted

If the 36-month follow-up survey is not conducted, we will not be able to adequately evaluate the longer-term impacts of particular employment programs aimed at hard-to-employ TANF recipients. The analysis of the short- and long-term impacts would be limited because changes in many important outcomes, such as barriers to employment (like depression or substance use), the experience of program services, job quality, job duration, wages, and child well-being, cannot be captured in administrative records data.

If the data are not collected, program operators and policy makers will also receive less information about whether these particular programs can improve outcomes for hard-to-employ TANF recipients. The implementation and process study also depends on the collection of survey data at the 36-month follow-up to obtain information on the services received by members of the program and control groups. This information is critical to fully understanding the service receipt differential between the program and control groups, as all groups receive the same survey instrument.

A7. Special Data Collection Circumstances

No such circumstances.





A8. Form 5 CFR 1320.8(d) and Consultations Prior to OMB Submission

The 60-day Federal Register notice soliciting comments on the Philadelphia HtE 36-month data collection instrument was published in the Federal Register on June 19, 2007 (Volume 72, Pages 33762-33763). Copies of the published 60-day and draft 30-day Federal Register notices are located in Appendices B and C.

The Philadelphia 36-month survey builds upon previous surveys conducted to measure similar participant outcomes of employment services. We have consequently developed the instrument for the 36-month survey based largely upon a 42-month survey conducted for the Employment Retention and Advancement (ERA) Demonstration, which provided services designed to help low-income participants find work, stay in work, and advance. In doing so we can ensure that, to an appropriate degree, the questions we pose allow for useful comparisons between the data resulting from this endeavor and data from other large-scale surveys. The ERA study was also conducted by MDRC, and the 42-month survey was designed to measure very similar outcomes to the HtE Philadelphia 36-month survey. The ERA 42-month survey instrument was previously submitted to OMB and approved on August 15, 2005 (OMB Control No. 0970-0285).

The 42-month ERA survey also builds on previous survey research in other large-scale studies as well as on a prior ERA 12-month survey. Most of the questions in the HtE survey were included exactly as they appeared in the ERA 42-month survey, which in turn drew them directly from the ERA 12-month survey, although some questions were modified at both points.

The other surveys13 from which questions were drawn are: 1) the 12-month client survey designed by MDRC for the ERA evaluation (OMB approval No. 0970-0242); 2) the 36-month client survey designed by MDRC for the Connecticut Jobs First evaluation; 3) the 15-month surveys designed by MDRC for the Hard-to-Employ evaluation (OMB approval No. 0970-0276); 4) the client survey designed by MDRC for the Vermont Welfare Restructuring Project; and 5) the longitudinal surveys designed by MDRC for the Project on Devolution and Urban Change.

Given the breadth and depth of MDRC’s expertise in welfare-to-work research, the Philadelphia HTE 36-month survey and the 42-month ERA survey were, for the most part, developed internally. The 42-month ERA survey builds on the earlier 12-month ERA survey, which was reviewed by Denise Polit of Humanalysis, Inc., Sheldon Danziger of the University of Michigan and Susan Hauan of ASPE. In addition, the 42-month survey was developed and reviewed by senior staff at MDRC (Dan Bloom, Senior Research Associate and Policy Area Director; Stephen Freedman, Senior Research Associate; Gayle Hamilton, Senior Fellow; Richard Hendra, Senior Research Associate; Jo Anna Hunter, Senior Research Associate; and Barbara Goldman, Vice President), ACF (Nancye Campbell, ERA Project Officer; Patrice Richards, Social Science Research Analyst; and Karl Koerper, Director of the Division of Economic Independence) and ASPE (Dale Hitchcock). We also wish to remind readers that in all of the work on which we have drawn to build this survey, we have worked with many leaders in the social policy research field, including people working in academic, government and nonprofit settings. This long tradition of collaborative work will certainly influence the refinement, implementation and analysis of this survey.



A9. Justification for Respondent Payments

Parents who agree to participate in the survey will receive a payment of $15. The purpose of the payment is to improve response rates by decreasing the number of refusals, enhancing respondent retention, and providing a gesture of goodwill to acknowledge respondent burdens. The payments are proposed in addition to the response-rate techniques suggested by OMB that have been incorporated into our data collection effort (described in Section B3), because our experience has shown that small monetary incentives are useful when fielding data collection instruments with hard-to-employ populations as part of a complex study design.

The best statement of current thought on incentives is the Symposium on Providing Incentives to Survey Respondents convened in October 1992 by the Council of Professional Associations on Federal Statistics (COPAFS) for OMB. COPAFS asked Richard Kulka of NORC to write a review of the literature in light of what was learned at the symposium. Kulka concluded, “the greatest potential effectiveness of monetary incentives appears to be in surveys that place unusual demands upon the respondent, require continued cooperation over an extended period of time, or when the positive forces on respondents to cooperate are fairly low.” Kulka also wrote, “there is evidence that increasing the size of a monetary incentive will result in increases in survey response and/or response quality, although there is also consistent evidence that this benefit may rather quickly reach 'diminishing returns', whereby large incentives no longer result in appreciable increases in survey response.”14 We have based the amount of the incentive to be paid for these data collection elements on prior research conducted in this area, and MDRC’s and the survey firm’s prior experience interviewing similar populations.

In addition, more than two decades of survey research support the benefits of offering incentives. Hazard, citing evidence from a 1974 study by Ferber and Sudman, found that the effects of incentives are contingent upon respondent burden (i.e., the effort needed to cooperate), the amount of the incentive, and the economic level of the respondent.15 A study by Berlin, et al. found that incentives increased the response rates of respondents with low levels of literacy, as well as lowering interviewer costs.16 James also found that an incentive was effective in lowering non-response rates and that any incentive lowered the number of interviewer visits per case.17 The Mack et al. study of responders to the Survey of Income and Program Participation (SIPP) found that incentives reduced non-response rates in initial and subsequent interviews, and were particularly effective in reducing non-response rates in poor and African-American households.18 Moreover, the use of incentives has been found to be efficacious for increasing the response rates of in-home and sensitive subject matter surveys.19

Finally, our prior experience fielding data collection instruments with economically disadvantaged and TANF-receiving populations also supports the evidence that incentives increase response rates. For example, in a follow-up interview with Job Corps applicants, experimental evidence showed that incentives increased response rates and greatly increased search efficacy. Experience in these and similar studies of disadvantaged populations suggests that incentives can help convince reluctant respondents to participate.20

We believe that the studies summarized here, and MDRC’s previous experiences with fielding surveys with low-income populations, make a strong case for the use of respondent payments for completing the survey.

A9.1 The Use of Incentives

To be effective, the amount of an incentive must fit the burden of the survey. We have based the amount of the incentive for the 36-month data collection effort on what was previously approved by OMB and paid to HtE sample members for their participation in the 15-month follow-up, on prior research, and on MDRC’s and the survey firm’s prior experience interviewing similar populations. We propose that respondents who agree to participate in the 36-month survey receive a payment of $15, in the form of a check.

This amount reflects current practice in fielding surveys using similar instruments. For example, the proposed incentive is similar in size to the incentive found to be effective for the Project on Devolution and Urban Change survey efforts, in which a $20 incentive was given to respondents who completed a 90-minute interview in 2001.

The instrument that will be used to collect follow-up data from HtE sample members has unique aspects that make administration difficult and threaten response rates. We are therefore requesting clearance to offer a small incentive to all sample members who complete the survey. Aspects of the data collection effort that make it more difficult to obtain high completion rates are:

  • The survey includes questions that could be perceived as intrusive and therefore could make respondents uncomfortable (e.g., questions about their mental health).

  • The subject matter of the interview is not intrinsically interesting to respondents. Moreover, many participants may have negative feelings about the other services received that are of interest, such as welfare, Medicaid, job training, etc.

  • Other difficulties in administering the survey come from the population itself. Educationally and economically disadvantaged groups, such as those in the HtE sample, have been found to be more difficult than the general population to convince to participate in surveys.

Thus, we are requesting clearance to offer small incentives to those who complete the survey to obtain response rates that will yield credible results, to avoid the bias that could result from selective non-response, and to reduce item non-response. We are aiming to achieve an 80 percent survey completion rate for the follow-up survey. Even with the best data collection practices, it would be very difficult, if not impossible, to obtain such a high completion rate without incentives.





A10. Privacy

MDRC and the survey firm – HumRRO – will protect the privacy of participants in the 36-month survey. The procedures for assuring and maintaining privacy will be consistent with the provisions of the Privacy Act and with the ethical guidelines of professional organizations. Interviewers will attempt to conduct the interview at a time and place that allows the utmost privacy for respondents. Respondents will receive information about privacy protections at the outset of the interviews. They will be informed that all of the information they provide will be kept private to the extent permitted by law and that study results will be presented only in aggregate form. Participation in the survey will be voluntary, and at the time of data collection for the 36-month follow-up, participants can choose not to participate.

MDRC’s and HumRRO’s in-house records of names, addresses, Social Security numbers, and tracing information for all sample members will not be attached to interview or assessment data and will not be made available to anyone outside appropriate staff of MDRC and HumRRO. All records identifying respondents will be kept in locked storage at MDRC, and respondents will be identified solely by a code number. Any coding, data entry, and analysis requiring identification of individuals or households will use code numbers only, and a password will be required to access the data file. No data will ever be reported in such a way that individuals can be identified.

The importance of maintaining privacy will be emphasized during interviewer training, and any interviewer who knows a respondent will not be permitted to interview that respondent. All staff, including coders and computer programmers, will be required to sign a privacy pledge.

While conducting the survey, the interviewer may observe or become aware of situations where there is potential harm to the respondent. Some areas of inquiry on the survey address sensitive issues; thus, completion of this survey may increase the stress experienced by already at-risk study participants. An introductory script will inform all study participants that information may be revealed to the appropriate authorities if the person appears to be a serious threat to anyone. MDRC will work with the survey contractor to develop a process for reporting potentially threatening situations to the appropriate authorities.   

In addition, although every effort will be made to keep research records private, there may be times when federal or state law requires the disclosure of such records, including personal information. This is very unlikely, but if disclosure is ever required, the research team will take all steps allowable by law to protect the privacy of personal information.

A11. Questions of a Sensitive Nature

Questions in all components of the survey are potentially “sensitive” for respondents. Respondents are asked about highly personal topics, some even stigmatizing. The questions we have included were selected in part because they have been widely used in previous research and are respected among experts. Moreover, all will be pilot tested prior to the survey’s full implementation, and if problems arise with any specific items, their inclusion will be reconsidered. (Because the sample for the pilot test will include only nine study participants, our understanding is that this effort does not require a separate OMB review and approval process, and these hours are not included in our burden estimates; the participants selected for the pilot test will be separate from the respondents included in the actual survey effort.) Also, all survey forms will contain instructions that explain questions before they are posed. Finally, respondents will be informed by program staff prior to the start of the interview that their answers are private (except in certain cases, outlined in Section A10), that they may refuse to answer any question, that results will be reported only in the aggregate, and that their responses will not have any effect on any services or benefits they or their family members receive. As mentioned in Section A10, MDRC and its contracted survey firm employ numerous safeguards to ensure privacy.

A12. Estimates of the Hour Burden of Data Collection to Respondents

Participation in the survey at the 36-month follow-up is completely voluntary. No sanction or penalty will be applied to respondents receiving state or federal assistance who choose not to provide information. Respondent payments, as described in Section A9, will be offered to each sample member who participates in the survey.

The estimated response burden was calculated based on the time budgeted for the administration of the survey. Assuming a response rate of 80 percent, the maximum number of respondents for the survey is expected to be 1,555. This number was multiplied by the average length of the survey (45 minutes) and divided by 60 to determine the total burden in hours (1,555 × 45 / 60 = 1,166.25, reported as 1,166). The response burden breakdown is shown in the table below.


Instrument: Philadelphia 36-Month Participant Survey
Expected Number of Respondents: 1,555
Number of Responses per Respondent: 1
Average Burden per Response: 45 minutes (0.75 hours)
Total Burden (Hours): 1,166

TOTAL PERSON HOURS: 1,166
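For transparency, the burden arithmetic above can be reproduced directly. The following minimal sketch simply restates the computation in code; the respondent count and survey length come from this section, and rounding 1,166.25 to 1,166 reflects the table’s reported total.

```python
# Reproduces the respondent burden arithmetic described in Section A12.
expected_respondents = 1555       # maximum expected completes, assuming an 80% response rate
responses_per_respondent = 1
minutes_per_response = 45         # average survey length

total_burden_hours = (expected_respondents * responses_per_respondent
                      * minutes_per_response) / 60
print(f"Total burden: {total_burden_hours:.2f} hours")  # 1166.25, reported as 1,166
```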



A13. Estimates of Capital, Operating, and Start-Up Costs to Respondents

Not applicable. The 36-month follow-up data collection will be conducted by a subcontracted survey firm.

A14. Estimates of Costs to Federal Government

ACF, ASPE, and DOL are funding these activities. The estimated cost for designing, administering, processing, and analyzing the 36-month follow-up data is $780,000. On a year-by-year basis, these expenses are estimated to be:

Year           Cost

2007           $140,000

2008           $320,000

2009           $320,000



A15. Changes in Burden

The 36-month follow-up is a new data collection effort and does not involve a change in burden.

A16. Tabulation, Analysis, and Publication Plans and Schedule

A16.1a Assessment of Data Quality and File Construction

Assessing and monitoring the quality of the data from the survey. The follow-up survey will go through a rigorous series of tests for completeness and quality. Staff at the survey firm will review the initial cases completed by each interviewer as well as perform occasional spot checks after that. Editing/coding staff will review questionnaires for quality and consistency after this initial period. Interviewers will be apprised of any problems found and will be retrained as needed. During the coding of data, coder reliability checks will be undertaken repeatedly to verify that coding procedures are being followed correctly. Data entered into computer files will be assessed for missing information, outliers, and other data problems according to standard procedures. If necessary, questionnaires will be recoded. The survey firm will deliver to MDRC data sets of completed cases at agreed-upon intervals, along with marginal frequencies. The data and frequencies will be reviewed for outliers, unusual distributions and inconsistencies between data items.

Data file construction. Data from the 36-month survey will then be merged with data from other sources: data routinely collected by welfare departments; administrative records relating to welfare receipt, earnings, and program tracking (if available); and data collected from the 15-month follow-up survey.

Tabulation. None of the tables will present individual-level data; all results and sample characteristics will be presented in aggregate form.

A16.1b. Analysis Plans

As previously indicated, the HtE evaluation in Philadelphia incorporates a random assignment analytic design. We offer a brief outline of how we will address the project’s long-term analytical goals, with a focus on how the follow-up survey data will be useful in that process.

Estimating overall impacts. Although the use of a randomized design will ensure that simple comparisons of experimental and control group means will yield unbiased estimates of program effects, the precision of the estimates will be enhanced by estimating multivariate regression models that control for factors at baseline that also affect the outcome measures. Such impacts are often referred to as “regression-adjusted” impacts. Examples of factors that may affect outcomes are the sample members’ age, number of children, prior employment, and baseline barriers to employment.

Most of the analyses of overall impacts will use estimation models that, in their basic form, can be expressed as follows:

(1) Y_ij = F(T_i, X_ni, U_ij)

where

Y is a vector of outcomes (e.g., post-random assignment employment, earnings, welfare receipt, and children’s behavioral adjustment and early literacy and math skills)

T is the treatment variable indicating whether the individual is a member of the program group

X is a vector of baseline characteristics to be controlled (e.g., the sample member’s baseline education level or prior employment)

U is a vector corresponding to the residual (error) term

i is the subscript designating the individuals in the sample

j is the subscript designating the various outcomes of interest

n is the subscript designating the various personal characteristics to be controlled.
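For illustration, a regression-adjusted impact of the form in equation (1) might be estimated as in the following minimal sketch, which assumes a linear specification. The file and variable names (analysis_file.csv, earnings, program, age, num_children, prior_employment) are hypothetical placeholders, not the evaluation’s actual data layout or estimation code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative analysis file: one row per sample member, with a random assignment
# indicator (program = 1 for the program group) and baseline covariates.
df = pd.read_csv("analysis_file.csv")

# Regression-adjusted impact: the coefficient on `program` estimates the program
# effect on the outcome, with baseline covariates included to improve precision.
result = smf.ols(
    "earnings ~ program + age + num_children + prior_employment", data=df
).fit()
print(result.params["program"], result.bse["program"])  # impact estimate and standard error
```

The same model would be re-estimated for each outcome of interest; with many outcomes, the per-test significance level could be adjusted, for instance by a Bonferroni correction, as discussed below.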

It is useful to arrange the question items into two groups. First are objective questions about experiences in the period between random assignment and the interview – questions about jobs, employment and training activities, income, and earnings. This category includes both economic and non-economic outcomes. Second are subjective questions designed to measure respondents’ knowledge and perceptions of their work environment. As noted earlier, the justification for these outcomes is that respondents’ perceptions are important to assessing the treatment difference created by the program.

Since we will analyze multiple outcomes, we will explore the possibility of adjusting estimates to account for this fact, for example, by using a Bonferroni correction (Darlington, 1990) or an omnibus test (such as those discussed in Cooper & Hedges, 1994). We will also examine the pattern of impacts across multiple outcomes to determine whether hypotheses regarding the expected impacts of the intervention are supported across multiple outcomes.

Program/Control Group Differences in Economic Outcomes. Economic outcomes include data on earnings, employment, job retention, wages, wage progression, and income. Each of these factors will be analyzed using the impact model outlined above. In addition to simple "ever employed" and "number of months employed" measures from the employment and earnings module, a range of variables will be constructed to measure job retention, job quality, and advancement. The construction of "joint outcomes" allows us to examine experimentally the program's effects on job retention. Simply comparing the number of continuous months in the same job for the program and control groups would not be an experimental comparison, since it only uses individuals from both groups who were employed since random assignment. Creating joint outcomes allows us to use the entire program group and the entire control group. For example, using the following three outcomes – "ever employed since date of random assignment and employed at survey interview," "ever employed since date of random assignment and no longer employed at survey interview," and "never employed since date of random assignment"—allows us to put the entire program group and the entire control group into one of these three categories. This type of analysis has been conducted in several recent evaluations, such as the National Evaluation of Welfare-to-Work Strategies (NEWWS), the Minnesota Family Investment Program (MFIP), and the Self-Sufficiency Project (SSP), to examine impacts on employment duration and stability.
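Continuing the hypothetical sketch above, the three mutually exclusive “joint outcomes” described in this paragraph could be constructed as follows; the two 0/1 survey indicators are assumed names, not actual survey variables.

```python
import pandas as pd

# df is the analysis file from the earlier sketch. Assumed 0/1 indicators:
# ever_employed (any employment since random assignment) and employed_at_interview.
def joint_employment_outcome(row: pd.Series) -> str:
    # Every sample member, program or control, falls into exactly one category,
    # which preserves the experimental comparison.
    if row["ever_employed"] == 0:
        return "never employed since random assignment"
    if row["employed_at_interview"] == 1:
        return "ever employed and employed at interview"
    return "ever employed but no longer employed at interview"

df["joint_outcome"] = df.apply(joint_employment_outcome, axis=1)
# Each category becomes a 0/1 outcome for the impact model sketched earlier.
joint_dummies = pd.get_dummies(df["joint_outcome"])
```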

Program/Control Group Differences in Non-Economic Outcomes. Non-economic outcomes include data on participation in education and training activities, barriers to employment, work environment, housing, and child outcomes.

These additional non-economic outcome measures will enrich the evaluation by increasing the comprehensiveness of the information available for assessing the programs’ overall effects. They are significant, we believe, because they can provide policymakers with information on the effects of the interventions on people’s lives that is not captured by, or easily seen in, the more standard employment, earnings, and welfare measures. Thus, we will use the impact findings for these measures to provide a context for interpreting the programs’ basic earnings and welfare impacts.

For some of these analyses, we will use individual survey items or pre-existing scales and measures. In some cases, however, we may create scales using multiple items. In building these scales, we would use standard social science methodologies.21 For example, the first step would be to identify the set of items in the survey that were intended to address the same broad topic, such as skills required on the current or most recent job. We would then examine inter-item correlations for the full set of questions designed to measure this outcome and conduct a factor analysis to determine which items in the set “go together” and appear to be measuring the same underlying construct. Next, we would estimate Cronbach's alpha to assess the reliability of the scale. We would add and delete items as appropriate to maximize Cronbach's alpha. After selecting the final set of items for a given scale, we would then produce an overall scale score for each respondent by summing her scores on each of the items in the scale. The overall scale scores for all respondents would then be used as an outcome measure for the impact analysis. We have used this general approach successfully in several previous evaluations, especially the more recent evaluations with child outcomes data.22
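The scale-building steps described above can be illustrated with a minimal sketch of the Cronbach’s alpha computation and scale scoring; the item names are hypothetical, and the 0.70 threshold is a common rule of thumb rather than a criterion stated in this document.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of survey items (one column per item)."""
    k = items.shape[1]
    sum_of_item_variances = items.var(axis=0, ddof=1).sum()
    variance_of_total = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_of_item_variances / variance_of_total)

# Hypothetical items intended to measure skills required on the current job.
skill_items = df[["skill_q1", "skill_q2", "skill_q3", "skill_q4"]]
print(cronbach_alpha(skill_items))

# If reliability is acceptable (commonly alpha >= 0.70), sum the items into a
# scale score and use it as an outcome measure in the impact analysis.
df["skill_scale"] = skill_items.sum(axis=1)
```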

Subgroup analyses. Previous evaluations of welfare-to-work programs have found that, in some cases, impacts are larger for certain types of respondents based on their demographic characteristics or circumstances at baseline. The MFIP program, for example, produced larger earnings impacts for recipients living in public housing than for those in private housing.23 It is easy to imagine that particular employment strategies might also be more effective for certain types of participants, such as those with relatively modest barriers to employment. For this reason, it is essential to go beyond the examination of overall impacts of the Philadelphia HtE programs to examine impacts among subgroups defined by level of disadvantage and other characteristics. For example, impacts might differ for individuals according to their level of education at program entry, prior work experience, number and ages of children, and prior welfare receipt. Exhibit B1.1, showing minimum detectable effects for various sample sizes, indicates whether the impacts can be estimated with precision when the sample is split into various subgroups. This information will guide our analyses of subgroups.

An analysis of subgroup impacts involves estimating a program’s effects for each subgroup separately, using the regression-adjusted model mentioned earlier, and then comparing the two impacts. The standard errors of each of the impacts are used to assess whether the two impacts are statistically significantly different from each other. Subgroup impacts estimated in this way are referred to as unconditional subgroup impacts, because they show the gross effect of a particular characteristic, such as education level, on a program’s impacts. As an example, earnings impacts in a program may be lower for individuals without a high school degree, as compared with their more educated counterparts. However, this difference may arise not because of education per se, but because less educated individuals are also less likely to have recent work experience, which also affects how they may benefit from the program. In this case, it would be of interest to estimate conditional subgroup impacts, or impacts by education level that also control for prior work experience. These impacts would be obtained by pooling the sample and estimating one impact model, in which education level and prior work experience are interacted with all of the other variables in the model and with the program group dummy variable (T in the previous model). For example, if the coefficient on the interaction of program status and education is reduced in size once the interaction of program status and prior work experience is included, we can conclude that some part of the effect of education on the program’s impacts is due to its correlation with prior work experience.
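A simplified sketch of the unconditional and conditional subgroup estimates described above follows. For brevity it interacts the subgroup indicators only with the program dummy rather than with all covariates, and the variable names are hypothetical.

```python
import statsmodels.formula.api as smf

# Unconditional subgroup impacts: estimate the impact model separately by subgroup.
for level, subgroup in df.groupby("no_hs_diploma"):
    impact = smf.ols("earnings ~ program + age", data=subgroup).fit()
    print(level, impact.params["program"], impact.bse["program"])

# Conditional subgroup impacts: pool the sample and interact program status with
# both education and prior work experience, so the education interaction is
# estimated net of prior work experience.
pooled = smf.ols(
    "earnings ~ program * no_hs_diploma + program * prior_employment + age",
    data=df,
).fit()
print(pooled.params["program:no_hs_diploma"])
```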

Nonexperimental analyses. Several types of nonexperimental analyses will complement the estimation of Philadelphia HtE’s impacts. One such method, which attempts to recreate an experimental comparison, is "propensity score matching," in which impacts are estimated by comparing outcomes for participants in the program groups with outcomes for "matched" individuals from the control group. Finally, the survey data will provide useful descriptive information on the circumstances of hard-to-employ welfare recipients and low-wage workers. For example, using data for the control group, we can examine the extent to which hard-to-employ welfare recipients engage in employment services and training on their own, as well as the types of jobs they hold and the levels of wage growth they achieve.
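
As an illustration of the matching approach, the sketch below pairs each program-group member with the control-group member whose estimated propensity score is nearest. Everything in it is a hypothetical stand-in: the covariates, the data, and the simple one-to-one matching rule. An actual analysis would draw on the rich baseline measures available for the study sample and would assess the quality of the matches.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000

# Hypothetical baseline covariates, a participation process that depends
# on them, and a simulated earnings outcome.
age = rng.normal(30, 6, n)
prior_earn = rng.normal(8000, 3000, n)
X = np.column_stack([age, prior_earn])
p_true = 1 / (1 + np.exp(-(0.05 * (age - 30) + 0.0002 * (prior_earn - 8000))))
treat = rng.random(n) < p_true
earnings = 10000 + 0.5 * prior_earn + 1200 * treat + rng.normal(0, 2000, n)

# Step 1: estimate each person's propensity score -- the probability of
# being in the program group given baseline characteristics.
pscore = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# Step 2: match each program-group member to the control-group member
# with the closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(pscore[~treat].reshape(-1, 1))
_, match_idx = nn.kneighbors(pscore[treat].reshape(-1, 1))

# Step 3: compare outcomes for participants and their matched controls.
matched = earnings[~treat][match_idx.ravel()]
print(f"Matched impact estimate: {earnings[treat].mean() - matched.mean():.0f}")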

A16.2 Publication Plans and Schedule

The follow-up survey will be administered to participants approximately 36 months after they were randomly assigned. Fielding is expected to begin as early as January 2008 and end in 2009.

Findings from the 36-month follow-up survey will feed into the impact, implementation, and cost analyses, and will be published in a series of reports based on those analyses. Preliminary results will be available in 2009, with a final report in 2010, as outlined in section A1.2.

A17. Reasons for Not Displaying the OMB Approval Expiration Date

Not applicable. We intend to display the OMB approval number and expiration date on all data collection instruments and materials.

A18. Exceptions to Certification Statement

Not applicable. We have no exceptions to the Certification Statement.

1 For example, one study synthesized results from a common survey that was administered to welfare recipients in six states in 2002. It found that 40 percent of recipients across the six states lacked a high school diploma or GED, 21 percent had a physical health limitation, 30 percent met the diagnostic criteria for major depression or were experiencing severe psychological stress, and 29 percent had a child with health problems (Hauan and Douglas, 2004).

2 Bloom and Butler, 2007.

3 TANF reauthorization strengthens the participation mandate in several ways. It adjusts the caseload reduction credit – by which states can reduce their minimum required participation rate if they reduce their caseload – so that the baseline year against which the current caseload is compared is 2005, rather than 1995. The bill also requires states to count families receiving TANF through separate state programs – programs that receive no federal TANF funding but do receive state funding that counts toward the state’s MOE requirement – toward the participation rate. In addition, the bill calls on HHS to disseminate more explicit regulations on countable activities, and requires states to implement stricter internal controls to verify reporting procedures (Greenberg and Parrot, 2006).

4 According to the National Bureau of Economic Research, the economy went into recession beginning March 2001. Employment declines lasted through August 2003.

5 Michalopoulos and Schwartz, 2000.

6 MDRC Board of Directors, 1980. The National Supported Work Demonstration showed different results for different subgroups: for example, it did not show significant results for ex-offenders, but did show significant results for welfare recipients.

7 However, the non-experimental research into transitional work is promising. For example, a survey of transitional jobs programs found that they were successful at finding permanent jobs for 50 to 75 percent of hard-to-serve participants who began the programs (Richer and Savner, 2001). In addition, a study of six transitional work programs found that placement rates into permanent, unsubsidized employment for participants who completed the programs ranged from 81 to 94 percent (Kirby et al., 2002). See also Pavetti and Strong, 2001.

8 MDRC’s Employment Retention and Advancement project has one site – Minneapolis, MN – that tests an upfront barrier-removal, intensive case management strategy, although participants in this program may be placed into transitional employment. Results of this test are forthcoming.

9 The transitional employment model being studied in Philadelphia is similar to the transitional employment model being tested in the New York site for this project; however, the New York program is targeted at ex-offenders, rather than TANF recipients.

10 A family meets the criteria for the unemployed parent category if it is a two-parent household with at least one common child; at least one parent is able to work; and both parents are unemployed, or at least one parent has work in which the net earned income of the budget group, after allowable deductions, is less than the family size allowance for the budget group, or at least one parent has “on the job training” in a project that is approved or recommended by the Job Service of the Road to Economic Self-sufficiency through Employment and Training (Pennsylvania’s TANF program).

11 In order to increase the accuracy of responses about “off-the-books” income, study participants will be told, prior to administering the survey, that their responses will be kept private to the extent permitted by law and will not be shared with other agencies or used to verify their eligibility for services. The accuracy of the information collected will also depend on the interviewers’ ability to develop rapport with respondents, a strategy that has been used successfully in ethnographic studies collecting information about low-income mothers’ off-the-books employment and other informal sources of income (e.g., Edin & Lein, 1997). Furthermore, even though some study participants may withhold information about informal sources of income, we do not expect this to differ across the program and control groups; thus, any estimated impacts of the intervention are unlikely to be biased.

12 Some sample members might be surveyed up to a few months later than their 36-month post-random assignment date.

13 Copies of all surveys referenced are available upon request.

14 Kulka, 1992.

15 Hazard, 2002.

16 Berlin et al., 1992.

17 James, 1997.

18 Mack, Huggins, Keathley, & Sundukchi, 1998.

19 Hazard, 2002.

20 Moffitt, 2004. 

21 For a discussion of these methods, see DeVellis, 1991.

22 See Gennetian & Miller, 2000.

23 Miller et al., 2000.



