

SUPPORTING STATEMENT FOR

THE SUBMISSION FOR THE EXTENSION OF THE

INFORMATION COLLECTION FOR

THE EVALUATION OF THE TRADE ADJUSTMENT ASSISTANCE PROGRAM


OMB CONTROL NO. 1205-0460


1. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection.  Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.


An extension of the approved Information Collection Request (ICR) for the Impact Evaluation of the Trade Adjustment Assistance (TAA) Program is needed in order to complete data collection activities for this study. Previously, the Office of Management and Budget (OMB) issued a Notice of Action (NOA) (ICR Reference Number 200606-1205-009) on November 15, 2006 authorizing the collection of information for an evaluation of the TAA program. The NOA approved the following data collection activities: 1) administration of a baseline and follow-up survey of individual TAA participants and comparison group members; 2) collection of administrative records from the TAA and Unemployment Insurance (UI) systems; 3) collection of qualitative data through semi-structured interviews with state- and local-level TAA, Workforce Investment Act (WIA), and rapid response staff, during site visits to program offices; and 4) the administration of a survey of TAA Coordinators in all local areas. The data were to be used to support the estimation of net program impacts (using a comparison group methodology) as well as to learn about programmatic and administrative practices that may have a bearing on the performance of the program.


The expiration date for the ICR, as identified in the NOA, was November 30, 2009, by which time the data collection for this evaluation was expected to be concluded. However, the project schedule was substantially delayed, first by an unusually lengthy process (18 months) before the initial OMB clearance was obtained, and then by a protracted period (22 months) spent acquiring the states’ administrative TAA and UI records needed to draw the participant and comparison group samples. Many of the 25 randomly selected sample states were reluctant to provide the data extracts, citing confidentiality concerns and workload issues. Numerous rounds of negotiation with the states were required to address these concerns before memoranda of understanding (MOUs) with the states were in place and data were provided. Instead of the anticipated six months after OMB approval, the process of data acquisition lasted nearly two years.


All 25 states in the original sample, plus one alternate state, did eventually provide the administrative records on which the selection of treatment and comparison group members was based, protecting the evaluation’s goal of generating impact estimates that are generalizable to the TAA program nationally. However, because of the delays cited above, an extension of the approval for the ICR is now needed in order to obtain follow-up data on employment and earnings outcomes. A single follow-up survey at 25 months will be conducted, as approved by OMB (in an NOA, ICR Reference Number 2008-12-1205-001, dated December 17, 2008), rather than the two follow-up surveys (at 15 and 30 months) originally planned. The burden for this data collection will therefore be lower than proposed under the first ICR, even though the sample has been slightly expanded to ensure a sufficient number of responses, given lower-than-anticipated response rates for some subgroups.


The extension of the data collection period is needed in order to complete this impact evaluation and to be responsive to the Office of Management and Budget’s Program Assessment Rating Tool (PART) review, which cited the need for updated, high-quality information about the TAA program’s effectiveness (to be used in the development of legislation, budget proposals, regulations, administrative guidance, and technical assistance). The data to be collected from the follow-up survey, states’ administrative records, and subsequent site visits, if this extension is approved, are critical to developing estimated impacts of the program and to understanding how the program has been administered.


Background Information:


Section 172 of WIA is the authority by which the Employment and Training Administration (ETA) will collect the information proposed in this evaluation.


Since 1962, TAA has represented a federal commitment to compensate workers who have suffered a trade-related job loss, and to provide them with services that help them adjust to changes in market circumstances. The current TAA program provides training, income support, and other reemployment and supportive services to workers who lose their jobs or have their work hours or salary reduced because of increased imports or shifts in production to foreign countries.

The Trade Adjustment Assistance Reform Act of 2002 (Pub. L. 107-210) reauthorized the TAA program for five years and amended the prior law in a number of ways. For example, it consolidated TAA and North American Free Trade Agreement Transitional Adjustment Assistance programs into a single program, broadened eligibility to include secondarily affected workers, and created two new benefits: the Health Coverage Tax Credit (HCTC) and Alternative TAA for eligible workers 50 years old and above. The law also included provisions designed to change how the program is administered, such as the requirement that states must ensure that rapid response assistance as well as appropriate core and intensive services are made available.


Given the program changes, the size of the TAA program, and its central role in federal efforts to help and compensate trade-affected workers, a rigorous study of current TAA operations and their effects on participants’ employment-related and other outcomes is an important priority. The most recent comprehensive study of the TAA program (Corson et al. 1993) was conducted using samples from the late 1980s. However, because of changes in the TAA program, the TAA caseload, and labor market conditions, results from that study may no longer apply to the TAA program as it operates today.

The TAA evaluation has two main parts: an impact study and a process study. The impact study is structured to address the following research questions that are potentially of interest to policy makers:


  • What is the overall impact of TAA on participants’ employment-related outcomes?


  • Do program impacts differ for subgroups of participants defined by their demographic characteristics (such as age, education level, pre-layoff wage, and industry)?


  • What are program impacts for participants who receive specific TAA services and benefits (such as those who receive training, the HCTC, and Alternative TAA)?


  • Do impacts vary for participants in states and local areas with different program features (such as the extent of program integration within One-Stop Career Center Systems and the ability of the TAA program to deliver services in a timely manner)?


  • How do program impacts differ depending on TAA petition features (such as type of petitions, number of affected workers, certification determination processing time, and industry)?


  • What are program take-up rates for all potentially eligible workers and for subgroups of potentially eligible workers?


To meet these analysis objectives, the evaluation uses a comparison group methodology in which TAA and comparison group samples were selected using a two-stage, stratified sample design. In the first stage, 25 states were randomly selected within geographic strata, with probabilities proportional to the expected number of TAA participants in each state (see Section B of this Supporting Statement, below); this state sample was later revised in light of the lengthy process associated with the original OMB clearance. Because all 25 originally selected states eventually provided data, as did the one replacement state, the resulting sample now includes 26 states. Both the impact and the process analyses are being conducted in these states so that the study can link data sources and findings from these analyses.
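
To make the first-stage selection concrete, the following sketch illustrates probability-proportional-to-size (PPS) sampling within geographic strata, the general technique described above. It is a minimal illustration only: the strata, state names, and participant counts are hypothetical placeholders, not the study’s actual frame (which is described in Section B).

```python
import random

# Hypothetical frame: each record is (state, stratum, expected TAA participants).
# These figures are illustrative placeholders, not the study's actual frame.
frame = [
    ("State A", "Northeast", 4200), ("State B", "Northeast", 1300),
    ("State C", "South",     6100), ("State D", "South",     2500),
    ("State E", "Midwest",   5400), ("State F", "Midwest",    900),
    ("State G", "West",      3700), ("State H", "West",      1600),
]

def pps_sample(records, n_per_stratum, seed=12345):
    """Draw states within each stratum with probability proportional
    to the expected number of TAA participants (successive-draw PPS
    without replacement)."""
    rng = random.Random(seed)
    strata = {}
    for state, stratum, size in records:
        strata.setdefault(stratum, []).append((state, size))
    selected = []
    for stratum, states in strata.items():
        pool = list(states)
        for _ in range(min(n_per_stratum, len(pool))):
            total = sum(size for _, size in pool)
            r = rng.uniform(0, total)
            cum = 0.0
            for i, (state, size) in enumerate(pool):
                cum += size
                if r <= cum:
                    selected.append((stratum, state))
                    pool.pop(i)  # remove the selected state and redraw
                    break
    return selected

print(pps_sample(frame, n_per_stratum=1))
```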


Two samples of TAA and comparison group workers were selected from the 26 states: 1) workers potentially eligible for TAA, sampled from lists of workers that certified firms provide to state agencies, and 2) workers who received a first Trade Readjustment Allowance (TRA) payment after exhausting their UI benefits. A matched comparison sample of UI claimants was drawn for each of these “treatment” groups using UI claims data from the same states, with propensity scoring methods used to select the comparison samples. The research sample consists of 24,000 workers in the certified-worker sample, 12,000 in the TRA-beneficiary sample, and 72,000 in the comparison sample. The study first used UI claims data to select a comparison group sample twice as large as the TAA sample; the comparison sample will then be refined by re-matching comparison group members to TAA group members using richer matching variables from the baseline interview data and other sources.
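
As an illustration of the matching step, the sketch below estimates propensity scores with a logistic regression and pairs each treatment case with the nearest-scoring comparison case. This is a simplified, assumed workflow (using scikit-learn and synthetic data); the study’s actual model specification, matching variables, and matching algorithm may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative data: rows are workers; columns are standardized matching
# variables (e.g., age, base-period earnings, weekly benefit amount).
X_treat = rng.normal(0.3, 1.0, size=(50, 3))   # TAA sample (hypothetical)
X_comp = rng.normal(0.0, 1.0, size=(100, 3))   # UI-claimant pool (hypothetical)

X = np.vstack([X_treat, X_comp])
y = np.concatenate([np.ones(len(X_treat)), np.zeros(len(X_comp))])

# Step 1: model the probability of being a TAA case given the covariates.
model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]
p_treat, p_comp = scores[: len(X_treat)], scores[len(X_treat):]

# Step 2: nearest-neighbor matching on the propensity score, without replacement.
available = set(range(len(p_comp)))
matches = {}
for i, p in enumerate(p_treat):
    j = min(available, key=lambda k: abs(p_comp[k] - p))
    matches[i] = j
    available.remove(j)

print(f"Matched {len(matches)} treatment cases to comparison cases.")
```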


Program impacts will be estimated by comparing the average outcomes of those in the treatment and comparison groups. The evaluation will use key outcome measures for the impact analysis from two data sources: 1) administrative UI claims and earnings data, and 2) telephone interviews conducted with a random subset of sample members at baseline and 25 months later (rather than the 15- and 30-month follow-up surveys originally planned). The study will examine impacts on the following key outcomes that are hypothesized to be affected by TAA participation: 1) reemployment services; 2) education and training; 3) employment and earnings; 4) receipt of UI benefits; 5) receipt of other welfare benefits; 6) non-labor market outcomes, such as health status, health insurance coverage, and mobility; and 7) changes in quality of life following job loss, in terms of earnings, employment, and non-labor market outcomes compared to the pre-separation period.
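
A minimal sketch of the impact calculation follows: a difference in mean outcomes between the treatment and comparison groups, alongside a regression-adjusted estimate that controls for baseline covariates. All variable names and data are hypothetical; the study’s actual estimator and control variables are those described elsewhere in this Supporting Statement.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical analysis file: treatment indicator, baseline covariates, outcome.
treat = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
x1 = rng.normal(size=n)  # e.g., standardized pre-layoff earnings
x2 = rng.normal(size=n)  # e.g., local unemployment rate
earnings = 100 + 5 * treat + 8 * x1 + 3 * x2 + rng.normal(0, 10, size=n)

# Unadjusted impact: difference in mean outcomes between the two groups.
unadjusted = earnings[treat == 1].mean() - earnings[treat == 0].mean()

# Regression-adjusted impact: coefficient on the treatment indicator in an
# OLS model that also controls for the baseline covariates.
design = np.column_stack([np.ones(n), treat, x1, x2])
beta, *_ = np.linalg.lstsq(design, earnings, rcond=None)
adjusted = beta[1]

print(f"Unadjusted impact: {unadjusted:.2f}")
print(f"Regression-adjusted impact: {adjusted:.2f}")
```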


A benefit-cost analysis will also be conducted. It will examine benefits and costs from different perspectives (such as those of society and participants) and will provide information on how the benefits and costs are distributed among the different groups. The measured benefits will fall into three categories: 1) benefits of increased output resulting from the additional productivity of TAA participants; 2) benefits or costs from changes in the receipt of UI benefits; and 3) benefits from the reduced use of other programs and services (such as non-TAA-funded education and training services and public assistance benefits). Program costs will include TRA benefits paid to program participants; training, relocation, and job search allowances paid to program participants; training-related costs; and administrative program costs. Data for the benefit-cost analysis will come from interviews with the study sample; process analysis site visits; TAA cost reports; federal and state educational, training, and welfare agencies; and existing data from established databases and surveys.


A process study is also being conducted to understand the programmatic services, management practices, and institutional structures of TAA and of other programs and funding streams that serve TAA-certified workers and TAA participants. Site visits are being conducted in the same states as the impact study; the process study will therefore provide key information for interpreting impact study findings, because process study findings can be related to estimates of impacts for subgroups defined by key state and local area program characteristics and features. In addition, an Internet/mail survey of TAA coordinators has been conducted to provide additional information about program services. Findings from these sources will also be used to explore how to improve TAA operations and services.


Data sources are described below, along with an annotation as to whether the data have already been collected, are in the process of being collected, or will be collected in the future (including under the extension period for which approval is now being sought). The data sources include:


TAA Petition Data. These data contain information on all petitions filed by applicants (such as firms, workers, unions, or TAA program staff) that the U.S. Department of Labor (DOL) uses to make TAA certification determination decisions. These data were used to develop the frame for selecting states for the evaluation, because they contain information on the estimated number of workers affected by the certification (see Section B.2, below). They also provide descriptive information on certification rates and on the types of industries that are certified, and will be used to define subgroups by petition features in the impact analysis. These data have already been collected.


Certified Worker Lists. The universe from which the study selected the certified-worker sample was obtained from lists, provided by certified firms to state agencies, of workers laid off during the TAA certification period. Because states are required to notify workers in writing about their potential program eligibility, these lists contain identifying and contact information. The identifying information was used to match workers in the lists to the UI claims data to identify those who received UI benefits (described below), and the contact information was used to locate sample members for interviews. These data have already been collected.


UI and TRA Claims Data. These data will be used in the evaluation in several important ways. First, the data have been used to define the sample of TRA beneficiaries. Second, the data have been used to define the frame from which comparison groups were selected, and they provide the variables used for matching potential comparison group members to TAA members. These same matching variables will also be used to define key subgroups for which subgroup impacts will be estimated. Third, the data will provide information on key outcome measures for the impact analysis concerning the number of weeks and dollar amounts of UI benefits received during the follow-up period. Finally, the UI claims data contain contact information that will be needed to locate TAA and comparison group sample members for interviews. UI and TRA claims data used to draw the treatment and comparison group samples have already been obtained from each of the 26 participating states. Updated data files for sample members will be requested from these same states in early 2009 (covered by the existing clearance) and again in 2010 to include information on subsequent UI and TRA claim recipiency.
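
As a concrete example of how claims records feed these outcome measures, the sketch below aggregates weekly UI payment records into the number of compensated weeks and the dollar amount of benefits received during a follow-up window. The record layout shown is an assumption for illustration, not the states’ actual file format.

```python
from datetime import date

# Hypothetical weekly UI payment records for one worker: (week_ending, amount).
payments = [
    (date(2008, 1, 5), 350.0),
    (date(2008, 1, 12), 350.0),
    (date(2008, 1, 19), 0.0),   # claimed week with no payment
    (date(2008, 2, 2), 350.0),
]

def ui_outcomes(payments, start, end):
    """Weeks and dollars of UI benefits received within a follow-up window."""
    paid = [amt for week, amt in payments if start <= week <= end and amt > 0]
    return len(paid), sum(paid)

weeks, dollars = ui_outcomes(payments, date(2008, 1, 1), date(2008, 12, 31))
print(f"{weeks} compensated weeks, ${dollars:,.2f} in UI benefits")
```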


UI Wage Records. UI wage records will be used to measure earnings during the follow-up period. These data provide an alternative earnings source to those provided by the survey data, and will provide earnings data for the full sample rather than for the much smaller survey sample. The collection of UI wage records for sample members will commence in early 2009 (covered by the existing clearance), and updated data will be requested in 2010.


TAA and WIA Service Use and Training Data. The Trade Act Participant Report (TAPR) and Workforce Investment Act Standardized Record Data (WIASRD) files will be used in the descriptive analysis to describe the training experiences of all TAA participants and their use of TAA-funded services (such as job search and job relocation allowances) and WIA-funded services. The collection of TAA service usage and training data will commence in 2009 (covered by the existing clearance), and updated data on TAA and WIA service usage will be requested in 2010.


Baseline and Follow-up Survey Data. Because the administrative records do not provide sufficient detail for a full examination of a number of key evaluation questions, the study will also rely heavily on survey data. Survey data will provide detailed information, consistent across states, on reemployment and training services received from TAA and other sources. The survey data will also provide data on job characteristics (such as hourly wages, available fringe benefits, and occupations) that are not captured in the UI wage records. The follow-up data will also provide information on other key outcome measures, such as overall health status, health insurance coverage, and the receipt of public assistance. Finally, the survey data will provide baseline characteristics needed for re-matching comparison to TAA sample members, for defining key population subgroups, and for constructing control variables for the regression models. The baseline survey is in process (expected completion in February 2009); a follow-up survey will commence in June 2010.


Survey of TAA Officials. Data from this Internet/mail survey provide a national picture of the services and administration of TAA at the state and local levels. This survey has been completed.


Qualitative Data Collection Through Site Visits to State TAA and WIA Workforce Agencies and Local TAA Offices. Information will be gathered from interviews with state and local staff during five rounds of site visits. The first two rounds of site visits have already been completed, and the third round is underway. The final two rounds will be conducted in subsequent years.


Tables 1, 1A, 1B, 1C, and 1D display study questions and outcome measures in relation to the data elements and sources. Appendices A through C provide the data collection tools for which clearance is now being sought, including the follow-up survey (Appendix A), the request for state administrative data (Appendix B), and the Field Protocols for qualitative data collection through subsequent site visits (Appendix C).


2. Indicate how, by whom, and for what purpose the information is to be used.  Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.


The information to be collected will be used to understand and analyze the impacts of the program overall and for different target groups, and to understand how the program operates in terms of services, administrative practices, and organizational structure. The information will be used by policy makers in the Department of Labor, other parts of the Administration, and the Congress in the formulation of legislative and regulatory policy, as well as for determining appropriate technical assistance to improve the operation of the TAA program.


Information collected as part of this evaluation thus far, under the existing OMB clearance, has been used to prepare Briefing Papers, which in turn have informed ETA’s Training and Employment Guidance Letters (which provide guidance regarding program operations to state and local TAA administrators) and the development of ETA positions relating to legislative proposals to reauthorize the TAA program.


3. Describe whether, and to what extent, the collection of information involves use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection.  Also describe any consideration of using information technology to reduce burden.


Computer Assisted Telephone Interviewing (CATI) is being used to conduct interviews for the survey of TAA and comparison group members. CATI was selected because telephone interviews are more cost-effective and impose a lower burden on respondents than in-person interviews. CATI is more cost-effective than paper-and-pencil interviewing for many reasons, including the fact that CATI programs accept only valid responses and can be programmed to check for logical consistency across answers. Interviewers are thus able to correct errors during the interview, eliminating the need to call back respondents to obtain missing data. Also, calls are being made through an auto-dialer, linked to the CATI system, which virtually eliminates dialing error. The automated call scheduler will simplify scheduling and rescheduling of calls to respondents at their convenience and can assign cases to specific interviewers, for example, those who are fluent in Spanish.
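
To illustrate the kinds of validity and consistency checks described above, the sketch below shows a range check and a cross-item check of the sort a CATI program might apply at data entry. The items and rules are hypothetical examples, not the instrument’s actual edit specifications.

```python
def check_response(record):
    """Illustrative CATI-style edit checks; items and rules are hypothetical."""
    errors = []
    # Range check: reject out-of-range values at entry time.
    if not (0 <= record.get("hours_per_week", -1) <= 99):
        errors.append("hours_per_week must be between 0 and 99")
    # Consistency check: employment details require an affirmative screener.
    if record.get("employed") == "no" and record.get("hourly_wage") is not None:
        errors.append("hourly_wage reported but respondent is not employed")
    return errors

# Example: the interviewer is prompted to resolve the inconsistency on the
# spot, eliminating the need to call the respondent back.
print(check_response({"employed": "no", "hourly_wage": 12.50, "hours_per_week": 40}))
```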



TABLE 1
TAA Evaluation Research Questions and Data Sources

Research Question: How does the TAA program operate, and what are challenges to implementation and operation?
Study Component: Process Analysis
Data Sources: In-person interviews with state and local TAA staff during five rounds of site visits; mail survey of TAA Coordinators in all local areas; TAPR and WIASRD administrative records data

Research Question: What is the overall impact of TAA on participants’ employment-related outcomes?
Study Component: Overall Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Matching variables used to select comparison group: Baseline interviews; UI claims data; published local-area employment-related statistics
  Control variables used in regression models to estimate impacts: Baseline interviews and UI claims data
  Data items shown in Table 1A (matching and control variables) and Table 1B (outcome measures)

Research Question: Do program impacts differ for subgroups of participants defined by their demographic characteristics?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Baseline interviews and UI claims data
  Data items shown in Table 1C

Research Question: How do program impacts differ depending on TAA petition features?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Petition data
  Data items shown in Table 1C

Research Question: What are program impacts for participants who receive specific TAA services and benefits?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Baseline interviews and TAPR data
  Data items shown in Table 1C

Research Question: Do impacts vary for participants in states and local areas with different program features?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Baseline interviews and process analysis data
  Data items shown in Table 1C

Research Question: Is TAA cost-effective from the perspective of society as a whole?
Study Component: Benefit-Cost Analysis
Data Sources: Various sources (data items and data sources shown in Table 1D)

TABLE 1A
Data Sources to Obtain Matched Comparison Sample

Initial Matching Variables

Demographic Information
  Gender: UI Claims
  Age: UI Claims
  Race/ethnicity: UI Claims

Job Characteristics
  Base-period earnings: UI Claims; UI Wage Records
  North American Industry Classification System (NAICS) of main base-period employer: UI Claims

UI Claim and Benefit Data
  Benefit year begin date: UI Claims
  First claim week begin date: UI Claims
  Claim type: UI Claims
  Maximum benefit amount (MBA): UI Claims
  Weekly benefit amount (WBA): UI Claims

Profiling
  Claimant placed in WPRS selection pool: UI Claims
  Profiling score (if available): UI Claims
  Profiling referral to reemployment services: UI Claims

Local Labor Market Information in County of Residence
  Unemployment rate: U.S. Bureau of Census
  Poverty rate: U.S. Bureau of Census
  Percent manufacturing: U.S. Bureau of Labor Statistics
  Population growth: U.S. Bureau of Labor Statistics
  Metropolitan codes: U.S. Bureau of Census

Additional Variables for Re-matching After Conducting Baseline Interview (a)

Demographic Information
  Highest diploma or degree received: Baseline interview
  Native language and limited English proficiency: Baseline interview
  Household size: Baseline interview
  Number of children: Baseline interview
  Health status: Baseline interview
  Marital status and spouse employment: Baseline interview

Characteristics of Pre-UI Job
  Occupation: Baseline interview
  Tenure: Baseline interview
  Hours worked per week: Baseline interview
  Hourly wage: Baseline interview
  Available fringe benefits: Baseline interview
  Reasons left job: Baseline interview
  Union membership: Baseline interview
  Received severance pay: Baseline interview
  Looked for work after job ended: Baseline interview
  Expected and actual recall status: Baseline interview

Employment Experiences During the Previous Three Years (a)
  Number of jobs held in the previous three years: Baseline interview
  Total earnings in the prior year: Baseline interview; UI Wage Records

Other Income
  In the past year, whether received:
    Food Stamps: Baseline interview
    Cash assistance from TANF, Supplemental Security Income (SSI), Social Security Retirement, Disability, Survivors Benefits (SSA), or General Assistance (GA): Baseline interview
  Total household income in the previous calendar year: Baseline interview
  Owned home, rented, or lived in public housing: Baseline interview
  Covered by health insurance: Baseline interview

Control Variables Used in Regression Models to Estimate Program Impacts
  Same as the matching variables listed above: UI Claims; Baseline interview; Published local-area and employment-related statistics

(a) Data items pertain to the period before the worker got laid off from the job that led to the receipt of UI benefits.


TABLE 1B
Data Sources to Measure Overall Impacts

Reemployment Service Receipt
  Receipt of rapid response services prior to job layoff, types of services received, and who provided them: Follow-up interviews; WIASRD
  Whether reemployment services were received after job loss: Follow-up interviews
  Types of reemployment services received (such as job search assistance, job referrals, help with resume, information on how to change careers, career assessment, occupations in demand, information on education and training programs, whether received counseling about training options): Follow-up interviews; WIASRD
  Main place where reemployment services were received: Follow-up interviews
  Receipt of job search, relocation and transportation allowances: Follow-up interviews
  Whether received a letter stating that participation in services was mandatory to receive UI benefits: Follow-up interviews
  Whether services were helpful in finding a job or identifying training: Follow-up interviews

Education and Training Services
  Whether participated in any education and training programs: Follow-up interviews; TAPR; WIASRD
  Reasons for nonparticipation: Follow-up interviews
  Number of programs: Follow-up interviews
  Hours spent in education and training: Follow-up interviews
  Type of program (type of skills training or general education program): Follow-up interviews; TAPR; WIASRD
  Place where received education or training: Follow-up interviews
  Cost of program, funding sources, and out-of-pocket costs: Follow-up interviews
  Whether and when completed program: Follow-up interviews; TAPR
  Whether received a certificate or degree: Follow-up interviews
  Sources of income support while in program: Follow-up interviews
  Satisfaction with program: Follow-up interviews
  Highest diploma or degree received: Follow-up interviews

Overall Employment and Earnings
  Labor force status: Follow-up interviews
  Employed, overall and by period: Follow-up interviews; UI wage records; TAPR; WIASRD
  Weeks employed, overall and by period: Follow-up interviews
  Hours employed, overall and by period: Follow-up interviews
  Earnings, overall and by period: Follow-up interviews; UI wage records; TAPR; WIASRD
  Number of jobs: Follow-up interviews
  Ratio of weeks employed per year, post-displacement to pre-displacement, overall and by period: Follow-up interviews
  Ratio of earnings per year, post-displacement to pre-displacement, overall and by period: Follow-up interviews; UI wage records; TAPR; WIASRD

Job Characteristics
  Occupation, industry, and type of employer: Follow-up interviews
  How found job: Follow-up interviews
  Whether recalled from former employer: Follow-up interviews; TAPR
  Hours worked per week: Follow-up interviews
  Hourly wage: Follow-up interviews
  Available fringe benefits (health, paid vacation, paid holidays, paid sick leave, retirement): Follow-up interviews
  Union membership: Follow-up interviews
  Reasons left job: Follow-up interviews
  Looked for work after job ended: Follow-up interviews
  Ratio of hours worked per week, post-displacement to pre-displacement: Follow-up interviews
  Ratio of hourly wage, post-displacement to pre-displacement: Follow-up interviews
  Change in the availability of fringe benefits, post-displacement to pre-displacement: Follow-up interviews

Other Income
  Total amount received:
    UI benefits: UI Claims
    Pension benefits: Follow-up interviews
    Cash assistance from TANF, Supplemental Security Income (SSI), Social Security Retirement, Disability, Survivors Benefits (SSA), or General Assistance (GA): Follow-up interviews
    Food Stamps: Follow-up interviews
  Total household income: Follow-up interviews
  Ratio of total household income in the past year, post-displacement to pre-displacement: Follow-up interviews
  Owned home, rented, or lived in public housing: Follow-up interviews

Health and Health Insurance
  Health status: Follow-up interviews
  Type of health problems and how long had problem: Follow-up interviews
  Time covered by health insurance: Follow-up interviews
  Main type of health insurance: Follow-up interviews
  Out-of-pocket costs for health insurance: Follow-up interviews
  Change in health and health insurance status, post-displacement to pre-displacement: Follow-up interviews

Marriage, Children, and Mobility
  Marital status and spouse employment: Follow-up interviews
  Household size: Follow-up interviews
  Number of children: Follow-up interviews
  Number of states lived in: Follow-up interviews
  Change in marital status and spouse employment, post-displacement to pre-displacement: Follow-up interviews



TABLE 1C
Data Sources to Measure Subgroup Impacts Defined by Worker Characteristics, TAA Program Experiences, and TAA Program Features

Outcome Measures
  Outcome measures and their data sources: Same as Table 1B

Subgroups Based on Worker Characteristics at the Time of Job Layoff
  Age: UI Claims; Baseline interview
  Race and Ethnicity: UI Claims; Baseline interview
  Gender: UI Claims; Baseline interview
  English Proficiency: Baseline interview
  Education Level: Baseline interview
  Health Status and Health Insurance Coverage: Baseline interview
  Poverty Status: Baseline interview
  Marital Status and Spouse Employment: Baseline interview
  Whether Profiled for UI Services: UI Claims; Baseline interview
  Industry of Pre-layoff Job: UI Claims; Baseline interview
  Full-time Work Status: Baseline interview
  Pre-layoff Earnings Level: UI Claims; Baseline interview
  Available Fringe Benefits on Job: Baseline interview
  Likely Job Recall Status: Baseline interview
  Region: UI Claims; Baseline interview
  Rural/Urban Status: UI Claims; Baseline interview
  Local Unemployment Rate: Published local-area statistics

Subgroups Based on TAA Participants’ Program Experiences
  Extent of Notification About TAA Services: Interviews
  Types of TAA-Related Reemployment Services Received: Interviews; TAPR
  Participation in TAA Training, and Types of Training Received: Interviews; TAPR
  Training Program Completion Status: Interviews; TAPR
  Training Waiver Status: TAPR
  TRA Benefit Receipt: UI Claims
  Received a Job Search/Relocation/Travel Allowance: Interviews; TAPR
  Whether Co-Enrolled in WIA: WIASRD
  Received a Health Coverage Tax Credit: Interviews
  Received a Wage Subsidy as Part of the ATAA Program (for those 50 and older): Interviews

Subgroups Based on TAA Petition Features
  Type of petitioner (worker, firm, other): Petition
  Number of affected workers: Petition
  Certification determination processing time: Petition
  Industry for the article produced by firm: Petition

Subgroups Based on TAA Program Features
  State Performance Level: TAA National Office
  State TAA Funding Levels per Participant: TAA cost reports
  Number of TAA Participants in State: TAPR
  Proportion of Participants Who Receive Training: TAPR; Interviews
  Proportion Who Receive TRA Benefits: UI Claims
  Staff Experience Levels: Site Visits; Local area survey
  Extent of Linkages of the TAA Program with One-Stop Centers: Site Visits; Local area survey
  Extent of State Versus Local Control in Setting Policies and Procedures: Site Visits; Local area survey
  Timeliness of Rapid Response Services: Site Visits; Local area survey
  Quality of the MIS System: Site Visits; Local area survey


TABLE 1D
Data Sources for the Benefit-Cost Analysis

Benefits
  Output: Baseline and follow-up interviews; Published sources on fringe benefits and effective tax rates
  Reduced Use of Other Programs and Services
    Other training-related programs: Baseline and follow-up interviews; Published sources on costs of education and training programs
    Public assistance (other than UI): Baseline and follow-up interviews; Published sources on administrative costs of transfer programs
  Value of Free Trade: Review of literature

Costs
  Receipt of UI Benefits: Baseline and follow-up interviews; Published sources on administrative costs of transfer programs
  Program Costs
    TRA payments: UI Claims
    Allowances (such as job search, relocation, transportation, and subsistence): TAA Cost Reports
    Training costs: TAA Cost Reports
    Administrative costs: TAA Cost Reports

4.  Describe efforts to identify duplication.  Show specifically why any similar information already available cannot be used or modified for use for the purposes described in item 2 above.


The evaluation will utilize administrative records data from a wide range of sources, as well as survey data and process-study data. In addition to the administrative data, baseline-survey data, and process-study data already collected, the evaluation will collect: 1) UI and TRA claims data, to provide information on UI and TRA benefit receipt during the follow-up period; 2) UI wage records, to measure earnings during the pre-separation and follow-up periods for all sample members; 3) TAPR and WIASRD records, to describe service receipt and the training experiences of sample members; 4) follow-up survey data, to describe subsequent employment and other outcomes and service receipt; and 5) additional process-study data through site visits, to characterize program operations.


There is no way of answering the study’s research questions without this additional data collection. The last impact analysis of the TAA program was conducted using a sample of TAA participants in the late 1980s (Corson et al. 1993), and, as OMB’s PART review notes, updated impact estimates are much needed. While the current study has collected process-study data and information from the baseline survey and administrative records, these data cannot be used to answer questions of program impacts unless an extension of the clearance is granted, which will allow the collection of follow-up information on employment and other outcomes. Similarly, additional rounds of process-study data collection are needed to closely link process study findings to the estimation of impacts, and to observe how program operations change in response to TAA reauthorization, which is expected imminently.


Moreover, administrative records data are not by themselves sufficient for conducting the study, and therefore the study will also rely on survey data collected from a random subset of sample members. The survey data will provide more detail on TAA program experiences, training and reemployment experiences from other sources, key outcome measures, and baseline characteristics needed for matching and for defining key population subgroups. The baseline survey, which is just concluding, is providing information on the extent to which workers are notified about their TAA eligibility, reasons eligible workers accept or do not accept the TAA offer, and participants’ satisfaction with the program, and has collected information on workers’ demographic characteristics and pre-layoff employment-related experiences. The follow-up survey, to be covered by this new clearance, will capture services and benefits received by sample members outside the agencies for which administrative data are available, and will provide detailed information on the characteristics of jobs found by sample members (such as hourly wages, available fringe benefits, and occupations) and on earnings that are not captured in the UI wage records. The survey data will also provide other key outcome measures, such as overall health status, health insurance coverage, and the receipt of public assistance.


5.  If the collection of information impacts small businesses or other small entities (Item 5 of OMB Form 83-I), describe any methods used to minimize burden.


No small businesses or other small entities will be interviewed for this survey.


6.  Describe the consequences to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles in reducing burden.


If the information collection is not conducted, Federal program or policy activities will not be informed by the high-quality information needed to make critical decisions about what changes are necessary to enhance the effectiveness of the program.


The evaluation taken as a whole is a one-time event. However, the data collection is occurring in phases. For example, the survey consists of two rounds of data collection, at baseline and 25 months later. The two rounds are designed to measure baseline information and both short- and long-term program impacts. The baseline survey, which concluded in February 2009, covers the period prior to job layoff, as well as the period between job layoff and the interview date. In the baseline interview, data were collected on: 1) workers’ demographic characteristics and pre-layoff employment-related experiences (which will be used to re-match comparison to TAA group members, to define key worker subgroups, and to construct detailed control variables for the regression models); 2) worker experiences with the TAA program and the receipt of specific types of reemployment and training services; and 3) key employment-related outcome measures covering the post-layoff period. The follow-up interviews, to be covered by the new clearance, will collect information on key outcome measures pertaining to the period since the previous interview date.


Administrative data are also being collected in waves. The first extracts of UI and TRA claimant data, already requested of states and in hand, have been used to define the treatment and comparison group samples. A second round of administrative data collection will be used to update the claimant histories of sample members and to reclassify as program participants those sample members who might not have been receiving TRA at the time the sample was selected but who are receiving TRA at this interim point. UI wage data will also be collected at this time, so that a history of pre-layoff employment and earnings can be established; collecting UI wage data at this interim point is imperative, because some states archive UI data periodically, so waiting longer to collect employment history information may mean that the data simply become unavailable. A final round of administrative data, consisting of UI claimant and wage data and program service data, will be collected just in advance of the preparation of the Final Report, so that the evaluation has as complete a history of UI benefit receipt, services, and employment and earnings as is possible given the project’s timeline.


Qualitative data from the site visits are also being collected in waves, and each wave has its own distinctive purpose. The first wave focused on learning about the early implementation of program changes associated with the 2002 TAA Reform Act, so that ETA’s Trade Office could learn about and address implementation challenges. The second and third waves involved visits to state-level and local-level TAA offices in states contributing data to the impact study, to learn about implementation issues that would have a bearing on program effectiveness. These waves have already been conducted or are in process. The fourth and fifth waves will be conducted to learn about ongoing adjustments in the TAA program and to glean information about promising practices in program implementation.


7. Explain any special circumstances that would cause an information collection to be conducted in a manner:


  • requiring respondents to report information to the agency more often than quarterly;


  • requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;


  • requiring respondents to submit more than an original and two copies of any document;


  • requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records, for more than three years;


  •  in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;


  • requiring the use of statistical data classification that has not been reviewed and approved by OMB;


  • that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or


  • requiring respondents to submit proprietary trade secrets, or other confidential information, unless the agency can demonstrate that it has instituted procedures to protect the information’s confidentiality to the extent permitted by law.


None of the special circumstances are applicable to this data collection. In all respects, the data will be collected in a manner consistent with federal guidelines. There are no plans to require respondents to report information more than quarterly, to prepare a written response to a collection of information within 30 days of receiving it, to submit more than one original and two copies of any document, to retain records for more than three years, or to submit proprietary trade secrets. The statistical survey will produce valid and reliable results that can be generalized to the universe for the study, and it will include only statistical data classifications that OMB has reviewed and approved. It will include a pledge of confidentiality that is supported by authority established in statute or regulation and by disclosure and data security policies that are consistent with the pledge. It will not unnecessarily impede sharing of data with other agencies for compatible confidential use.


8. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency’s notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and describe actions taken by the agency in response to these comments.  Specifically address comments received on cost and hour burden.


Describe efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.


Consultation with representatives of those from whom information is to be obtained or those who must compile records should occur at least once every 3 years even if the collection of information activity is the same as in prior periods.  There may be circumstances that may preclude consultation in a specific situation.  These circumstances should be explained.

a. Federal Register Notice and Comments


The public was given an opportunity to review and comment on this request for an extension of the data collection on March 30, 2009 (Federal Register, Vol. 74, pp. 14159-14160). No comments relevant to the information collection request were received.


b. Consultations Outside the Agency

Consultations on the research design, sample design, data sources and needs, and study reports have occurred during the study’s design phase and will continue to take place throughout the study. The purpose of such consultations is to ensure the technical soundness of the study and the relevance of its findings, and to verify the importance, relevance, and accessibility of the information sought in the study. The contractor, Social Policy Research Associates (SPR), and its subcontractor, Mathematica Policy Research (MPR), have provided substantial input to DOL for the evaluation. Table 2 displays the senior technical staff from these organizations who were consulted in developing the design, the data collection plan, and the questionnaire.


TABLE 2
Contractor Technical Staff

Name, Affiliation, Telephone Number
Dr. Ronald D’Amico, Social Policy Research Associates, (510) 763-1499
Dr. Peter Schochet, Mathematica Policy Research, (609) 279-6887
Patricia Nemeth, Mathematica Policy Research, (609) 275-2294
Dr. Frank Potter, Mathematica Policy Research, (609) 936-2799
Jeffrey Salzman, Social Policy Research Associates, (510) 763-1499
Richard West, Social Policy Research Associates, (510) 763-1499
Dr. Paul Decker, Mathematica Policy Research, (609) 275-2290



9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.


The original data collection plan approved by OMB authorized incentive payments to survey sample members. The strategy of providing compensation for participation in the study draws on an extensive literature documenting its importance in achieving high levels of cooperation with surveys. Research has shown, for example, that even modest compensation can increase the response rates to surveys and lower the cost of data collection without compromising the quality of the data (Singer 2002; Singer et al. 1999a and 1999b). Further, generous incentives can help obtain a high cooperation rate and avoid the cost of using field interviewers to go to the sample members’ homes to attempt interviews. Offering generous incentive amounts can minimize the overall costs of a survey by reducing the length of the field period and the number of contact attempts needed to achieve the targeted response rate (Markesich and Kovac 2003).


Because of ambiguity in the literature on the relative effectiveness of pre- versus post-payment incentive strategies for telephone surveys (Singer et al. 1999), OMB approved, as part of the original clearance package, an experiment to investigate the effects of pre- versus post-payment incentives during the baseline interviewing, using a large national sample of UI claimants.

For the experiment, treatment and comparison group members were randomly assigned to one of three groups: 1) a group that received a $25 post-payment sent by check upon completion of the survey (60 percent of the sample); 2) a group that received a $2 cash pre-payment and a $25 post-payment upon completion of the survey (20 percent of the sample); and 3) a group that received a $5 cash pre-payment and a $20 post-payment upon completion of the survey (the remaining 20 percent of the sample). Advance letters were sent out, and baseline interviewing commenced in March 2008.


Results as of August 2008 showed that the prepaid incentives had a small effect on interview completion rates. The overall response rate was about 44 percent for the two prepayment groups, compared to 39 percent for the post-payment-only group. The overall difference in the response rates by incentive type is statistically significant. However, the response rates were low regardless of incentive structure. (These results can be found in Appendix E.)
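
For reference, a comparison of this kind can be tested with a standard two-proportion z-test, sketched below. Only the response rates are reported above, so the group sizes in the example are hypothetical placeholders; the study’s actual significance test may differ.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical group sizes; 44% (prepayment) vs. 39% (post-payment only).
z, p_value = two_proportion_z(0.44, 2000, 0.39, 3000)
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```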


Because administrative data from states were received in waves, at different times, sample selection and subsequent survey interviewing also commenced in waves. The response rates reported above therefore include results for sample members who had been in the field for varying lengths of time. However, for sample members in the seven states where the survey had been fielded the longest, the overall response rate as of August 2008 was still low, at about 46 percent, with averages of 60 percent for those who received TAA benefits and services, 48 percent for TAA eligibles who did not receive services (TAA nonparticipants), and 40 percent for the comparison group of UI claimants. For the state that had been in the field the longest (since March 13, 2008), the response rate was only 47 percent overall, with a rate of 66 percent among TAA participants, 44 percent for TAA nonparticipants, and about 40 percent among comparison group members.


Because of these low response rates, a request was submitted to OMB in September 2008 to permit changes in survey procedures and to conduct a second experiment comparing different (and higher) incentive payments (all described in Appendix E). Approved changes to survey procedures included:


  • Sending all correspondence to sample members (e.g., the advance, refusal conversion, and locating letters) on U.S. Department of Labor (DOL) letterhead, over a DOL official’s signature, with a DOL contact number, rather than on the letterhead of the survey subcontractor, MPR, with an MPR manager’s signature, as had been done until that point. It was hoped that this more “official” correspondence would receive greater attention from respondents and lend greater legitimacy to the request than a letter from MPR;


  • Using priority mail, with its visually prominent red, white and blue exterior envelope, for sending refusal conversion letters to respondents, based on the successful use of priority mail by MPR in other studies. In addition, all existing non-respondents were to be sent a follow-up postcard with the amount of the incentive prominently displayed, so as to alert potential respondents and their family members;


  • Reviewing current procedures using social security numbers to locate addresses and phone numbers, and assuring that the most productive methods were being systematically applied across all cases, thus taking full advantage of the availability of social security numbers in the TAA study;


  • Reviewing the CATI production records to determine the most productive interview completion times and, as needed, increasing the number of interviewers for these time periods;


  • Selecting a core of MPR’s most skilled refusal converters and increasing their work hours on this project;


  • Conducting a CATI interviewer debriefing in order to identify what approaches are most successful for making contact with TAA households and assuring that all of the voice-mail messages left by interviewers clearly identify the incentive amounts; and


  • Conducting additional refusal conversion training as needed.


The experiment involving increases in incentive payments, approved by OMB and implemented simultaneously with the procedural changes noted above, used the following incentive amounts:


  • TAA Participants – New sample cases who were TAA participants, as well as TAA participants who had been contacted previously but had not responded (i.e., “existing” cases), were split into two equal groups. Half were offered a $25 incentive payment for completing the interview, while the other half were offered a $50 payment.


  • TAA Nonparticipants and Comparison Groups – New sample members and nonrespondents in the remaining groups (that is, TAA nonparticipants and all comparison group members) were split into three groups constituting 20, 40, and 40 percent of this overall group. A $25 incentive payment was offered to the 20 percent group, $50 was offered to one 40 percent group, and $75 was offered to the other 40 percent group.


After six weeks, results of this new experiment were assessed and reported to OMB. These findings are detailed in Appendix E. In summary, response rates for all existing cases increased under the new regime, and they also increased for new cases released for interviewing, in comparison to response rates at an equivalent time after release for cases interviewed under the old regime. For example, during the eight-week follow-up period, the overall response rate for existing cases increased from 41 percent to 55 percent. Response rates increased for all sample groups, but the increases were larger for TAA nonparticipants and comparison group members (about 16 percentage points) than for TAA participants (about 8 percentage points).


Procedural changes and the larger incentive payments both played a role in increasing response rates. For example, response rates increased with the introduction of the new procedural changes even for the group still offered a $25 incentive, especially among TAA nonparticipants and comparison group members. However, larger incentive payments also played a role. For example, among existing cases who were TAA nonparticipants or in the comparison group, the response rate was significantly higher for the $50 and $75 incentive groups than for the $25 incentive group. Importantly, however, there was no significant difference between the $50 and $75 incentive for these cases.


Based on these findings, OMB, in an NOA dated December 5, 2008 (ICR Reference Number 2008-11-1205-002), authorized incentive amounts for the remainder of baseline survey data collection as follows:


  • TAA Participants Already in the Field. The incentive amount for all TAA participants who had been previously contacted would be increased to $50.


  • TAA Participants Who Were New Cases. These individuals would be offered only the $25 incentive.


  • All Other Cases (including existing and new cases of TAA nonparticipants and comparison groups). These would be offered a $50 incentive.


Additionally, all procedural changes would be continued.


Following this guidance, we propose to use the following incentive structure for the follow-up survey:


  • TAA Participants Offered $25 at Baseline. These would be offered an additional $25 incentive upon completion of the follow-up survey.


  • All Other Cases. All other cases would be offered a $50 incentive upon follow-up survey completion.


10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.


SPR and MPR will follow procedures consistent with provisions of the Privacy Act for assuring and maintaining confidentiality. Confidentiality agreements have already been established with states in the collection of their administrative records. Additionally, respondents to the baseline interviews have received information about confidentiality protection in an advance letter describing the survey and again at the outset of the interview as part of the interviewer’s introductory comments. For the follow-up survey, similar procedures will be followed. Specifically, respondents will be informed that all information they provide will be treated confidentially. Interviewers will be trained in confidentiality procedures and will be prepared to describe these procedures in full detail, if needed, or to answer any related questions raised by respondents.


All data items that identify respondents will be kept by SPR and MPR for use in assembling records data and in conducting the interview. Any data received by the U.S. Department of Labor, Employment and Training Administration will not contain personal identifiers, which will thus preclude individual identification.


In addition, the following safeguards are routinely used by research team members to assure confidentiality in the collection of survey data:

  • Access to sample selection data with personal identifying information is limited to those who have direct responsibility for providing the sample. These data are destroyed at the conclusion of the research.

  • Identifying information is maintained in a separate file from interview data. The files are linked only with a sample identification number.

  • Access to link-files containing sample identification numbers connecting the research data and the respondents’ identification is limited to a few persons who have a need to know this information.

  • All files containing confidential information are encrypted.

  • Access to any hard-copy documents is strictly limited. Physical precautions include use of locked files and cabinets, shredders for discarded materials, and interview control procedures.

The research team also will use standard methods to guard against inadvertent disclosure.1 These include methods to be used with tabular results of frequency data and tabular results of magnitude data, as well as methods to be used in preparing public-use files. With respect to tabular results, our intent is to report only results with adequate statistical precision. In general, this is a more limiting condition than is strictly necessary for ensuring adequate safeguards against inadvertent disclosure. Thus, the guidelines below should be viewed as minimal conditions; in practice, much more stringent conditions will be applied in most cases. The guidelines are as follows:

Tabular Results of Frequency Data. For tabular results of frequency data, a risk of inadvertent disclosure will be avoided by adherence to these two conditions:

  • No cell shall be reported if the number of respondents is fewer than 10, and

  • No single cell shall solely account for a row or column total.

Should these conditions be violated in initial tabulations, rows or columns will be combined, as necessary, until the conditions are satisfied.

Tabular Results of Magnitude Data. For tabular results of magnitude data, we will require each cell value to be based on 10 or more respondents and will apply the (n,k) rule, using a value of 2 for n and of .6 for k. Thus, no cell value shall be reported if any two respondents contribute at least 60% to the cell’s total value. Should these conditions be violated in initial tabulations, rows or columns will be combined, as necessary, until the conditions are met.
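For illustration only, the suppression test implied by these two conditions can be sketched as follows (the function and its thresholds-as-parameters are hypothetical, not the research team's actual disclosure-review code):

```python
def suppress_cell(values, n=2, k=0.6, min_respondents=10):
    """Return True if a magnitude-data cell must be suppressed.

    Conditions sketched from the text: fewer than 10 contributors,
    or the (n, k) dominance rule with n = 2 and k = 0.6 (the two
    largest contributions account for at least 60% of the total).
    """
    if len(values) < min_respondents:
        return True
    total = sum(values)
    if total <= 0:
        return False  # dominance rule is not meaningful for non-positive totals
    top_n = sum(sorted(values, reverse=True)[:n])
    return top_n / total >= k


# Example: 12 respondents, but two contribute 72% of the cell total,
# so the cell fails the (2, 0.6) rule and must be combined with others.
cell = [100, 80] + [7] * 10
print(suppress_cell(cell))  # True
```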

Reporting Microdata. One of this project’s deliverables is a public-use file of microdata. Following customary guidelines, the following safeguards will be implemented to guard against inadvertent disclosure:

  • No personal identifiers will be appended to any record;

  • Units of geography will not be identified;2

  • The employer from which the individual was dislocated will not be revealed, nor will the TAA petition number or the industry of dislocation;

  • Key information drawn from administrative data that could be used to identify an individual (including enrollment date, date of training, and date of exit) will be rounded (e.g., dates will be reported in mmyyyy format rather than mmddyyyy format) and randomly perturbed; and

  • Variables will be bottom-coded or top-coded if extreme values are present.


11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private.  This justification should include the reasons why the agency considers these questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.


The survey for the TAA evaluation contains a minimal set of items that may be considered sensitive in nature. These questions cover the sample member's income from jobs in the pre- and post-layoff periods, the income of spouses or partners, income from pensions, public assistance receipt, and total household income. Questions about income and public assistance receipt are necessary to construct the primary outcome measures for the study. TAA provides training and other reemployment services to help participants prepare for and obtain suitable employment. Thus, the primary purpose of the program is to improve the long-term earnings and income of program participants and to reduce their reliance on public assistance. Consequently, it is necessary that the study obtain data to measure the economic well-being of study participants.


As described in item 10 above, all respondents will be assured of confidentiality at the outset of the interview. All survey responses will be held in strict confidence. In collecting all information, SPR and MPR will comply with the requirements of the Privacy Act of 1974. All questions in the current survey, including those deemed potentially sensitive, have been pre-tested and used extensively in prior surveys with no evidence of harm.


12. Provide estimates of the hour burden of the collection of information. The statement should: Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated.  Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates.  Consultation with a sample (fewer than 10) of potential respondents is desirable.  If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance.  Generally, estimates should not include burden hours for customary and usual business practices.


The total hour burden for the information collected for the TAA study was estimated to be 11,867 hours when the original clearance package was submitted. The revised estimate is 9,236 hours, including hours expended for data collection that has already occurred as well as data still to be collected. Table 4 shows the revised burden estimates, broken down into burden occurring under the period covered by the original clearance (November 2006 to November 2009) and burden that we estimate for data collection during the extension period for which we are requesting approval. The reduction in overall burden from 11,867 hours to 9,236 hours occurs because the original clearance assumed the survey would be conducted at three points in time (at baseline, 15 months later, and 15 months after that). The evaluation's new plan is to conduct the baseline survey (which is just concluding) and a single follow-up survey 25 months after baseline. Replacing the 15-month and 30-month surveys with a single 25-month follow-up survey, approved by OMB in an NOA on December 17, 2008, was necessary because the delays that the project has experienced to date require a telescoping of remaining project activities, given that project funds expire in September 2011. The burden estimate also changes because we had originally planned to collect administrative data from 25 states but now have 26 states in the analysis sample.


The hour burden was calculated based on estimates that it will take: 1) each state 24 hours of staff time to process our data requests; 2) each respondent 35 minutes to complete the baseline interview (based on actual pretests); 3) each respondent 30 minutes to complete the follow-up interview (based on actual pretests); 4) 1,955 hours to administer the process visit protocols to state- and local-area staff; and 5) 20 minutes for the TAA Coordinator in each local area to complete the survey, as well as 10 minutes for each state telephone screener (based on actual pretests).


TABLE 4

RESPONDENT HOURS BURDEN FOR THE TAA EVALUATION

Activity | Total Respondents | Frequency | Average Minutes per Response | Burden Hours

Burden Under the Original Period (November 2006 to November 2009)

Impact Analysis
  State Administrative Data | 26 | Twice | 480 | 416
  Baseline Survey | 7,965 | One time | 35 | 4,646

Process Analysis
  Administration of Process Visit Protocols
    1: Initial Implementation | 144 | One time | 90 | 216
    2: Impact Sample State Visits | 150 | Twice | 100 | 500
    3: Impact Sample Local Visits | 280 | One time | 85 | 397
  Survey of All Local Areas
    State phone screener | 50 | One time | 10 | 8
    Local area survey | 700 | One time | 20 | 233

Burden Under the Proposed Extension (November 2009 to Project Completion)

Impact Analysis
  State Administrative Data | 26 | Once | 480 | 208
  25-Month Follow-up Survey | 3,540 | One time | 30 | 1,770

Process Analysis
  Administration of Process Visit Protocols
    4: TAA Reauthorization | 180 | One time | 100 | 300
    5: Promising Practices | 325 | One time | 100 | 542

Total Estimated Burden | | | | 9,236




The total burden cost of collecting the follow-up survey, covered by this clearance request, is $28,320 (the 30 minutes to complete the follow-up survey multiplied by 3,540 completers and by an estimated average hourly wage of $16).3 This burden cost would be offset by the respondent incentive payment for each interview completed.
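For reference, the arithmetic behind this figure is:

$$3{,}540 \ \text{completers} \times \frac{30 \ \text{minutes}}{60 \ \text{minutes/hour}} \times \$16/\text{hour} = \$28{,}320$$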


13. Provide an estimate for the total annual cost burden to respondents or recordkeepers resulting from the collection of information.  (Do not include the cost of any hour burden shown in Items 12 and 14).


The cost estimate should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life) and (b) a total operation and maintenance and purchase of services component.  The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information.  Include descriptions of methods used to estimate major cost factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred.  Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.


Respondents will incur no startup or ongoing financial costs. There are no record keepers.


If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance.  The cost of purchasing or contracting out information collections services should be a part of this cost burden estimate.  In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.


The proposed information collection plan will not require the respondents to purchase equipment or services or to establish new data retrieval mechanisms. These costs are not expected to vary.


Generally, estimates should not include purchases of equipment or services, or portions thereof, made: (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.


We do not expect responding agencies to purchase equipment or services in order to respond to this information collection plan effort. 


14. Provide estimates of annualized costs to the Federal government.  Also, provide a description of the method used to estimate cost, which should include quantification of hours, operational expenses (such as equipment, overhead, printing, and support staff), and any other expense that would not have been incurred without this collection of information.  Agencies may also aggregate cost estimates from Items 12, 13, and 14 in a single table.


The total cost to the federal government of the contractor carrying out this study is $10,453,957, to be expended over the 93 months of the evaluation. Of the total amount, $4,053,307 had already been expended as of December 27, 2008, and approximately an additional $1,780,000 will be spent by the end of the period covered by OMB's existing NOA. The remaining $4,620,650 will be spent during the period covered by this new clearance. Of the total, approximately $3.0 million will have been used for developing the research design, consulting with project advisors, carrying out the initial implementation study, conducting analysis, preparing reports, developing a public-use file, and carrying out project management. Data collection for the evaluation will cost approximately $7.4 million, including amounts covered under the existing NOA and the extension period for which approval is being requested. Total data collection costs are as follows:

A) Total Cost of Collecting Administrative Data: $2,189,397. This figure includes the costs of collecting lists of certified workers from states, from which the analysis sample of TAA-eligibles will be drawn, and of collecting Unemployment Insurance wage records, claimant data, and program participant data. This budget estimate includes: 1) loaded labor costs, including the costs of requesting the data files from states and preparing them for analysis, and 2) payments to states to reimburse them for the cost of preparing data files.

B) Total Survey Administration Costs: $3,715,814. This figure includes the costs of selecting the treatment and comparison group samples and administering the baseline and the follow-up surveys. Costs for conducting these surveys include the loaded labor cost for senior research staff, programmers, survey supervisors, telephone interviewers, and data clerks and locators, and the M&S costs, including telephone costs, facilities costs, the costs for respondent payments, and indirect expenses.

C) Process-Study Data Collection: $1,514,872. This figure includes the contractor's loaded labor costs and travel costs associated with the site visits, and costs associated with the TAA Administrator survey.

In addition, estimated costs to the government for all aspects of this evaluation total $660,000, including the following:

  • Staff-level management of the evaluation: $400,000;

  • Development and clearance of Memoranda of Agreement with states and the associated clearance process: $125,000;

  • Oversight by other USDOL agencies, including SOL, BLS, OASAM, and ASP: $15,000; and

  • Review and publication of various papers resulting from the evaluation: $120,000.


15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-I.


This is a request for a one-time extension of an existing data collection. As noted, the project is over three years behind schedule, due to delays in obtaining approval for the first ICR and in obtaining the state administrative data necessary to select the survey samples. Annualized hour and respondent burden have been reduced due to adjustments to the survey protocol, specifically, reducing the number of follow-up surveys from two to one, with a slightly expanded sample to assure a sufficient number of responses, given lower-than-anticipated response rates for different subgroups.


16.     For collections of information whose results will be published, outline plans for tabulation and publication.  Address any complex analytical techniques that will be used.  Provide the time schedule for the entire project, including beginning and end dates of the collection of information, completion of report, publication dates, and other actions.

A. Tabulations. A wealth of information will be collected and tabulated in this study around two broad areas of inquiry: 1) what are the program's net impacts on employment and earnings, both overall and for specific subgroups, and 2) how does the TAA program operate (i.e., who receives which services, at what quality, and under what administrative arrangements). The specific tabulations will reflect the multiple types of analyses discussed below.


B. Analytic Approaches. The two research questions cited above are inextricably connected, in that the proper interpretation of outcomes can derive only from a solid understanding of the TAA program’s administration and services. At the same time, each research question has its own logic, and each gives rise to its own analysis methods. Accordingly, the evaluation will entail impact analyses, a benefit-cost analysis, and a process study, as described below.

Impact Analyses: The impact analysis for the TAA evaluation will address the effects of TAA services and benefits on key participant outcomes from several perspectives. The global analysis will examine the overall impacts of the TAA program for the full sample, while the targeted analysis will address the important policy questions of what works and for whom.


Global Analysis. The impact analysis will first estimate the extent to which the TAA program changes the average outcomes of program participants relative to what these outcomes would have been in the absence of the program. Theoretically, because the procedure used to select the comparison groups will have yielded well-matched comparison groups, this impact can be estimated as a simple difference in outcomes between groups. However, regression procedures will be used to estimate these impacts, for two reasons. First, these procedures produce more precise impact estimates, to the extent that the covariates included in the models are correlated with the outcome measures. Second, regression procedures can adjust for any differences in the observable characteristics of TAA and comparison group members due to interview nonresponse and to residual differences after matching.


The study will estimate variants of the following regression model:

$$y = \alpha + \beta \, TAA + \gamma' X + \varepsilon \qquad (1)$$

where $y$ is an outcome variable at a specific time point, $TAA$ is an indicator variable equal to 1 for TAA group members and 0 for comparison group members, the $X$s are baseline explanatory variables used in the matching process, $\varepsilon$ is a mean zero disturbance term, and $\alpha$, $\beta$, and $\gamma$ are parameters to be estimated. The estimate of $\beta$ represents the regression-adjusted impact estimate of TAA on the outcome variable, and the associated t-statistic can be used to gauge the statistical significance of the impact estimate.4 The estimates of $\beta$ across the many outcome measures that will be examined for the study will form the basis for assessing the effects of TAA program services.
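For illustration only, equation (1) could be estimated with weighted least squares as sketched below; the supporting statement does not specify estimation software, and the file and variable names here are hypothetical:

```python
import pandas as pd
import statsmodels.api as sm

# One row per interviewed sample member: the outcome (here, a
# hypothetical follow-up earnings measure), the 0/1 TAA indicator,
# matching covariates, and an analysis weight combining probability,
# nonresponse, and poststratification adjustments.
df = pd.read_csv("analysis_file.csv")  # hypothetical input file

covariates = ["age", "female", "pre_layoff_earnings"]  # illustrative Xs
X = sm.add_constant(df[["taa"] + covariates])

# Weighted least squares; the coefficient on `taa` is the
# regression-adjusted impact estimate, and its t-statistic gauges
# statistical significance.
fit = sm.WLS(df["earnings_q8"], X, weights=df["weight"]).fit()
print(fit.params["taa"], fit.tvalues["taa"])
```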


Appendix F describes the mathematical formulas that will be used to obtain the parameter estimates and their associated variances under a design-based inference approach. The Appendix displays formulas for continuous outcome measures (such as earnings and UI benefits received over a given follow-up period), as well as binary outcome measures (such as whether the worker is employed, has been recalled to his or her separating job, and has health insurance). Appendix F also describes specific methods that will be used to construct weights for the analysis, including probability weights, and adjustments for nonresponse and poststratification.5


Finally, under the certified-worker design, the study will obtain samples of both TAA participants and TAA nonparticipants in TAA-certified firms.6 Because different patterns of impacts for these two groups are expected, the study will estimate separate impacts for each one, although the study will also estimate impacts for the pooled sample (using the appropriate weights) to examine TAA effects for the full population of those covered by a certification. In addition, separate models will be estimated using the certified-worker and TRA-beneficiary samples.


Targeted Analysis. The targeted analysis will use a more refined approach than the global analysis to examine the effects of TAA on key outcomes, addressing the important policy questions of what works and for whom. Specifically, it will address the following research questions (see Table 1C):


  • Do impacts differ for workers who receive different services and benefits? What are the impacts for those who receive long-term training? For those waived from training? For those who use the HCTC? For those over 50 who receive Alternative TAA services? For those who receive TRA benefits? For those who receive assessment, counseling, or placement assistance?


  • Do impacts differ for workers with different baseline characteristics? Do impacts differ by age, race/ethnicity, education level, pre-layoff earnings level, industry, region, and the local unemployment rate?


  • Do impacts differ for workers with different petition features? How do impacts vary by the number of affected workers, certification determination processing time, industry, and type of petitions?


  • Do impacts differ among states with different administrative or organizational features or structures? How do impacts vary according to states’ performance levels? According to the ability of the TAA program to deliver adjustment services in a timely manner? According to the extent of integration of services and programs within the One Stop Career Center system?


In the targeted analysis, the study will first examine thoroughly, using interview and program data, the services and benefits that sample members received. Then researchers will gauge the extent to which TAA workers participate in various program components (such as job training and the HCTC). If participation levels in some program components are very low overall or for key worker subgroups to which these services are targeted, then program impacts for these program components are expected to be small. Similarly, understanding the nature and amount of services that the comparison group receives will help us assess whether impacts for specific program components or for specific groups of workers are likely to be large or small. Moreover, process analysis findings will clarify the nature of services and the structure of program operations and how these may affect outcomes and impacts.


Impact results for those who receive different program services and benefits can provide important information on how to improve services and to develop and expand the program. The estimation of these subgroup impacts, however, is complicated by two factors. First, there are likely to be differences in the characteristics of those who receive different services (which could lead to sample selection biases). Consequently, comparing outcomes of TAA group members who receive specific services to the outcomes of those who receive other services (or to the outcomes of the full comparison group) may yield biased estimates. Second, because there may be considerable overlap in the receipt of particular program services, it may be difficult to disentangle the effects of some program components from the effects of others.


The study will use a two-step estimation process to address these complexities. First, during the contextual analyses, the researchers will construct various service-receipt indicator variables to signify the key program services and benefits that TAA group members receive. For example, it is likely that indicators will be constructed for TRA beneficiaries who are waived from the training requirement, those who use the HCTC, those who participate in Alternative TAA, and those who receive both TRA benefits and job training. If appropriate, other indicators will be constructed for combinations of these training services or other services such as assessment, counseling or placement assistance. Importantly, indicator variable values for comparison group members will be the same as the values for their matched TAA group members.


In the second stage, researchers will estimate impacts for those receiving a specific array of TAA services, by comparing the average outcomes of TAA group members within a service-receipt category to the average outcomes of their matched comparison group members. These subgroup impact estimates will be obtained by including in equation (1) explanatory variables formed by the interaction of service-receipt and TAA indicator variables.7 Researchers will include these interaction terms one at a time, but they will also conduct analyses where these interaction terms are included simultaneously to help disentangle the effects of some program components from others. It is expected that these analyses will yield informative results, because the baseline characteristics of TAA group members in specific service receipt cells are expected to be similar to those of their comparison group members.
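For illustration only, the second-stage interaction specification might look as follows (again with hypothetical file and variable names; `trained` stands in for one of the service-receipt indicators described above):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("analysis_file.csv")  # hypothetical input file

# `trained` is a 0/1 service-receipt indicator; per the text, each
# comparison group member carries the value of the matched TAA
# group member.
df["taa_x_trained"] = df["taa"] * df["trained"]

covariates = ["age", "female", "pre_layoff_earnings"]  # illustrative Xs
X = sm.add_constant(df[["taa", "trained", "taa_x_trained"] + covariates])
fit = sm.WLS(df["earnings_q8"], X, weights=df["weight"]).fit()

# Estimated impact for sample members in the service-receipt cell:
# the TAA coefficient plus the interaction coefficient.
print(fit.params["taa"] + fit.params["taa_x_trained"])
```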


Next, researchers will determine the extent to which TAA benefits workers with different personal characteristics, a question with important policy implications both for the operation of the program and for the development of other programs designed to serve this population. The study will use UI and baseline interview data to construct these worker subgroups. We expect that the subgroups (pertaining to the pre-intervention period) will include age, race and ethnicity, gender, industry (such as steelworkers), education level, marital status, pre-layoff earnings level, likely job recall status, region, and the local unemployment rate (see Table 1C). We will obtain subgroup impact estimates using procedures very similar to those described above for the service-receipt subgroups.


Additionally, the study will examine whether TAA petition features affect TAA impacts. Using petition data, researchers will construct worker subgroups (based on the number of affected workers, certification determination processing time, type of petitioner, and industry), and compute impact estimates in a way similar to the estimation of service-receipt subgroup estimates. Impacts are expected to differ across these groups. For example, workers who exert the effort to petition when their firms fail to do so might value TAA benefits more highly than workers in other firms and, thus, these workers might have higher program participation rates and larger impacts.


Finally, the study will estimate impacts for subgroups defined by key state program features, using information from the process analysis on key features that vary across states and that are likely to contribute to overall program effectiveness (see Table 1C). Researchers will estimate these subgroup impacts by grouping states with a particular program feature, and by comparing the mean outcomes of TAA and comparison group members within those states. The study will also use hierarchical linear (HLM) models to help disentangle specific program features from others. In these HLM models, the 26 state impact estimates (or larger number of local-area impact estimates) will be regressed on a small number of key program features, so that the effects of a particular program feature can be assessed holding constant the effects of other features.
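For illustration only, a simplified stand-in for the HLM second stage is a precision-weighted regression of the state impact estimates on program-feature measures (file and variable names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# One row per state: the first-stage impact estimate, its standard
# error, and a small set of program-feature measures.
states = pd.read_csv("state_impacts.csv")  # hypothetical input file

Z = sm.add_constant(states[["timeliness_score", "one_stop_integration"]])
weights = 1.0 / states["impact_se"] ** 2  # precision weights

# Each feature's coefficient is its association with state impacts,
# holding the other included features constant.
fit = sm.WLS(states["impact_est"], Z, weights=weights).fit()
print(fit.summary())
```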


The targeted analyses will generate impact estimates for a large number of outcome measures and for many subgroups. In each analysis, formal statistical tests will be conducted to determine whether TAA-comparison group differences exist for each outcome measure and subgroup. However, an important challenge for the evaluation is to interpret the large number of impact estimates to assess the extent to which TAA makes a difference. Thus, researchers will carefully examine the pattern of results rather than focus on isolated results. For example, the evaluation will examine the magnitude of the significant impact estimates to determine whether the differences are large enough to be policy relevant, and check that the sign and magnitude of the estimated impacts are similar for related outcome variables and subgroups. In addition, researchers will determine whether the sign and magnitude of the impact estimates are robust with respect to alternative sample definitions, model specifications, and estimation techniques.


Benefit-Cost Analysis. A benefit-cost analysis will compare the monetary value of impacts to their costs in order to examine the extent to which the TAA program is cost-effective. The basic approach for measuring the benefits and costs of TAA will be to value key program impacts at market prices, which are readily available in most cases, straightforward to use, and provide a good measure of the value that society places on impacts.


The potential benefits and costs will fall into five categories:


  1. The benefits of increased output resulting from the additional productivity of TAA participants. TAA services are expected to increase the job skills of program participants, which may lead to long-term earnings gains. The additional output produced by program participants will be measured using the increase in their total compensation, which will include earnings and fringe benefits. The calculations will use the earnings impacts estimated using the UI wage records and survey data, and the costs of fringe benefits (such as paid leave, supplemental pay, health insurance, pensions, and savings plans) from published data sources. We will also estimate tax payments (federal income taxes and credits, payroll taxes, federal excise taxes, and state and local taxes) based on reported income and household composition.


  2. The benefits or costs from changes in the receipt of UI benefits. TAA might reduce the receipt of UI benefits if program reemployment services are effective in helping participants find jobs quickly. However, TAA might also increase UI exhaustion rates if recipients continue their training after becoming eligible for TRA services. The analysis will use estimated impacts on UI benefit receipt from the UI claims data, and information on UI administrative costs obtained from DOL.


  3. The benefits from the reduced use of other programs and services. TAA participants are expected to use fewer non-TAA-funded services than comparison group members. Such services include education and training programs and reemployment services not funded by TAA. The costs of these programs will be obtained as part of the process analysis. In addition, because of potential long-term earnings gains, the TAA group is expected to receive fewer public assistance benefits (such as Food Stamps, TANF, and general assistance) than the comparison group.


  4. Unmeasured benefits. TAA may provide other benefits that are difficult to measure, such as improvements in participants' quality of life that may result from improvements in their employment opportunities, self-esteem, and health. TAA may also provide gains to society from freer trade.


  5. Program costs. Program costs will include: (1) TRA benefits paid to program participants (obtained using UI/TRA data); (2) allowances paid to program participants (such as job search, relocation, transportation, and subsistence allowances); (3) training-related costs; and (4) administrative costs. Researchers will calculate these costs using quarterly cost data that states provide to DOL as well as data that we will obtain as part of the process analysis.


The findings from the benefit-cost analysis will depend on the perspective from which benefits and costs are measured. Most of the benefits of TAA accrue to program participants, while the government pays most of the costs. Hence, the benefits and costs to participants will differ from the benefits and costs to the government and the rest of society. Consequently, benefits and costs will be examined from three different perspectives – those of: (1) society, in order to determine whether the aggregate benefits from the program are greater than the resources used by the program, abstracting from who enjoys the benefits and who bears its cost; (2) participants, in order to address whether TAA is a good investment for the workers themselves; and (3) the rest of society, to examine the extent to which TAA costs are offset by TAA’s benefits to everyone other than program participants (such as increased tax revenue and the reduced use of other programs and services).


Because TAA is designed to improve employment-related outcomes over the long run, the research will examine the appropriateness of extrapolating program benefits beyond the observation period. The extrapolation process, however, will depend on the pattern of the impact findings. For example, if earnings impacts grow near the end of the observation period, then program benefits will be estimated under various assumptions about the decay of future earnings impacts. Furthermore, a current dollar is worth more than a future dollar. Thus, a discount rate will be applied to all benefits (and costs) that accrue after the first year of the study observation period. Finally, the approach to the analysis will be to value program impacts on measurable, market-valued resources in the economy. This excludes many intangible, hard-to-measure benefits, such as improvements in health and in the quality of life. In addition, the analysis does not take into account the gains to society from freer trade that may result from TAA's beneficial effects on those who are adversely affected by trade.
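For illustration only, the discounting step can be sketched as follows (the 3 percent rate and the example benefit stream are hypothetical; the text does not specify a discount rate):

```python
def present_value(amounts_by_year, rate=0.03):
    """Discount a stream of benefits (or costs) to first-year dollars.

    Per the text, amounts accruing after the first year of the
    observation period are discounted; the rate here is illustrative.
    """
    return sum(amount / (1.0 + rate) ** year
               for year, amount in enumerate(amounts_by_year))


# Example: a $2,000 annual earnings gain sustained for five years.
print(round(present_value([2000] * 5), 2))  # 9434.2 at a 3% rate
```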


Process Study. The research questions associated with the process study concern how the TAA program is administered at the state and local levels, what institutional arrangements are used to deliver services (including relationships among TAA and other programs within the One-Stop system), how services are designed and delivered, who accesses services, and what system-level outcomes result. These questions can be addressed through two primary data sources: qualitative information gathered from the case studies and quantitative information available from the surveys and administrative data.


The data collected from the state and local site visits will be analyzed in a two-stage process. The first stage—a within-site analysis—will consist of the preparation of a detailed case study narrative for each state and local implementation site included in the study. During this stage, the wealth of information obtained from discussions, observations, and reviews of written materials will be organized into a coherent story of TAA program operations for the particular site. Case study narratives will be for in-house use by the SPR/MPR researchers, though site profiles can be developed to be shared with the Department of Labor (DOL) or the sites (at DOL’s request). The internal site-visit write-ups will include the “raw data” that will inform the cross-site analysis, which will, in turn, support the preparation of study briefings, Occasional Papers, and the Final Report. These will emphasize cross-site analysis that highlights common themes, reasons for variation in the way services have been designed, challenges to implementation, and promising approaches. At the cross-site level, descriptive analyses will examine the range of variation at the state and local levels across the case study sites, explanatory analyses will trace the importance of different contextual and implementation factors for service delivery patterns and outcomes, and evaluative analyses will identify the lessons learned from the experiences of state and local implementation and draw implications for policy.


Survey and administrative data will be used to detail aspects of TAA program services and operations. For example, survey and administrative data on TAA participants will yield important insights, overall and for different subgroups of respondents, into the nature of services received and relationships with other programs within the One-Stop system, including reemployment services, training services, job search allowances, TRA allowances, participation in Alternative TAA, and use of the health insurance tax credit (such as respondents' knowledge of the program and how they were informed about it). Similarly, by merging administrative data for the TAA and WIA programs, we can learn about the extent of co-enrollment and gain a full picture of the services that participants receive across both programs. In addition, we intend to obtain TAA program data for multiple points in time, both before and after enactment of the TAA Reform Act, so that we can examine trends in service receipt and deduce what impact the Reform Act might have had on service receipt. Survey data can also be used to provide information about the TAA program's take-up rates, another important issue to be examined as part of this study. The study will also be able to address who among eligible workers chose not to participate and why, as well as how eligible workers were notified about the availability of program services and how soon they were notified after the petition was filed.

C. Publication Plans

Some publications associated with this study have already been prepared, including:


  • Report on Initial Implementation. This report presented cross-site findings from the Initial Implementation Study. This cross-site analysis detailed the range of variation in practices across states and local areas with respect to the 2002 TAA Trade Act.

  • Occasional Papers. In lieu of an Interim Report, occasional papers to address specific sets of issues related to the process and impact analyses have been produced. Four papers have already been prepared on the following topics: assessment and case management, linkages with One-Stop partners, Rapid Response services, and participation and exit determinations.

Additional reports to be prepared include:


  • Occasional Papers. Additional papers will be prepared on topics that may include characteristics of TAA participants and their jobs (overall, trends over time, and in comparison to other dislocated workers); training and reemployment service receipt by TAA and comparison group members; TAA take-up rates; state data collection systems (nature and adequacy); the impact of performance accountability on program design; the role of the health insurance tax credit; and results from the promising-practices study. Additional topics will be developed in consultation with DOL on the basis of emergent study findings.

  • Final Report. The Final Report will present a comprehensive accounting of all findings and results amassed over the duration of the evaluation. It will cover results from the local-area and individual surveys, the multiple rounds of site visits, information on clients and services from administrative data, impact estimates on all key outcome measures, and results from the benefit-cost analysis. A draft report will be submitted in the Summer of 2011, and a final version will be submitted in September 2011.

D. Project Schedule


The evaluation began in January 2004 and has a projected end date of September 30, 2011. The timing of key activities is shown in Table 5. Items occurring before November 2009 are covered by the existing NOA; those to occur after this date will be covered by the extension for which approval is being sought.


17.  If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.


ETA will display the OMB control number and expiration date for any individual surveys under this clearance.


18. Explain each exception to the certification statement identified in Item 19, Certification for Paperwork Reduction Act Submissions, of OMB Form 83-I.

There are no exceptions taken to item 19 of OMB Form 83-I.

TABLE 5

SCHEDULE FOR THE TAA EVALUATION

Activity | Time Period

Study Design | Completed
Collect Process Data
  First site visit | Completed
  Second site visit | Completed
  Third site visit | Underway
  Fourth site visit | Sept 2009 – March 2010
  Fifth site visit | July 2010 – Dec 2010
  Conduct administrator survey | Completed
Collect Administrative Data
  TAA Certified Lists | Completed
  UI/TRA claimant data
    First extract | Completed
    Second extract | Jan 2009 – July 2009
    Third extract | Aug 2010 – Dec 2010
  UI wage data
    First extract | Jan 2009 – July 2009
    Second extract | Aug 2010 – Dec 2010
  Participant data
    First extract | Jan 2009 – July 2009
    Second extract | Aug 2010 – Dec 2010
Select Samples | Completed
Collect Survey Data
  Baseline | Underway
  25-month follow-up | June 2010 – Dec 2010
Analysis and Reporting
  Initial implementation study | Completed
  Twelve occasional papers | Periodic (underway)
  Final report | Jan 2011 – Sept 2011


B. COLLECTION OF INFORMATION INVOLVING STATISTICAL METHODS

1.  Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used.  Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample.  Indicate expected response rates for the collection as a whole.  If the collection had been conducted previously, include the actual response rate achieved during the last collection.


This section describes the potential respondent universes and sampling for the survey to be administered during the extension period; i.e., the 25-month follow-up survey of treatment and comparison group members. To provide an appropriate context, however, we also describe the study design for estimating impacts and the identification of, and sampling from, the universe from which the follow-up sample is drawn. Except as noted in italics, this information is substantively the same as that found in the original supporting statement for the evaluation.


Design of the Evaluation


The ideal design for the TAA impact evaluation would be random assignment, where workers eligible for TAA services would be randomly assigned either to a treatment group (who could receive TAA services) or to a control group (who could not). Persons in the treatment group could then be further randomly assigned to various TAA service groups in order to examine the relative effectiveness of particular program services and components. Random assignment ensures that the average characteristics of each research group would be similar, so unbiased estimates of the impacts of TAA participation overall and of specific program services could be obtained by comparing the mean outcomes of members of the treatment and control groups.


A random assignment design is clearly not feasible for the TAA evaluation, because TAA services cannot be denied to eligible workers (that is, under program rules, it would not be possible to construct a control group). Furthermore, it would not be feasible to randomly assign participants to different service groups, because TAA services are voluntary and are tailored to meet the needs of individual clients. Consequently, the evaluation will employ a comparison group design using state-of-the-art propensity scoring procedures to create comparison groups and obtain estimated impacts.


The sample design for the TAA impact evaluation must meet several critical analysis objectives. First, it must produce a sample that is representative of the national population of workers who are eligible for and receive TAA services and benefits, i.e., TAA program participants. Second, the sample design must also produce a sample that is representative of the national population of workers certified for TAA who are nonparticipants, in order to estimate program take-up rates and reasons for program participation and nonparticipation. Third, the sample design must generate comparison samples of dislocated workers who are as similar as possible to workers in the TAA participant and nonparticipant samples, except for the offer of TAA services. These comparison samples will be used to estimate likely outcomes of treatment group members in the absence of the TAA program. Finally, the sample design must provide sufficient statistical precision for estimating impacts that are relevant to a host of policy issues important to the proposed audiences for the research.


To meet these analysis objectives, the treatment (TAA) groups for the study have been selected from two sample universes, each of which has several advantages and disadvantages. The first (and primary) sample universe, labeled the certified-worker universe, consists of all workers nationwide who were laid off from TAA-certified firms during the period covered by certification, and who subsequently received a first UI payment. The second sample universe, labeled the TRA-beneficiary universe, consists of all workers who received TRA payments after they exhausted their regular UI benefits. Each sample design will be used to generate program impacts for TAA, and results from the two samples can be compared to examine the robustness and credibility of study findings under the quasi-experimental design. Hence, the use of the two TAA samples will improve the ability of the evaluation to yield informative conclusions about program impacts.


Universe of Certified Workers. The study obtained the sample frame for the certified-worker sample from lists of all potentially TAA-eligible workers that certified firms provided to states. These lists are available (and include the workers' contact information) because, under the 1988 legislative changes to the TAA program, state agencies are required 1) to identify potentially eligible workers by obtaining lists of workers who were separated or partially separated from trade-affected firms during the period covered by certification and 2) to notify each potentially eligible worker in writing.


Importantly, in the Initial Implementation Study, all states in the sample indicated that they request lists of workers from certified employers. Furthermore, employers generally comply, although states sometimes have difficulty obtaining lists from smaller firms, and from companies that move their operations to another state or go out of business. Most states maintain these lists in machine-readable form. Thus, these lists are reasonably comprehensive and available, and contain identifying information on most workers who are potentially eligible for TAA services.


A random sample of workers from this certified-worker universe was selected as follows:


  • The contractor requested, from the 25 randomly selected states and one replacement state, the worker lists supplied by firms that became certified for TAA between November 1, 2005 and October 31, 2006. From these lists, the contractor received data on about 160,000 workers from the 26 states. This schedule ensured that the sample was eligible for TAA services after the implementation of the 2002 reforms (which took effect in August 2003), and that the sample was not affected by seasonal layoff patterns and is representative of most workers laid off during the period covered by the certifications.8


  • The certified-worker sample was next restricted to those who received UI benefits. The study includes only UI recipients in the sample, because few UI nonrecipients are eligible to receive TAA benefits. Furthermore, because the comparison group sample was selected from UI recipients, UI claims records data were needed for matching purposes.


  • The contractor selected 24,000 certified workers meeting these criteria, using stratified random sampling methods. The number of sample members selected from each state was predetermined to obtain a self-weighting sample (see section B.2). Within each state, the contractor randomly selected workers within strata to ensure that key subgroups of workers are proportionately represented in the study samples. There was no plan to over-sample certain groups of workers, because doing so would have yielded a sample that was no longer self-weighting and would have reduced the precision of estimates for the full sample. Key stratifying variables included age, gender, race/ethnicity, and local area. The stratified samples were selected within each state by 1) assigning each sample member to a stratum; 2) calculating the number of workers to select from each stratum on the basis of the stratum's share of the sample universe in the state; and 3) randomly selecting the allocated number of sample members from each stratum (see the sketch following this list).


  • Baseline and follow-up interviews will be conducted by telephone with a random subset of the sample. Telephone interviews are currently being conducted at baseline, and follow-up interviews will be conducted 25 months later. A response rate of 60 to 65 percent among those in the sampling frame is expected in each round of interviews. The sample allocation for the surveys is discussed in more detail later in this section.
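For illustration only, the within-state stratified selection described above can be sketched as follows (column names and the per-state target are hypothetical):

```python
import pandas as pd

def select_stratified(frame, state_target, seed=0):
    """Proportional stratified sample of certified workers in one state.

    `frame` has one row per UI-recipient certified worker and a
    `stratum` column built from the stratifying variables named above
    (age, gender, race/ethnicity, and local area).
    """
    # Step 2: allocate the state target proportionally to stratum shares.
    shares = frame["stratum"].value_counts(normalize=True)
    allocation = (shares * state_target).round().astype(int)
    # Step 3: draw the allocated number at random from each stratum.
    parts = []
    for stratum, n in allocation.items():
        cell = frame[frame["stratum"] == stratum]
        parts.append(cell.sample(n=min(n, len(cell)), random_state=seed))
    return pd.concat(parts)
```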


The certified-worker sample can be used to address all key research questions pertaining to the impacts of the TAA program. The distribution of services and benefits received by the sample will be representative of those provided nationally to TAA certified workers. Thus, the sample can be used to examine the overall effectiveness of the services provided to TAA certified workers as well as the effectiveness for specific arrays of services and benefits (including those delivered by other programs within the One-Stop Career Center system). Furthermore, the sample can be used to address many important questions for the process study, such as the timing and types of services and benefits received by TAA program participants. Moreover, because the sample contains those who did not receive TAA services, it can be used to estimate program take-up rates, reasons for nonparticipation, and the extent to which nonparticipants receive other non-TAA services.


Universe of TRA Beneficiaries. The impact study is also selecting a nationally representative sample from the universe of TRA beneficiaries. The primary advantage of this sample universe over the certified-worker universe is that the UI records data contain information on all TRA beneficiaries nationwide, whereas the lists of certified workers that firms provide to states may not be fully representative of all TAA-eligible workers (although, as discussed, they are likely to be largely representative). The main disadvantage of the TRA-beneficiary sample is that it excludes those who did not receive TRA benefits but received other TAA services. Hence, the sample cannot be used to estimate impacts for these other service groups. Another important disadvantage of the TRA-beneficiary sample is that it cannot be used to examine issues pertaining to program take-up rates. We believe that the use of both the certified-worker sample and the TRA-beneficiary sample can improve the ability of the evaluation to yield informative conclusions about program impacts, because we will be able to compare the consistency of results using the two samples.


The TRA-beneficiary sample is currently being selected from the universe of TRA recipients as follows:


  • Information is being used from the 25 randomly-selected states on about 20,000 customers who received a TRA first payment between January 1, 2006 and December 31, 2006. Because TRA payments typically start about six months after workers start receiving UI benefits, there is significant overlap in the TRA-beneficiary and certified-worker participant sample frames (TAA participants are distinguished from nonparticipants in the certified-worker sample, because the former received a TRA payment).


  • The sample group will include 12,000 TRA beneficiaries selected using stratified random sampling methods. The key stratifying variables will be the same ones (discussed above) that were used to select the certified-worker sample.


  • Administrative records data, but not interview data, will be collected for these sample members. To conserve costs, telephone interviews will be conducted with the certified-worker sample only.


Selection of Comparison Groups. To effectively gauge the net impact of the TAA program on the employment-related outcomes of program participants, the study must determine what the outcomes of these participants would have been in the absence of the program. In order to do this, the evaluation is employing a quasi-experimental comparison group design—based on propensity scoring—to obtain estimated impacts. Consequently, the evaluation requires that data be collected from a comparison group of workers otherwise similar to those in the TAA samples.


One obvious approach, which was rejected, is to define the treatment group to consist of eligible workers in TAA-certified firms who became TAA participants, and the comparison group to consist of eligible workers in TAA-certified firms who did not. We believe this approach is seriously flawed for two reasons. First, program activity generated by TAA could affect all workers in certified firms, regardless of whether they become TAA participants. This possibility is especially acute given the 2002 TAA Trade Act's emphasis on providing rapid response assistance to workers as soon as possible after a petition is filed. Such services, to the extent they are successful, would obviate the need for TAA enrollment. Second, substantial selectivity bias may result from choosing a comparison group consisting of eligible workers who chose not to seek TAA services. For both of these reasons, the comparison of outcomes between TAA participants and nonparticipants from among workers in certified firms would likely yield a seriously biased estimate of program impacts.


As the superior alternative, the study is obtaining comparison groups from manufacturing workers in each state’s regular UI program who were not eligible for TAA services and who lived in the same areas as the TAA sample. We believe for several reasons that this was the best source for obtaining the comparison group. First, the TAA population is a subset of the UI population, so that suitable matches for the TAA sample could be found. Second, matching was performed using UI records data that were obtained, at reasonable cost, for both the TAA and potential comparison group members and that contain fairly detailed demographic and employment-related information. Thus, developing a sample frame from which to select the comparison group was relatively straightforward. The main features of the comparison group design are as follows:


  • The study selected the comparison group for the certified-worker sample from the universe of those who received a UI first payment over the same period as the certified-worker sample. The variables used in the matching process were constructed from UI records data and were displayed in Table 1A.


  • Because of the importance to the evaluation of obtaining the best possible matches, the study will employ a two-stage matching process for selecting the comparison group for the certified-worker sample. In the first stage, UI data was used to obtain matched-comparison samples that were twice as large as the TAA samples. Baseline interviews are thus being conducted with more comparison than TAA group members. In the second stage, we will re-match comparison to TAA group members using richer matching variables from the baseline interview data. The resulting TAA and comparison group samples will be of similar size, and we will conduct follow-up interviews with these sample members only. This design will increase the comparability of the TAA and comparison groups, which will increase the credibility of the impact findings.


  • The study is selecting the comparison group for the TRA-beneficiary sample from the universe of UI exhaustees. This is because workers certified for TAA must first exhaust their regular UI entitlements (including Emergency Benefits) before they can receive a first TRA payment.


  • The study is using propensity score matching (Rosenbaum and Rubin 1983) to obtain the matched-comparison samples. Several recent, influential studies using propensity scoring were able to replicate experimentally based impact estimates (for example, Dehejia and Wahba 1999; Glazerman et al. 2002).


Within each state, the propensity scoring procedure was implemented in four steps (a brief code sketch follows these steps):


  1. Estimate a probability model of TAA-eligibility status. A logit model was estimated, in which a binary dependent variable equal to one for TAA sample members and zero for potential comparison group members was regressed on the matching variables from the UI claims records. The contractor estimated separate models for the certified-worker and TRA-beneficiary samples.

  2. Assign a propensity score to each individual. The propensity score is the predicted probability from the logit model. It is a single number that is a function (weighted sum) of the individual’s values for the matching variables.

  3. Select comparison group members using propensity scores. For each TAA sample member, the contractor selected the potential comparison group member with the closest propensity score, or the "nearest neighbor" (that is, the smallest absolute difference in scores). Selection was done with replacement, so a potential comparison group member could be matched to several TAA sample members.

  4. Assess the adequacy of the matching process. The contractor compared the distribution of the matching variables and propensity scores of TAA and comparison group members within various propensity scoring classes (defined by the size of the propensity scores). If the matching process was determined to be unsatisfactory on the basis of these statistical tests, the contractor re-estimated the logit models by including interaction and quadratic terms as additional matching variables in the models (Dehejia and Wahba 1999; Rubin 2001). This process was continued until a satisfactory model specification was found.
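
The following is a minimal sketch of steps 1 through 3, written in Python with hypothetical column names; the contractor's production code is not part of this document, and any details here beyond the four steps above are illustrative assumptions.

    import pandas as pd
    import statsmodels.api as sm

    def match_within_state(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
        """df: one row per worker, with a 0/1 'taa' flag (1 = TAA sample member)
        and the UI-records matching variables named in `covariates`."""
        # Step 1: logit model of TAA status on the matching variables.
        X = sm.add_constant(df[covariates])
        logit = sm.Logit(df["taa"], X).fit(disp=0)

        # Step 2: the propensity score is the predicted probability, a single
        # number summarizing each worker's matching variables.
        df = df.assign(pscore=logit.predict(X))

        # Step 3: nearest-neighbor matching with replacement, so one potential
        # comparison worker can serve as the match for several TAA workers.
        treated = df[df["taa"] == 1]
        pool = df[df["taa"] == 0]
        matched_rows = [pool.loc[(pool["pscore"] - p).abs().idxmin()]
                        for p in treated["pscore"]]
        return pd.DataFrame(matched_rows)

Step 4 (the balance assessment) would then compare covariate and propensity score distributions between the TAA sample and the returned matches within scoring classes, re-estimating the logit with interaction and quadratic terms if balance is unsatisfactory.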

The propensity scoring procedure yielded TAA and comparison groups with very similar observable characteristics. However, there may remain unobservable differences between the groups that are correlated with the key outcome measures, and these differences could lead to biased impact estimates. Although it is difficult to test for these unobservable differences, the contractor will employ several specification tests found in the literature to examine the validity of study findings using baseline interview data. One such test, used by Heckman and Hotz (1989), is to conduct the matching process using baseline characteristics measured several periods before the intervention begins. Earnings “impacts” in the ensuing (but still pre-intervention) period should equal zero if the matching process is successful. Another test that the contractor will use is to examine post-intervention impacts for those in the certified-worker sample who receive very few services; mean outcomes should be similar for these workers and their matched-comparison group members (that is, program impacts should be zero for this group).
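
As a minimal illustration of the Heckman-Hotz style pre-program test, with hypothetical inputs: after matching on characteristics measured several periods before displacement, the difference in pre-intervention earnings between the matched groups should be statistically indistinguishable from zero.

    # Sketch of the pre-program "placebo impact" test; inputs are arrays of
    # pre-intervention quarterly earnings for matched TAA and comparison
    # workers (hypothetical names).
    from scipy import stats

    def placebo_test(taa_pre_earnings, comparison_pre_earnings):
        t, p = stats.ttest_ind(taa_pre_earnings, comparison_pre_earnings)
        # A large p-value is consistent with successful matching; a
        # significant "impact" before the program begins signals bias.
        return t, p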


Sample and Survey Allocation. The contractor considered several factors to design the appropriate sample allocation for the TAA evaluation. First, because the certified-worker sample contains both TAA participants and TAA nonparticipants, the study needed to specify how the sample should be divided across these two groups and what share of the interviews would be devoted to each. Second, the study needed to determine the sample allocation across the two TAA samples. Third, the study needed to determine the sample allocation across the TAA and comparison group samples. Finally, the study had to determine the number of interviews to conduct at baseline and 25 months.


In order to best meet the myriad study objectives within project resources, the sample allocation for the evaluation is as follows (see Table 6):


  • 12,000 TAA participants and 12,000 TAA nonparticipants were selected from the certified-worker lists. Because program take-up rates are expected to be about 30 percent for program-eligible workers, most of those in the certified-worker lists will be nonparticipants. To select our samples, we identified program participants and nonparticipants using UI records data information on TRA benefit receipt and selected 12,000 workers from each stratum. We obtained 24,000 matched-comparison group members for each TAA group, yielding a total sample of 72,000 workers. We will obtain administrative records data for these TAA and comparison group members, and survey data for a random subset of them.


  • A stratified random sample of 12,000 TRA beneficiaries is being selected. The contractor will select 24,000 matched UI exhaustees as comparison group members. They will collect administrative records data for these sample members, but not interview data.


  • Baseline interviews with about 8,000 sample members in the certified-worker sample will be completed. The evaluation focuses on both TAA participants and nonparticipants in the certified-worker sample, although a greater share of survey resources is being spent on the participant group, because we expect program impacts to be larger for that group. Thus, we are conducting twice as many interviews with TAA participants as with nonparticipants and, likewise, twice as many interviews with comparison group members as with TAA group members. We expect to achieve a 60-65 percent response rate to the baseline interview and, hence, have released a stratified random sample of about 13,300 workers for baseline interviews. (In the Supporting Statement submitted for initial clearance, we had planned on an 80 percent response rate; accordingly, 10,000 workers were to be released for the baseline survey to yield 8,000 completes. Experience to date suggests that a 60-65 percent response rate is more realistic, even with the increased incentive payments now being offered and other changes to operational procedures; see the answer to question 9 in Section A. The number of workers released for interviewing was therefore increased to about 13,300, since 8,000 divided by 0.60 is approximately 13,300, to ensure the same number of baseline completes.)


  • About 3,540 follow-up interviews at 25 months will be completed with those in the certified-worker sample (see Table 6). The 25-month interviews will be conducted only with TAA participants and their matched comparison group members; nonparticipants will not be re-interviewed. The contractor will update the TAA participant status designations using baseline interview and TRA benefits data (and, if available, TAA program data). We expect to achieve a 60-65 percent response rate for the 25-month interview.


TABLE 6

SAMPLE ALLOCATION FOR THE TAA IMPACT EVALUATION

                                        Certified-Worker Sample                                     TRA-Beneficiary Sample
                          --------------------------------------------------------------  ---------------------------------
                                        Comparison                      Comparison                         Comparison
                          TAA           Group for       TAA             Group for          TRA             Group for TRA
Data Source               Participants  Participantsa   Nonparticipants Nonparticipantsa   Beneficiaries   Beneficiariesa

Records Data              12,000        24,000          12,000          24,000             12,000          24,000

Number Released for
Interviews (13,256)       2,875         5,760           1,506           3,115              0               0

Number of Completed
Interviews (11,505)
  Baseline (7,965)        1,770         3,540           885             1,770              0               0
  25-month (3,540)        1,770         1,770           0               0                  0               0

aFollow-up interviews will be conducted with only those comparison group members who are re-matched to TAA group members using baseline interview data.


2.  Describe the procedures for the collection of information including:

 

Statistical methodology for stratification and sample selection,

Estimation procedure,

Degree of accuracy needed for the purpose described in the justification,

Unusual problems requiring specialized sampling procedures, and

Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


a. Statistical Methodology


The impact evaluation will be conducted using samples from 26 states. These 26 states include the 25 "original" states that were randomly selected in geographic strata with probabilities proportional to the expected number of TAA-eligible workers in each state, plus one replacement state recruited when two original states in Region 2 initially refused to participate in the study (both subsequently agreed to participate). The process analysis site visits will also be conducted in these same 26 states.


Selection of States. The study samples were selected from a random subset of states rather than from all states nationwide, for two reasons: 1) the TAA caseload is relatively concentrated and 2) sample selection and data acquisition costs increase significantly with the number of states selected. Although a clustered sample of states will result in a slight loss in the precision of the impact estimates (but no bias), the savings in resources and reduced administrative complexity provided by clustering more than offset this loss.


To select the 25 original states and the replacement states, we obtained petition data from DOL on all TAA and NAFTA industry certifications from fiscal year (FY) 1999 through FY 2006. These petition data provide a sampling universe from which to select the states, because each petition contains information on the estimated number of trade-affected workers (that is, those who were likely to lose their jobs in the period covered by the certification). The petition data contain information on nearly 12,000 certified firms, covering about 1.5 million dislocated workers. Because annual state shares did not vary substantially from year to year, the contractor used the most recent data, from FY 2005 and FY 2006, to generate the sampling frame.


Table 7 displays: 1) state shares of the number of trade-affected workers (calculated as the simple average of the state shares for FY 2005 and FY 2006)9; 2) the estimated number of certified workers between November 1, 2005 and October 31, 2006 (the period covered by the study) in each state, using the projection that 120,000 workers nationwide would be certified during this period; and 3) state selection probabilities. The state selection probabilities (weights) were scaled to sum to 25, the number of original states included in the study. States are ordered by their shares of the TAA population, from largest to smallest.


Using the figures in Table 7, we randomly selected 25 states with probabilities proportional to state shares of the eligible TAA population. Fifteen states (NC, CA, PA, MI, SC, GA, TN, OH, IL, IN, TX, NY, AL, KY, VA) were chosen with certainty.10 These 15 certainty states contain about 73 percent of the eligible TAA population.


The remaining 10 noncertainty states were randomly sampled from the universe of noncertainty states (including the District of Columbia and Puerto Rico), with the probabilities shown in column five of Table 7. We selected the noncertainty states by stratifying them by the six DOL regions and using a systematic sampling approach; this ensured that the sample of states would be dispersed geographically. Geographic stratification is a useful way of ensuring that the sample of states represents the full range of TAA programs and participants, because states within a geographic area tend to have similar industries, workers, and labor markets. The selected noncertainty states (WI, MO, NH, NJ, RI, FL, AR, CO, MN, WA) contain about 14 percent of the eligible TAA population. Consequently, our sample of certainty and noncertainty states contains about 87 percent of the eligible TAA population.
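
A minimal sketch of the systematic PPS selection step, in Python with illustrative names: the frame of noncertainty states is ordered by DOL region, so taking every step-th unit of cumulative weight from a random start yields a geographically dispersed sample, with each state's selection probability proportional to its weight.

    import random

    def systematic_pps(weights: dict[str, float], n_select: int) -> list[str]:
        """weights: state -> selection weight, in frame order (sorted by DOL
        region to provide the geographic stratification described above)."""
        total = sum(weights.values())
        step = total / n_select
        point = random.uniform(0, step)  # random start within the first step
        picks, cum = [], 0.0
        for state, w in weights.items():
            cum += w
            while point < cum:           # a state is hit once per point it spans
                picks.append(state)
                point += step
        return picks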


After we selected the 25-state sample, we also selected six "replacement" states to be used if "original" states refused to participate in the study. We selected one replacement state for each region using the sampling techniques discussed above. During the state recruitment phase, both PA and VA were initially reluctant to participate; thus, we recruited MD as a replacement state. PA and VA eventually agreed to participate, but we will retain MD in the study because MD provided data and baseline interviewing began there before PA and VA joined the study. In the analysis, however, we will estimate models with and without the MD sample to assess the sensitivity of study findings to the inclusion of this replacement state.


This process yielded the sample of states shown in Table 8. Importantly, the regional distribution of workers in the selected sample of states is very similar to the regional distribution across all states nationwide (Table 8).


Selecting the TAA Samples for the Impact Analysis. We generated self-weighting TAA samples to maximize the precision of the impact estimates for a given sample size of workers. We obtained the sample sizes in each of the selected states using the following formula:



ns = (f × Ns) / ps

where ns is the number of TAA-certified workers selected in state s, Ns is the total number of TAA-certified workers in state s, and ps is the probability that state s was selected (using the figures in column five of Table 7). The term f is the national sampling fraction for the population being sampled, and was selected so that the state samples will sum to about 12,000 for TAA participants in the certified-worker sample, to 12,000 for TAA nonparticipants in the certified-worker sample, and to 12,000 for TRA beneficiaries.


This formula set the sample in each state (ns) so that the probability of selection is f for all program-eligible workers. The total probability that a worker was selected is the probability the state was chosen (ps) times the probability that a person was chosen in the state (ns/Ns).

As an illustration, to obtain a self-weighting sample of 12,000 TAA participants from the certified-worker lists, state sample sizes were 1,174 in NC, 1,114 in CA, 694 in PA, 324 in each of the noncertainty states (including MD), and so on.
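
A minimal sketch of this allocation rule, with hypothetical inputs (the Ns and ps values come from Table 7; the 12,000 target is the national sample size for each population being sampled):

    def allocate(frame: dict[str, tuple[float, float]], target: int) -> dict[str, int]:
        """frame: state -> (Ns, ps). Solves sum over states of f*Ns/ps = target
        for the national sampling fraction f, then returns ns = f*Ns/ps by state."""
        f = target / sum(N / p for N, p in frame.values())
        return {state: round(f * N / p) for state, (N, p) in frame.items()}

Because ps is proportional to state size among the noncertainty states, the ratio Ns/ps, and hence ns, is constant across those states, which is why each noncertainty state contributes the same 324 participants in the illustration above.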


Finally, as discussed, the study design calls for baseline and follow-up telephone surveys with a random subsample of the certified-worker sample and its comparison group. The survey sample was selected by state using stratified random sampling techniques, where strata were formed using gender, age, race/ethnicity, and local area.

TABLE 7

STATE SELECTION PROBABILITIES FOR THE TAA EVALUATION

State   DOL     Average Annual Share of       Estimated Number of       State Selection
        Region  Trade-Affected Workers in     Trade-Affected Workers    Probability Under a
                Certified Firms, FY 2005      in Sampling Period        25-State Design
                to FY 2006 (Percent)a

NC      3       9.78117                       11,624                    1
CA      6       9.53067                       11,326                    1
PA      2       5.78217                        6,872                    1
MI      5       5.69556                        6,769                    1
SC      3       4.85281                        5,767                    1
GA      3       4.78937                        5,692                    1
TN      3       4.58395                        5,448                    1
OH      5       4.45136                        5,290                    1
IL      5       4.26997                        5,074                    1
IN      5       3.97403                        4,723                    1
TX      4       3.61266                        4,293                    1
NY      1       3.54997                        4,219                    1
AL      3       3.04922                        3,624                    1
KY      3       2.55977                        3,042                    1
VA      2       2.55549                        3,037                    1
WI      5       2.36170                        2,807                    0.875942
MO      5       2.33185                        2,771                    0.864871
MA      1       1.92007                        2,282                    0.712144
AR      4       1.86412                        2,215                    0.691392
NJ      1       1.49139                        1,772                    0.553149
OK      4       1.47374                        1,751                    0.546602
MS      3       1.21773                        1,447                    0.451650
MN      5       1.16515                        1,385                    0.432148
CO      4       1.16381                        1,383                    0.431651
IA      5       1.09159                        1,297                    0.404865
OR      6       1.08076                        1,284                    0.400848
FL      3       1.00227                        1,191                    0.371737
NH      1       0.94459                        1,123                    0.350343
MD      2       0.89531                        1,064                    0.332066
WV      2       0.86163                        1,024                    0.319574
RI      1       0.83098                          988                    0.308206
WA      6       0.82463                          980                    0.305851
CT      1       0.71944                          855                    0.266836
AZ      6       0.57570                          684                    0.213524
ME      1       0.50183                          596                    0.186126
VT      1       0.37815                          449                    0.140254
KS      5       0.33184                          394                    0.123078
ID      6       0.24747                          294                    0.091785
UT      4       0.22758                          270                    0.084408
AK      4       0.20343                          242                    0.075451
NV      6       0.19396                          231                    0.071939
NE      5       0.18281                          217                    0.067803
LA      4       0.17836                          212                    0.066153
DE      2       0.16625                          198                    0.061661
SD      4       0.15865                          189                    0.058842
MT      4       0.12002                          143                    0.044515
PR      1       0.09734                          116                    0.036103
HI      6       0.06341                           75                    0.023518
NM      4       0.05145                           61                    0.019083
ND      4       0.04285                           51                    0.015893
WY      4       0                                  0                    0
DC      2       0                                  0                    0

Total           100.0000                      118,840                   25.0000

Source: DOL Petition Data on all Industry Certifications from FY 1999 to the second quarter of FY 2004.

a Figures pertain to the estimated number of trade-affected workers denoted in each petition.



TABLE 8

SELECTED STATES FOR THE TAA EVALUATION, BY REGION

                                                   Distribution of the Number of Workers in
                                                   TAA-Certified Firms, by Region (Percentages)
Original 25-State Sample     Replacement State     25-State Sample         All States

Region 1                     Connecticut           8                       10
  New Yorkc
  New Hampshire
  New Jersey
  Rhode Island

Region 2                     Maryland              10                      10
  Pennsylvaniac
  Virginiac

Region 3                     Mississippi           35                      32
  Alabamac
  Georgiac
  Kentuckyc
  North Carolinac
  South Carolinac
  Tennesseec
  Florida

Region 4                     Utah                  7                       9
  Texasc
  Arkansas
  Colorado

Region 5                     Iowa                  28                      26
  Illinoisc
  Indianac
  Michiganc
  Missouri
  Ohioc
  Wisconsin
  Minnesota

Region 6                     Arizona               12                      13
  Californiac
  Washington

c Denotes certainty state. (Per Table 7 and the text above, Indiana is a certainty state, while Missouri and Wisconsin are noncertainty states.)

b. Estimation Procedures


The plans for the statistical analysis of the data for the process, impact, and benefit-cost analyses were discussed in A16 above.

c. Precision of Estimates


The evaluation will provide a broad range of information on the characteristics of TAA-certified workers, as well as on program impacts for the full sample and for key subgroups defined by participant and program characteristics. Table 9 presents the precision of key estimates for these analyses. The table presents 95 percent confidence intervals for a 50 percent characteristic (the most conservative assumption) for TAA participants and nonparticipants. The table also presents minimum detectable differences (MDDs) between the TAA and comparison groups on quarterly earnings and on a 50 percent characteristic (such as the employment rate or the percentage returning to the pre-layoff job). The MDDs are calculated for participants, nonparticipants, and the combined samples, as well as for estimates based on the records and follow-up interview samples. Notes to the table show our assumptions about the confidence level, power, and reductions in variance due to regression. The precision of the estimates incorporates design effects due to the clustering of states selected for the analysis. Design effects for the MDD calculations are about 1.16 for impacts based on the follow-up interview sample and about 2.9 for participant impacts based on the large records sample.


This design will yield adequate precision both for the descriptive analyses of the demographic and training-related experiences of TAA-certified workers and for examining differences in the mean outcomes of the TAA and comparison groups. For example, for the overall participant sample, we would expect to detect a significant earnings impact if the true program impact were $110 or more per quarter using the administrative records sample, or $236 or more using the survey sample. Because the previous TAA analysis (Corson et al. 1993) estimated TAA impacts on earnings of about $300 per quarter, our design could detect this benchmark impact using either the records or the survey data. In addition, the MDDs are near target levels using the survey data for 50 percent subgroups of states or workers.


The study design also provides sufficient precision for detecting earnings impacts large enough to produce a positive net benefit of the TAA program from both the government's and society's perspectives. TAA program costs are about $12,500 per participant.11 If we assume that 1) TRA benefits are a transfer from taxpayers to program participants (so that these payments do not enter the benefit-cost calculations from society's perspective) and 2) TRA payments represent about 60 percent of program costs, then nontransfer costs are about $5,000 per participant (40 percent of $12,500), and earnings impacts would need to average about $320 per quarter during the follow-up period, offsetting these costs over roughly 16 quarters, for benefits to society to offset costs. Again, an impact of this size can be detected under the sample design.



TABLE 9

MINIMUM DETECTABLE DIFFERENCES (MDDs) AND 95 PERCENT
CONFIDENCE INTERVALS FOR THE TAA EVALUATION

                                                      Minimum Detectable TAA and             95 Percent
                                                      Comparison Group Differences           Confidence Interval
                                                      ------------------------------------   -------------------
                                                      Quarterly        50 Percent            50 Percent
                                                      Earnings         Characteristic        Characteristic
Sample                                                (Dollars)        (Percentage Points)   (Percentage Points)

Records Sample
  TAA Participants                                    110              1.8                   1.5
  TAA Nonparticipants                                 110              1.8                   1.5
  Participants and Nonparticipants                    84               1.4                   1.4
  TAA Participants:
    50 percent subgroup of states                     156              2.6                   2.1
    25 percent subgroup of states                     221              3.7                   3.0
    50 percent subgroup of workers across all states  133              2.2                   1.7
    25 percent subgroup of workers across all states  168              2.8                   2.1

Follow-up Interview Sample
  TAA Participants                                    236              3.9                   2.6
  TAA Nonparticipants (15-Month Sample Only)          322              5.4                   3.5
  Participants and Nonparticipants (15-Month Sample)  237              3.9                   3.2
  TAA Participants:
    50 percent subgroup of states                     333              5.6                   3.7
    25 percent subgroup of states                     471              7.9                   5.2
    50 percent subgroup of workers across all states  323              5.4                   3.5
    25 percent subgroup of workers across all states  449              7.5                   4.7


Note: The MDD calculations assume: (1) a 95 percent confidence level for a one-tailed test, (2) an 80 percent level of power, (3) that the variance of the estimates is reduced by 20 percent owing to the use of regression models, and (4) a standard deviation of $3,000 for quarterly earnings (based on results from Needels et al. 2002, Schochet et al. 2001, Corson et al. 1998, Bloom et al. 1993, and Corson et al. 1993). The MDDs were calculated using the following formula (the confidence intervals were calculated using a similar approach):


MDD = (z0.95 + z0.80) × sqrt{(1 − R2) × Var(difference in mean outcomes between the TAA and comparison groups)},


where R2 is the regression R-squared value, pc (=.78) is the population share in the certainty states, mc (mn) is the sample size for each research group in the certainty (noncertainty) states, sn (=8) is the number of noncertainty states in the sample, f (=.40) is the finite population correction in the noncertainty states, ρ (=.03) is the between-state variance as a percentage of the total variance of the outcomes based on previous studies, and c (=.30) is the correlation between the mean outcomes of TAA and comparison group members within the same state.
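
As a rough cross-check of these figures (not the exact design formula above), the following sketch computes an MDD for a simple two-group comparison, folding the state clustering into a single design effect; z0.95 + z0.80 ≈ 1.645 + 0.84 ≈ 2.49, and the parameter values follow the note above.

    from math import sqrt
    from scipy.stats import norm

    def mdd(sd: float, n_per_group: int, r2: float = 0.20,
            design_effect: float = 1.0, alpha: float = 0.05,
            power: float = 0.80) -> float:
        z = norm.ppf(1 - alpha) + norm.ppf(power)   # one-tailed test
        var_diff = 2 * sd**2 * (1 - r2) * design_effect / n_per_group
        return z * sqrt(var_diff)

    # Survey-based participant impact on quarterly earnings: 1,770 completes
    # per group (Table 6), design effect 1.16, sd = $3,000.
    print(round(mdd(sd=3000, n_per_group=1770, design_effect=1.16)))  # ~242

The result, about $242, is in the neighborhood of the $236 shown in Table 9 for survey-based participant impacts; the exact figure additionally reflects the certainty/noncertainty weighting and the correlation c.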

3.   Describe methods to maximize response rates and to deal with issues of non-response.  The accuracy and reliability of information collected must be shown to be adequate for intended uses.  For collections based on sampling, a special justification must be provided for any collection that will not yield reliable data that can be generalized to the universe studied.


a. Methods for Maximizing Response Rates


Several strategies that have been used to maximize the response rate to the baseline survey (see section A9) will also be used for the 25-month follow-up survey. First, before interviewing begins, an advance letter describing the purpose and sponsorship of the survey will be mailed to potential respondents. This letter will assure potential respondents that the caller is conducting a legitimate research interview and not soliciting donations or selling anything. Letters will be sent about a week before the sample is released to the CATI call scheduler. The letter will request up-to-date contact information and will provide a toll-free call-in number. Importantly, the letter will be sent on DOL letterhead, which has helped increase response rates for the baseline survey.


Second, detailed contact information from the baseline interview will be used to help locate those in the follow-up sample who completed baseline interviews. At the end of the baseline interview, respondents are requested to provide several pieces of contact information. This information, however, will not be available for those who did not complete baseline interviews.


Third, to the extent possible, the contractor will use experienced interviewers, supervisors, locators, and CATI programmers for the 25-month follow-up interview who also worked on the baseline interview. About 8,000 baseline interviews are expected to be completed, and the follow-up survey instrument is very similar to the baseline survey instrument. Thus, most survey staff conducting the 25-month follow-up will have had considerable experience and training conducting interviews with the TAA population and using the survey instrument. These staff will have been thoroughly schooled on data collection procedures, including methods for promoting cooperation among sample members, persuading reluctant respondents to participate, and attempting conversions with respondents who initially refused (except for hostile refusals). Staff who will not have worked on the baseline interview will be selected from the contractor’s experienced pool of interviewers and will be extensively trained. Bilingual interviewers will also be available for conducting interviews in Spanish, and an outside firm used for the baseline interview will be contracted to conduct follow-up interviews in other languages such as Mandarin and Hindi.


Fourth, the contractor will use call scheduling to allow respondents to select the time most convenient for them to be interviewed. The use of CATI will ensure control of sample releases, call scheduling, and questionnaire logic and completeness.


Fifth, the contractor will make extensive use of various on-line databases to try to locate sample members who have moved. For the follow-up interview, the contractor will attempt interviews with both respondents and nonrespondents to the baseline interview, because our experience suggests that interview response rates can be increased using this approach.


Finally, the follow-up survey will be kept simple and short, and will include only questions that are directly related to the intended use of the survey and that respondents can readily answer from their own experience. Moreover, no personally sensitive information will be requested. As discussed, the follow-up interview is a subset of the baseline interview and thus has been thoroughly tested and successfully administered to a large number of sample members.

It is expected that these techniques, combined with the $25 incentive to TAA participants who received that amount for their response to the baseline survey and the $50 incentive offered to all other sample members, will yield a 60-65 percent response rate to the 25-month follow-up interview.


b. Addressing Nonresponse


When the baseline and follow-up surveys of TAA and comparison group members are completed, the contractor will conduct a nonresponse analysis to assess whether the survey sample is representative of the initial population of UI and TAA customers. This analysis will use UI administrative claims and wage record data, which will be available for all sample members. These data include demographic variables (gender, age, race/ethnicity), earnings measures (base period earnings and quarterly earnings from the UI wage records), and UI claim data (weekly benefit amount, maximum benefit amount, weeks collected, dollars collected, participation in reemployment services). If the respondent sample does not appear to be representative of the full UI and TAA populations, we will adjust the sample weights for nonresponse using propensity scoring methods, as sketched below.
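
A minimal sketch of such an adjustment, assuming hypothetical column names: model response status on the administrative variables available for all released sample members, then divide each respondent's base weight by the predicted response propensity.

    import pandas as pd
    import statsmodels.api as sm

    def nonresponse_adjust(df: pd.DataFrame, covs: list[str]) -> pd.Series:
        """df: one row per released sample member, with a 0/1 'responded'
        flag, a 'base_weight' column, and administrative covariates `covs`."""
        X = sm.add_constant(df[covs])
        propensity = sm.Logit(df["responded"], X).fit(disp=0).predict(X)
        # Respondents are weighted up by the inverse of their estimated
        # response propensity; nonrespondents receive weight zero.
        return df["base_weight"] * df["responded"] / propensity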


c. Reliability of Data Collection


The 25-month follow-up questionnaire is very similar to the baseline questionnaire for the study (the only omission is the set of pre-intervention questions). As discussed, the baseline interview will be conducted with about 8,000 sample members and has been thoroughly tested and successfully administered using CATI. The questions were designed to be easily understood by respondents; senior project staff and phone center supervisors have monitored hundreds of interviews to ensure that questions and response categories are appropriate. The questionnaire drew extensively on questionnaires developed for other DOL studies, including the Trade Adjustment Assistance Survey (OMB number 1205-0306; expiration date 3/31/1992), the Individual Training Account Experiment Survey (OMB number 1205-0441; expiration date 10/31/2006), and the National Job Corps Study Thirty-Month Follow-Up Interview (OMB number 1205-0360; expiration date 9/30/1998).


The use of CATI to conduct the follow-up survey will also help ensure the reliability of the data. CATI controls question branching (reducing item nonresponse due to interviewer error), modifies wording (providing memory aids and probes and personalizing questions), and constructs complex sequences that are not possible, or are less accurate, in hard-copy surveys. Probes, verifications, and consistency checks are built into the system to standardize procedures. These procedures ensure the reliability of both the data collection methods and the data collected through those methods. CATI also allows contractor staff to monitor each interviewer's work using silent call-monitoring equipment and video monitors that display the interviewer's screen. Finally, the CATI program for the follow-up survey instrument will be developed from the thoroughly tested CATI program for the baseline survey instrument, which will further increase the reliability of data collection.

4. Describe any tests of procedures or methods to be undertaken.  Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility.  Tests must be approved if they call for answers to identical questions from 10 or more respondents.  A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Nine pre-tests of the follow-up survey will be conducted with TAA participants in mid-2010, before follow-up interviewing begins. The pre-tests will assess the content and wording of individual questions, the organization and format of the questionnaire, respondent burden time, and potential sources of response error. The pretest results will be used to modify the questionnaire. We expect only minor changes, however, because the follow-up survey instrument is very similar to the baseline instrument, which is being administered to nearly 8,000 sample members.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The following persons contributed to, reviewed, and/or approved the design, instrumentation, and sampling plan:


Name                      Affiliation                          Telephone Number

Dr. Ronald D’Amico        Social Policy Research Associates    (510) 763-1499
Dr. Peter Schochet        Mathematica Policy Research          (609) 279-6887
Richard West              Social Policy Research Associates    (510) 763-1499
Dr. Frank Potter          Mathematica Policy Research          (609) 936-2799
Dr. Sheena McConnell      Mathematica Policy Research          (202) 484-4518






REFERENCES

Abadie, A. “Bootstrap Tests for Distributional Treatment Effects in Instrumental Variable Models.” Harvard University Working Paper. September 2005.


Abadie, A. and G. Imbens. “Large Sample Properties of Matching Estimators for Average Treatment Effects.” National Bureau of Economic Research Working Paper. March 2005.


Agodini, R. and M. Dynarski. “Are Experiments the Only Option? A Look at Dropout Prevention Programs.” Review of Economics and Statistics, vol. 86, February 2004.


Amemiya, T. Advanced Econometrics. Cambridge, MA: Harvard University Press, 1985.


Baumgartner, R. and P. Rathbun. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Norfolk, VA, 1997.


Berlin, M. et al. “An Experiment in Monetary Incentives.” In Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 1992.


Bloom, H., L. Orr, G. Cave, S. Bell, and F. Doolittle. The National JTPA Study: Title IIA Impacts on Earnings and Employment. Bethesda, MD: Abt Associates, 1993.


Burghardt, J. and J. Homrighausen. National Job Corps Study: Survey Results. Princeton, NJ: Mathematica Policy Research, Inc., 2002.


Church, A.H. “Estimating the Effects of Incentives on Mail Response Rates: A Meta-Analysis.” Public Opinion Quarterly, vol. 57, 1993, pp. 62-79.


Cochran, W. Sampling Techniques. New York: John Wiley and Sons, 1977.


Corson, W., P. Decker, P. Gleason, and W. Nicholson. International Trade and Worker Dislocation: Evaluation of the Trade Adjustment Assistance Program. Princeton, NJ: Mathematica Policy Research, Inc., April 1993.


Corson, W., K. Needels, and W. Nicholson. Emergency Unemployment Compensation: The 1990s Experience. Unemployment Insurance Occasional Paper 98-1. Washington, DC: U.S. Department of Labor, Employment and Training Administration, 1998.


Dehejia, R.H., and S. Wahba. “Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs.” Journal of the American Statistical Association, vol. 94, no. 448, 1999.


DuMouchel, W.H., and G. Duncan. “Using Sample Survey Weights in Multiple Regression Analyses of Stratified Samples.” Journal of the American Statistical Association, vol. 78, no. 383, 1983, pp. 535-542.


Glazerman, S., D. Levy, and D. Myers. “Nonexperimental Replications of Social Experiments: A Systematic Review.” Princeton, NJ: Mathematica Policy Research, Inc., September 2002.
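

Heckman, J.J., and V.J. Hotz. “Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training.” Journal of the American Statistical Association, vol. 84, no. 408, 1989, pp. 862-874.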


James, J. and R. Bolstein. “The Effect of Monetary Incentives and Follow-up Mailings on the Response Rate and Response Quality in Mail Surveys.” Public Opinion Quarterly, vol. 54, 1990.


Kish, L. Survey Sampling. New York: John Wiley and Sons, 1965.


Mack, S., V. Huggins, D. Keathley, and M. Sundukchi. “Do Monetary Incentives Improve Response Rates in the Survey of Income and Program Participation?” Proceedings of the Section on Survey Methodology, American Statistical Association, 1998, pp. 529-534.


Maddala, G.S. Limited Dependent and Qualitative Variables in Econometrics. Cambridge, UK: Cambridge University Press, 1983.


Markesich, J. and M.D. Kovac. “The Effects of Differential Incentives on Completion Rates: A Telephone Survey Experiment with Low-Income Respondents.” Presented at the Annual Conference of the American Association for Public Opinion Research, Nashville, TN, May 16, 2003.


Martin, E., D. Abreu, and F. Winters. “Money and Motive: Results of an Incentive Experiment in the Survey of Income and Program Participation.” Unpublished manuscript. Washington, DC: U.S. Bureau of the Census, 2000.


Murray, D. Design and Analysis of Group-Randomized Trials. Oxford: Oxford University Press, 1998.


Needels, K., W. Corson, and W. Nicholson. Left Out of the Boom Economy: UI Recipients in the Late 1990s. ETA Occasional Paper 2002-2003. Washington, DC: U.S. Department of Labor, Employment and Training Administration, May 2002.


Perez-Johnson et al. The Effects of Customer Choice: First Findings from the Individual Training Account Experiment. Princeton, NJ: Mathematica Policy Research, Inc., December 2004.


Rosenbaum, P., and D. Rubin. “The Central Role of the Propensity Score in Observational Studies for Causal Effects.” Biometrika, vol. 70, 1983.


Rubin, D. “Use of Propensity Scores for Tobacco Litigation.” Harvard University Working Paper, 2001.


Schirm, A., and N. Rodriguez-Planas. The Quantum Opportunity Program Demonstration for Youth: Initial Post-Intervention Impacts. Washington, DC: Mathematica Policy Research, Inc., June 2004. (Also published as ETA Occasional Paper 2004-07, Washington, DC: U.S. Department of Labor, Employment and Training Administration.)


Schochet, P., J. Burghardt, and S. Glazerman. National Job Corps Study: The Impacts of Job Corps on Participants’ Employment and Related Outcomes. Princeton, NJ: Mathematica Policy Research, Inc., June 2001.


Shettle, C. and G. Mooney. “Monetary Incentives in Government Surveys.” Journal of Official Statistics, vol. 15, 1999, pp. 231-250.


Singer, E., R. M. Groves, and A.D. Corning. “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation.” Public Opinion Quarterly, vol. 63, 1999, pp. 251-60.


Singer, E. and R. A. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues. Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs. Edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro. Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press, 2002, pp. 105-28.


Smith, J. and P. Todd. “Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators?” Working Paper, 2000.


Subcommittee on Disclosure Limitation Methodology, Office of Management and Budget. Report on Statistical Disclosure Limitation Methodology. Statistical Policy Working Paper 22, 1994.


1 See Report on Statistical Disclosure Limitation Methodology, Subcommittee on Disclosure Limitation Methodology, Statistical Policy Office of the Office of Management and Budget, 1994.

2 A standard rule of thumb is that units of geography should be reported at a high enough level of aggregation such that there are no fewer than 100,000 individuals in the sampling frame in that unit. No single state would meet this criterion in this study.

3 The average wage for UI recipients reported in a recent study of this population (Needels et al 2002) is $16 per hour.

4 The study will also use this model to test the credibility of our comparison group design. By performing the propensity score matching using characteristics measured several periods before displacement, we can estimate the equation using “outcomes” measured prior to displacement. If the matching process was successful, the coefficient on the TAA indicator should be insignificantly different from zero.

5 The contractor will also estimate the regression models without the sample weights to examine the robustness of study findings, and because there is some controversy in the literature about the appropriateness of using weights when estimating multivariate regression models in the absence of choice-based sampling.

6 TAA nonparticipants are those on worker lists supplied by employers as being covered by a certification who never became TAA participants; they are assumed to be TAA-eligible by virtue of being on the worker list, even though their TAA eligibility has not been conclusively established.

7 For instance, the study will estimate a variant of equation (1) that adds interactions between the TAA indicator and a set of indicator variables Sj, where Sj equals 1 for TAA group members in service receipt category j and their matched comparison group members, and 0 for other TAA and comparison group members. In this model, the sum of the coefficient on the TAA indicator and the coefficient on the category-j interaction term represents the program impact for TAA group members in service category j relative to their matched comparison group members, holding constant the effects of other services received by TAA group members as well as their baseline characteristics.

8 Workers covered by a certification include those laid off between one year prior to the petition filing date and two years after the petition certification date (which translates into a three- to three-and-one-half-year layoff period). Thus, for the later certifications, our sample excludes workers laid off many months after the certification date, because these workers had not been laid off at the time we collected UI records and selected our samples.

9 Data on the estimated numbers of trade-affected workers were capped at 1,000 workers to remove the effect of a few outliers. This truncation affected less than 0.5 percent of all petitions.

10 The nine states with initial weights greater than 1 were chosen with certainty, because these states had more than 1/25 of the total weight. After removing these states, we also chose six additional states with certainty, because they had more than 1/16 (1 ÷ [25–9]) of the remaining total weight.

11 Total recent outlays for TAA are about $800-$900 million per year, and about 60,000-80,000 workers participate in the program each year.


