Plan for Evaluation of the Trade Adjustment Assistance Program

OMB: 1205-0460


SUPPORTING STATEMENT FOR

PAPERWORK REDUCTION ACT OF 1995 SUBMISSION

INFORMATION COLLECTION PLAN FOR

EVALUATION OF THE TRADE ADJUSTMENT ASSISTANCE PROGRAM


1. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection.  Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.


The information to be collected is for an evaluation of the Trade Adjustment Assistance (TAA) program. The evaluation will examine program impacts (using a comparison group methodology) as well as programmatic and administrative practices that may have a bearing on the performance of the program. The collection of information will be conducted via: 1) a baseline and follow-up survey of individual TAA participants and comparison group members, 2) administrative records from the TAA and Unemployment Insurance (UI) systems, 3) semi-structured interviews with state- and local-level TAA, Workforce Investment Act (WIA), and rapid response staff, and 4) a survey of TAA Coordinators in all local areas. These items can be found in Appendices A through E.


Begun in January 2004, the evaluation was undertaken in response both to the Office of Management and Budget’s (OMB) Program Assessment Rating Tool review and to the passage of new legislation in 2002 reauthorizing and amending TAA. The TAA evaluation is thus intended to provide information on areas of OMB concern and to generate high-quality information that will be of use in the development of legislation, budget proposals, regulations, administrative guidance, and technical assistance. Section 172 of WIA (attached) is the authority by which the Employment and Training Administration (ETA) will collect the information proposed in this evaluation.


Since 1962, TAA has represented a federal commitment to compensate workers who have suffered a trade-related job loss, and to provide them with services that help them adjust to changes in market circumstances. The current TAA program provides training, income support, and other reemployment and supportive services to workers who lose their jobs or have their work hours or salary reduced because of increased imports or shifts in production to foreign countries.


The Trade Adjustment Assistance Reform Act of 2002 (Pub. L. 107-210) reauthorized the TAA program for five years and amended the prior law in a number of ways. For example, it consolidated TAA and North American Free Trade Agreement Transitional Adjustment Assistance programs into a single program, broadened eligibility to include secondarily affected workers, and created two new benefits: the Health Coverage Tax Credit (HCTC) and Alternative TAA for eligible workers 50 years old and above. The new law also included provisions designed to change how the program is administered, such as the requirement that states must ensure that rapid response assistance as well as appropriate core and intensive services are made available.


Given these recent program changes, the size of the TAA program, and its central role in federal efforts to help and compensate trade-affected workers, a rigorous study of current TAA operations and their effects on participants’ employment-related and other outcomes is an important priority. The most recent comprehensive study of the TAA program (Corson et al. 1993) was conducted using samples from the late 1980s. However, because of changes in the TAA program, the TAA caseload, and labor market conditions, results from that study may no longer apply to the TAA program as it operates today.

The TAA evaluation has two main parts: an impact study and a process study. The impact study is structured to address the following research questions that are potentially of interest to policy makers:


  • What is the overall impact of TAA on participants’ employment-related outcomes?


  • Do program impacts differ for subgroups of participants defined by their demographic characteristics (such as age, education level, pre-layoff wage, and industry)?


  • What are program impacts for participants who receive specific TAA services and benefits (such as those who receive training, the HCTC, and Alternative TAA)?


  • Do impacts vary for participants in states and local areas with different program features (such as the extent of program integration within One-Stop Career Center Systems and the ability of the TAA program to deliver services in a timely manner)?


  • How do program impacts differ depending on TAA petition features (such as type of petitions, number of affected workers, certification determination processing time, and industry)?


  • What are program take-up rates for all potentially eligible workers and for subgroups of potentially eligible workers?


To meet these analysis objectives, the evaluation will use a comparison group methodology where TAA and comparison group samples will be selected using a two-stage, stratified sample design. In the first stage, 25 states were randomly selected in geographic strata with probabilities proportional to the expected number of TAA participants in the state (see section B below). Both the impact and the process analyses will be conducted in these states so that the study can link data sources and findings from these analyses.
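To illustrate the first-stage selection, the following sketch shows one common way to draw a probability-proportional-to-size (PPS) sample of states within a geographic stratum. The state names, participant counts, and the use of systematic selection are illustrative assumptions, not the study's actual frame or algorithm.

import random

def pps_systematic_sample(frame, n_select, seed=12345):
    """Select n_select units with probability proportional to size,
    using systematic PPS sampling from a randomly ordered frame."""
    rng = random.Random(seed)
    units = list(frame.items())
    rng.shuffle(units)                      # randomize frame order
    total = sum(size for _, size in units)
    interval = total / n_select             # systematic sampling interval
    start = rng.uniform(0, interval)
    points = [start + i * interval for i in range(n_select)]
    selected, cum = [], 0.0
    it = iter(units)
    name, size = next(it)
    for p in points:
        while cum + size < p:               # walk through cumulative sizes
            cum += size
            name, size = next(it)
        selected.append(name)               # unit whose size range covers p
    # Note: units larger than the interval would be certainty selections
    # and are handled separately in a production design.
    return selected

# Hypothetical stratum: expected TAA participants per state
stratum = {"State A": 9000, "State B": 4000, "State C": 2500,
           "State D": 1500, "State E": 800}
print(pps_systematic_sample(stratum, n_select=2))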


Two samples of TAA and comparison group workers will be selected from the 25 states: 1) workers potentially eligible for TAA, sampled from lists of workers that certified firms provide to state agencies, and 2) workers who received a first Trade Readjustment Allowance (TRA) payment after exhausting their UI benefits. A matched comparison sample of UI claimants will be obtained for each of these “treatment” groups using UI claims data in the same states. Propensity scoring methods will be used to select the comparison samples. The research sample will consist of 24,000 workers in the certified-worker sample, 12,000 in the TRA-beneficiary sample, and 72,000 in the comparison sample. The study will first use UI claims data to select a comparison group sample that is twice as large as the TAA sample, and then refine the comparison sample by re-matching comparison to TAA group members using richer matching variables from the baseline interview data.
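As a rough illustration of the propensity-scoring step, the sketch below fits a logistic regression modeling membership in the TAA sample and then selects, for each TAA worker, the two UI claimants with the closest estimated scores. All variable names and data are hypothetical stand-ins for the matching variables listed in Table 1A; the study's actual estimation and matching procedures may differ.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_taa, n_ui = 200, 1000
# Hypothetical matching variables: age, base-period earnings, female indicator
X_taa = np.column_stack([rng.normal(45, 10, n_taa),
                         rng.normal(28_000, 9_000, n_taa),
                         rng.integers(0, 2, n_taa)])
X_ui = np.column_stack([rng.normal(40, 12, n_ui),
                        rng.normal(31_000, 11_000, n_ui),
                        rng.integers(0, 2, n_ui)])
X = np.vstack([X_taa, X_ui])
d = np.concatenate([np.ones(n_taa), np.zeros(n_ui)])   # 1 = TAA sample

# Model Pr(TAA membership | covariates) and score every worker
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, d)
scores = model.predict_proba(X)[:, 1]
s_taa, s_ui = scores[:n_taa], scores[n_taa:]

# Nearest-neighbor match on the score, two comparisons per TAA worker
# (the design starts with a comparison sample twice the TAA sample's size)
matches = {i: np.argsort(np.abs(s_ui - s))[:2] for i, s in enumerate(s_taa)}
print("TAA worker 0 matched to UI claimants:", matches[0])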


Program impacts will be estimated by comparing the average outcomes of those in the treatment and comparison groups. The evaluation will use key outcome measures for the impact analysis from two data sources: 1) administrative UI claims and earnings data and 2) telephone interviews conducted with a random subset of sample members at baseline and 15 and 30 months later. The study will examine impacts on the following key outcomes that are hypothesized to be affected by TAA participation: 1) reemployment services; 2) education and training; 3) employment and earnings; 4) receipt of UI benefits; 5) receipt of other welfare benefits; 6) non-labor market outcomes, such as health status, health insurance coverage, and mobility; and 7) changes in quality of life following job loss, in terms of earnings, employment, and non-labor market outcomes compared to the pre-separation period.
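A minimal sketch of the basic impact calculation, using made-up earnings values: the impact estimate is the treatment-comparison difference in mean outcomes, here tested with a two-sample t-test. Nothing below reflects actual study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
earn_taa = rng.normal(24_000, 8_000, 500)    # hypothetical follow-up earnings, TAA group
earn_comp = rng.normal(23_000, 8_000, 1000)  # hypothetical matched comparison group

impact = earn_taa.mean() - earn_comp.mean()  # difference in average outcomes
t_stat, p_value = stats.ttest_ind(earn_taa, earn_comp, equal_var=False)
print(f"estimated impact on earnings: ${impact:,.0f} (p = {p_value:.3f})")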


A benefit-cost analysis will also be conducted. It will examine benefits and costs from different perspectives (such as those of society and participants) and will provide information on how the benefits and costs are distributed among the different groups. The measured benefits will fall into three categories: 1) benefits of increased output resulting from the additional productivity of TAA participants; 2) benefits or costs from changes in the receipt of UI benefits; and 3) benefits from the reduced use of other programs and services (such as non-TAA-funded education and training services and public assistance benefits). Program costs will include TRA benefits paid to program participants; training, relocation, and job search allowances paid to program participants; training-related costs; and administrative program costs. Data for the benefit-cost analysis will come from interviews with the study sample; process analysis site visits; TAA cost reports; federal and state educational, training, and welfare agencies; and existing data from established databases and surveys.


A process study will be conducted to understand the programmatic services, management practices, and institutional structures of TAA and of the other programs and funding streams that serve TAA-certified workers and TAA participants. Site visits will be conducted in the same states as the impact study, so that the process study can provide key information for interpreting the impact study findings; an internet/mail survey of all local TAA coordinators will also be conducted. In addition, the process study findings will be related to estimates of impacts for subgroups defined by key state and local area program characteristics and features. Findings will also be used to explore how to improve TAA operations and services.


Information will be gathered from interviews with state and local staff during five rounds of site visits, from a survey of local TAA administrators, and from information in the Trade Act Participant Report (TAPR) on training and other services. Similar service data for both co-enrolled TAA participants and comparison group members will be obtained from the Workforce Investment Act Standardized Record Data (WIASRD).


Data sources are described below. Tables 1, 1A, 1B, 1C, and 1D also display study questions and outcome measures in relation to data elements and sources. The data sources include:


TAA Petition Data. These data contain information on all petitions filed by applicants (such as firms, workers, unions, or TAA program staff) that DOL uses to make TAA certification determination decisions. These data were used to develop the frame for selecting states for the evaluation, because they contain information on the estimated number of workers affected by the certification (see B2 below). They will also provide descriptive information on certification rates and on the types of industries that are certified, and will be used to define subgroups by petition features in the impact analysis.


Certified Worker Lists. The universe from which the study will select the certified-worker sample will be obtained from lists, provided by certified firms to state agencies, of workers laid off during the TAA certification period. Because states are required to notify workers in writing about their potential program eligibility, these lists will contain identifying and contact information. The identifying information will be used to match workers in the lists to the UI claims data to identify those who received UI benefits, and the contact information will be used to locate sample members for interviews.


UI and TRA Claims Data. These data will be used in the evaluation in several important ways. First, the data will be used to define the sample of TRA beneficiaries. Second, the data will be used to define the frame from which comparison groups will be selected, and will provide the data for matching potential comparison group to TAA members. These same matching variables will also be used to define key subgroups for which subgroup impacts will be estimated. Third, the data will provide information on key outcome measures for the impact analysis concerning the number of weeks and dollar amounts of UI benefits received during the follow-up period. Finally, the UI claims data will contain contact information that will be needed to locate TAA and comparison group sample members for interviews.


UI Wage Records. UI wage records will be used to measure earnings during the follow-up period. These data provide an alternative to the earnings data collected in the survey, and will be available for the full sample rather than only for the much smaller survey sample.


TAA and WIA Service Use and Training Data. The TAPR and WIASRD files will be used in the descriptive analysis to describe the training experiences of all TAA participants and their use of other TAA-funded services (such as job search and job relocation allowances). They will also be used to describe TAA-funded training provided through WIA-related services.


Baseline and Follow-up Survey Data. Because the administrative records do not provide sufficient detail for a full examination of a number of key evaluation questions, the study will also rely heavily on survey data. Survey data will provide detailed information, consistent across states, on reemployment and training services received from TAA and other sources. The survey data will also provide data on job characteristics (such as hourly wages, available fringe benefits, and occupations) that are not captured in the UI wage records. The follow-up data will also provide information on other key outcome measures, such as overall health status, health insurance coverage, and the receipt of public assistance. Finally, the survey data will provide baseline characteristics needed for re-matching comparison to TAA sample members, for defining key population subgroups, and for constructing control variables for the regression models.


Survey of Local TAA Officials. Data from this internet/mail survey will provide a national picture of the services and administration of TAA at the local level.

TABLE 1
TAA Evaluation Research Questions and Data Sources

Research Question: How does the TAA program operate, and what are challenges to implementation and operation?
Study Component: Process Analysis
Data Sources: In-person interviews with state and local TAA staff during five rounds of site visits; mail survey of TAA Coordinators in all local areas; TAPR and WIASRD administrative records data

Research Question: What is the overall impact of TAA on participants’ employment-related outcomes?
Study Component: Overall Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Matching variables used to select comparison group: Baseline interviews; UI claims data; published local-area employment-related statistics
  Control variables used in regression models to estimate impacts: Baseline interviews and UI claims data
  Data items shown in Table 1A (matching and control variables) and Table 1B (outcome measures)

Research Question: Do program impacts differ for subgroups of participants defined by their demographic characteristics?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Baseline interviews and UI claims data
  Data items shown in Table 1C

Research Question: How do program impacts differ depending on TAA petition features?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Petition data
  Data items shown in Table 1C

Research Question: What are program impacts for participants who receive specific TAA services and benefits?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Baseline interviews and TAPR data
  Data items shown in Table 1C

Research Question: Do impacts vary for participants in states and local areas with different program features?
Study Component: Subgroup Impacts
Data Sources:
  Outcome measures: Baseline and follow-up interviews; UI earnings data; UI claims data; TAPR and WIASRD data
  Subgroup variables: Baseline interviews and process analysis data
  Data items shown in Table 1C

Research Question: Is TAA cost-effective from the perspective of society as a whole?
Study Component: Benefit-Cost Analysis
Data Sources: Various sources; data items and data sources shown in Table 1D

TABLE 1A
Data Sources to Obtain Matched Comparison Sample

Initial Matching Variables

Demographic Information
  Gender: UI Claims
  Age: UI Claims
  Race/ethnicity: UI Claims

Job Characteristics
  Base-period earnings: UI Claims; UI Wage Records
  North American Industry Classification System (NAICS) of main base-period employer: UI Claims

UI Claim and Benefit Data
  Benefit year begin date: UI Claims
  First claim week begin date: UI Claims
  Claim type: UI Claims
  Maximum benefit amount (MBA): UI Claims
  Weekly benefit amount (WBA): UI Claims

Profiling
  Claimant placed in WPRS selection pool: UI Claims
  Profiling score (if available): UI Claims
  Profiling referral to reemployment services: UI Claims

Local Labor Market Information in County of Residence
  Population density: U.S. Bureau of Census
  Population growth: U.S. Bureau of Census
  Unemployment rate and volatility: U.S. Bureau of Labor Statistics
  Total county employment: U.S. Bureau of Labor Statistics
  Poverty rate: U.S. Bureau of Census
  Percentage of county land in farming: U.S. Department of Agriculture

Additional Variables for Re-matching After Conducting Baseline Interview (a)

Demographic Information
  Highest diploma or degree received: Baseline interview
  Native language and limited English proficiency: Baseline interview
  Household size: Baseline interview
  Number of children: Baseline interview
  Health status: Baseline interview
  Marital status and spouse employment: Baseline interview

Characteristics of Pre-UI Job
  Occupation: Baseline interview
  Tenure: Baseline interview
  Hours worked per week: Baseline interview
  Hourly wage: Baseline interview
  Available fringe benefits: Baseline interview
  Reasons left job: Baseline interview
  Union membership: Baseline interview
  Received severance pay: Baseline interview
  Looked for work after job ended: Baseline interview
  Expected and actual recall status: Baseline interview

Employment Experiences During the Previous Three Years (a)
  Number of jobs held in the previous three years: Baseline interview
  Total earnings in the prior year: Baseline interview; UI Wage Records

Other Income
  In the past year, whether received:
    Food Stamps: Baseline interview
    Cash assistance from TANF, Supplemental Security Income (SSI), Social Security Retirement, Disability, Survivors Benefits (SSA), or General Assistance (GA): Baseline interview
  Total household income in the previous calendar year: Baseline interview
  Owned home, rented, or lived in public housing: Baseline interview
  Covered by health insurance: Baseline interview

Control Variables Used in Regression Models to Estimate Program Impacts
  Same as the matching variables listed above: UI Claims; Baseline interview; Published local-area and employment-related statistics

(a) Data items pertain to the period before the worker got laid off from the job that led to the receipt of UI benefits.


TABLE 1B
Data Sources to Measure Overall Impacts

Reemployment Service Receipt
  Receipt of rapid response services prior to job layoff, types of services received, and who provided them: Follow-up interviews; WIASRD
  Whether reemployment services were received after job loss: Follow-up interviews
  Types of reemployment services received (such as job search assistance, job referrals, help with resume, information on how to change careers, career assessment, occupations in demand, information on education and training programs, whether received counseling about training options): Follow-up interviews; WIASRD
  Main place where reemployment services were received: Follow-up interviews
  Receipt of job search, relocation, and transportation allowances: Follow-up interviews
  Whether received a letter stating that participation in services was mandatory to receive UI benefits: Follow-up interviews
  Whether services were helpful in finding a job or identifying training: Follow-up interviews

Education and Training Services
  Whether participated in any education and training programs: Follow-up interviews; TAPR; WIASRD
  Reasons for nonparticipation: Follow-up interviews
  Number of programs: Follow-up interviews
  Hours spent in education and training: Follow-up interviews
  Type of program (type of skills training or general education program): Follow-up interviews; TAPR; WIASRD
  Place where received education or training: Follow-up interviews
  Cost of program, funding sources, and out-of-pocket costs: Follow-up interviews
  Whether and when completed program: Follow-up interviews; TAPR
  Whether received a certificate or degree: Follow-up interviews
  Sources of income support while in program: Follow-up interviews
  Satisfaction with program: Follow-up interviews
  Highest diploma or degree received: Follow-up interviews

Overall Employment and Earnings
  Labor force status: Follow-up interviews
  Employed, overall and by period: Follow-up interviews; UI wage records; TAPR; WIASRD
  Weeks employed, overall and by period: Follow-up interviews
  Hours employed, overall and by period: Follow-up interviews
  Earnings, overall and by period: Follow-up interviews; UI wage records; TAPR; WIASRD
  Number of jobs: Follow-up interviews
  Ratio of weeks employed per year, post-displacement to pre-displacement, overall and by period: Follow-up interviews
  Ratio of earnings per year, post-displacement to pre-displacement, overall and by period: Follow-up interviews; UI wage records; TAPR; WIASRD

Job Characteristics
  Occupation, industry, and type of employer: Follow-up interviews
  How found job: Follow-up interviews
  Whether recalled from former employer: Follow-up interviews; TAPR
  Hours worked per week: Follow-up interviews
  Hourly wage: Follow-up interviews
  Available fringe benefits (health, paid vacation, paid holidays, paid sick leave, retirement): Follow-up interviews
  Union membership: Follow-up interviews
  Reasons left job: Follow-up interviews
  Looked for work after job ended: Follow-up interviews
  Ratio of hours worked per week, post-displacement to pre-displacement: Follow-up interviews
  Ratio of hourly wage, post-displacement to pre-displacement: Follow-up interviews
  Change in the availability of fringe benefits, post-displacement to pre-displacement: Follow-up interviews

Other Income
  Total amount received:
    UI benefits: UI Claims
    Pension benefits: Follow-up interviews
    Cash assistance from TANF, Supplemental Security Income (SSI), Social Security Retirement, Disability, Survivors Benefits (SSA), or General Assistance (GA): Follow-up interviews
    Food Stamps: Follow-up interviews
  Total household income: Follow-up interviews
  Ratio of total household income in the past year, post-displacement to pre-displacement: Follow-up interviews
  Owned home, rented, or lived in public housing: Follow-up interviews

Health and Health Insurance
  Health status: Follow-up interviews
  Type of health problems and how long had problem: Follow-up interviews
  Time covered by health insurance: Follow-up interviews
  Main type of health insurance: Follow-up interviews
  Out-of-pocket costs for health insurance: Follow-up interviews
  Change in health and health insurance status, post-displacement to pre-displacement: Follow-up interviews

Marriage, Children, and Mobility
  Marital status and spouse employment: Follow-up interviews
  Household size: Follow-up interviews
  Number of children: Follow-up interviews
  Number of states lived in: Follow-up interviews
  Change in marital status and spouse employment, post-displacement to pre-displacement: Follow-up interviews

(a) Data items pertain to the period before the worker got laid off from the job that led to the receipt of UI benefits.

TABLE 1C
Data Sources to Measure Subgroup Impacts Defined by Worker Characteristics, TAA Program Experiences, and TAA Program Features

Outcome Measures
  Same as Table 1B

Subgroups Based on Worker Characteristics at the Time of Job Layoff
  Age: UI Claims; Baseline interview
  Race and Ethnicity: UI Claims; Baseline interview
  Gender: UI Claims; Baseline interview
  English Proficiency: Baseline interview
  Education Level: Baseline interview
  Health Status and Health Insurance Coverage: Baseline interview
  Poverty Status: Baseline interview
  Marital Status and Spouse Employment: Baseline interview
  Whether Profiled for UI Services: UI Claims; Baseline interview
  Industry of Pre-layoff Job: UI Claims; Baseline interview
  Full-time Work Status: Baseline interview
  Pre-layoff Earnings Level: UI Claims; Baseline interview
  Available Fringe Benefits on Job: Baseline interview
  Likely Job Recall Status: Baseline interview
  Region: UI Claims; Baseline interview
  Rural/Urban Status: UI Claims; Baseline interview
  Local Unemployment Rate: Published local-area statistics

Subgroups Based on TAA Participants’ Program Experiences
  Extent of Notification About TAA Services: Interviews
  Types of TAA-Related Reemployment Services Received: Interviews; TAPR
  Participation in TAA Training, and Types of Training Received: Interviews; TAPR
  Training Program Completion Status: Interviews; TAPR
  Training Waiver Status: TAPR
  TRA Benefit Receipt: UI Claims
  Received a Job Search/Relocation/Travel Allowance: Interviews; TAPR
  Whether Co-Enrolled in WIA: WIASRD
  Received a Health Coverage Tax Credit: Interviews
  Received a Wage Subsidy as Part of the ATAA Program (for those 50 and older): Interviews

Subgroups Based on TAA Petition Features
  Type of petitioner (worker, firm, other): Petition
  Number of affected workers: Petition
  Certification determination processing time: Petition
  Industry for the article produced by firm: Petition

Subgroups Based on TAA Program Features
  State Performance Level: TAA National Office
  State TAA Funding Levels per Participant: TAA cost reports
  Number of TAA Participants in State: TAPR
  Proportion of Participants Who Receive Training: TAPR; Interviews
  Proportion Who Receive TRA Benefits: UI Claims
  Staff Experience Levels: Site visits; Local area survey
  Extent of Linkages of the TAA Program with One-Stop Centers: Site visits; Local area survey
  Extent of State Versus Local Control in Setting Policies and Procedures: Site visits; Local area survey
  Timeliness of Rapid Response Services: Site visits; Local area survey
  Quality of the MIS: Site visits; Local area survey

TABLE 1D
Data Sources for the Benefit-Cost Analysis

Benefits
  Output: Baseline and follow-up interviews; Published sources on fringe benefits and effective tax rates
  Reduced Use of Other Programs and Services
    Other training-related programs: Baseline and follow-up interviews; Published sources on costs of education and training programs
    Public assistance (other than UI): Baseline and follow-up interviews; Published sources on administrative costs of transfer programs
  Value of Free Trade: Review of literature

Costs
  Receipt of UI Benefits: Baseline and follow-up interviews; Published sources on administrative costs of transfer programs
  Program Costs
    TRA payments: UI Claims
    Allowances (such as job search, relocation, transportation, and subsistence): TAA Cost Reports
    Training costs: TAA Cost Reports
    Administrative costs: TAA Cost Reports

2. Indicate how, by whom, and for what purpose the information is to be used.  Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.


The information to be collected will be used to understand and analyze the impacts of the program overall and for different target groups, and to understand how the program works in terms of services, administrative practices, and organizational structure. The information will be used by policy makers in the Department of Labor, other parts of the Administration, and the Congress in the formulation of legislative and regulatory policy, as well as for determining appropriate technical assistance to improve the operation of the TAA program. Since this is a new collection, there has been no use of the information yet.


3. Describe whether, and to what extent, the collection of information involves use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection.  Also describe any consideration of using information technology to reduce burden.


Computer Assisted Telephone Interviewing (CATI) will be used to conduct interviews for the survey of TAA and comparison group members. CATI was selected because telephone interviews are more cost-effective and impose a lower burden on respondents than in-person interviews. CATI is more cost-effective than paper-and-pencil interviewing for many reasons, including the fact that CATI programs accept only valid responses and can be programmed to check for logical consistency across answers. Interviewers are thus able to correct errors during the interview, eliminating the need to call back respondents to obtain missing data. Also, calls will be made through an auto-dialer, linked to the CATI system, which virtually eliminates dialing error. The automated call scheduler will simplify scheduling and rescheduling of calls to respondents at their convenience and can assign cases to specific interviewers, for example, those who are fluent in Spanish.
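The kinds of checks described here can be pictured with a small sketch; the items, accepted ranges, and consistency rule below are invented examples, not the study's actual CATI specifications.

def validate_cati_responses(resp):
    """Return a list of problems for the interviewer to resolve on the spot."""
    problems = []
    # Range check: accept only valid values for a single item
    if not 0 <= resp["hours_per_week"] <= 80:
        problems.append("hours per week outside the accepted range")
    # Logical-consistency check across answers
    if resp["employed"] == "no" and resp["hours_per_week"] > 0:
        problems.append("reports work hours but says not employed")
    return problems

print(validate_cati_responses({"employed": "no", "hours_per_week": 40}))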


The local area survey of TAA coordinators will be offered in an internet-based version in addition to a mail version. It is anticipated that 15 percent of respondents will utilize the electronic version of this survey, which will permit electronic submission of responses and efficient, low-error inputting of responses into the database.


4.  Describe efforts to identify duplication.  Show specifically why any similar information already available cannot be used or modified for use for the purposes described in item 2 above.


There is no current source for the information that will be collected in this study. The last study of the TAA program was conducted using a sample of TAA participants in the late 1980s (Corson et al. 1993).


The impact study will utilize administrative records data from a wide range of sources, as well as survey data. The evaluation will use administrative data on: 1) TAA-Certified Workers (lists of workers laid off during the TAA certification period, provided by certified firms to state agencies) to obtain the certified-worker sample; 2) UI and TRA Claims data, to define the TRA-beneficiary sample, to define the frame from which comparison groups will be selected, to provide the data for matching potential comparison group to TAA members, to provide information on UI and TRA benefit receipt during the follow-up period, and to obtain contact information for the surveys; 3) UI wage records, to measure earnings during the follow-up period for all sample members; 4) TAPR and WIASRD records, to describe service receipt and the training experiences of sample members; and 5) TAA petition data, to develop the frame for selecting states for the evaluation, and to describe the types of firms and workers who apply for and become certified for TAA.


Administrative records data themselves are not sufficient for conducting the study, and therefore the study will rely also on survey data collected on a random subset of sample members. The survey data will provide more detail on TAA program experiences, training and reemployment experiences from other sources, key outcome measures, and baseline characteristics needed for matching and defining key population subgroups. Specifically, the survey data will provide information on the extent to which workers are notified about their TAA eligibility, reasons eligible workers accept or do not accept the TAA offer, and participants’ satisfaction with the program. The survey data will also capture services and benefits received by sample members outside the agencies for which administrative data are available, and will provide service receipt data that are collected consistently across states for both research groups. Moreover, the survey data will provide detailed information on the characteristics of jobs found by sample members (such as hourly wages, available fringe benefits, and occupations) and on earnings that are not captured in the UI wage records. The survey data will also provide other key outcome measures, such as overall health status, health insurance coverage, and the receipt of public assistance. Finally, the baseline survey data will collect information on workers’ demographic characteristics and pre-layoff employment-related experiences. We will use this information to re-match comparison to TAA group members to improve the quality of the matches, to define key subgroups, and to construct detailed control variables to adjust for remaining observable differences between TAA and comparison group members in the estimation of program impacts.


No data sources exist that can be used to support a comprehensive and independent analysis of TAA operations, which is needed both to interpret the study’s quantitative findings and to feed into them. Program regulations provide an outline of how TAA is intended to operate. Furthermore, DOL assesses TAA program performance using quarterly TAPR data provided by each state. However, none of these sources is adequate to support an independent assessment of the program’s current operations, the implementation of the provisions of the 2002 Trade Act, and the degree to which the TAA program is integrated within the local One-Stop Career Center system. Consequently, an up-to-date process study is necessary to obtain this information.


5.  If the collection of information impacts small businesses or other small entities (Item 5 of OMB Form 83-I), describe any methods used to minimize burden.


No small businesses or other small entities will be interviewed for this survey.


6.  Describe the consequences to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles in reducing burden.


If the information collection is not conducted, Federal program or policy activities will not be informed by high quality information upon which to base critical decisions regarding reauthorization of the TAA program and determining what changes are necessary to enhance the effectiveness of the program.


Most of the information will be collected only once, with one major exception: three rounds of survey data will be collected, at baseline, 15 months later, and 30 months later. The three rounds are designed to measure both short- and long-term program impacts, to ensure high response rates, and to minimize recall error. The quality of the information in the evaluation would be lower with fewer rounds of data collection for the survey.


The rationale for the proposed survey is as follows: A baseline interview will be conducted with the certified-worker TAA sample and its comparison sample soon after these workers are sampled. Because of the time it will take to obtain the administrative data needed for sample selection, the baseline interview will take place after sample members have been laid off from their jobs (and after some TAA sample members have started receiving TAA services and benefits). Therefore, the baseline interview will cover the period prior to job layoff, as well as the period between job layoff and the interview date. The baseline interview will collect data on: 1) workers’ demographic characteristics and pre-layoff employment-related experiences (that will be used to re-match comparison to TAA group members, to define key worker subgroups, and to construct detailed control variables for the regression models); 2) worker experiences with the TAA program and the receipt of specific types of reemployment and training services; and 3) key employment-related outcome measures covering the post-layoff period.


The 15- and 30-month follow-up interviews will collect information on key outcome measures pertaining to the period since the previous interview date. The study will conduct both 15- and 30-month follow-up interviews rather than 30-month interviews only, because past experience suggests that interview response rates will be significantly higher if the interviews are spaced 15 rather than 30 months apart. In addition, recall error by workers about their employment and training experiences will be lessened under this design. Conducting the two follow-up interviews at 15 and 30 months will thus cost-effectively provide timely data on program experiences (close to when they occur) and on short- and longer-term employment outcomes.

7. Explain any special circumstances that would cause an information collection to be conducted in a manner:


  • requiring respondents to report information to the agency more often than quarterly;


  • requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;


  • requiring respondents to submit more than an original and two copies of any document;


  • requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records, for more than three years;


  •  in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;


  • requiring the use of statistical data classification that has not been reviewed and approved by OMB;


  • that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or


  • requiring respondents to submit proprietary trade secrets, or other confidential information, unless the agency can demonstrate that it has instituted procedures to protect the information’s confidentiality to the extent permitted by law.


None of the special circumstances are applicable to this data collection. In all respects, the data will be collected in a manner consistent with federal guidelines. There are no plans to require respondents to report information more than quarterly, to prepare a written response to a collection of information within 30 days of receiving it, to submit more than one original and two copies of any document, to retain records, or to submit proprietary trade secrets. The statistical survey will produce valid and reliable results that can be generalized to the universe for the study, and it will include only statistical data classifications that OMB has reviewed and approved. It will include a pledge of confidentiality that is supported by authority established in statute or regulation and by disclosure and data security policies that are consistent with the pledge. It will not unnecessarily impede sharing of data with other agencies for compatible confidential use.


8. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency’s notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and describe actions taken by the agency in response to these comments.  Specifically address comments received on cost and hour burden.


Describe efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.


Consultation with representatives of those from whom information is to be obtained or those who must compile records should occur at least once every 3 years even if the collection of information activity is the same as in prior periods.  There may be circumstances that may preclude consultation in a specific situation.  These circumstances should be explained.

a. Federal Register Notice and Comments


The public was given an opportunity to review and comment (Federal Register Notice Volume 71, No. 53, dated March 20, 2006, pages 14012 through 14014, with comments due May 19, 2006). Comments were received from one reviewer. The comments and responses are as follows:


The commenter recommended that for the purposes of examining HCTC, both the experimental and control groups include UI recipients who have not exhausted their benefits. Response: The research does include a comparison group of UI claimants who have not exhausted their benefits, who will be matched to the TAA-certified workers’ treatment group. The study also includes a second treatment group of TRA recipients, who will be matched to a second comparison group of only UI exhaustees.


The commenter recommended adding questions to the baseline and follow-up surveys to investigate whether, within a defined period of time, the interviewee delayed a doctor visit or filling a prescription because of cost; whether the interviewee had any doctor visits or prescription drugs; whether the interviewee has a regular source of health care; and the interviewee’s level of satisfaction with his or her access to health care, among other possible questions. The commenter recommended that the National Health Interview Survey and surveys conducted by The Commonwealth Fund be examined for sample questions along these general lines. Response: The baseline survey already asks a series of 17 questions regarding health status, health insurance, and health care costs. The additional questions proposed by the commenter would add more time to an already lengthy questionnaire and focus in detail on health-related issues that are not the main focus of the evaluation.


The commenter noted that there is no reliable estimate of how many eligible individuals do not receive HCTC and recommended that the evaluation include specific questions, asked of individuals who are potentially eligible but who did not participate in HCTC, to ascertain the proportion who, in fact, were ineligible. Response: From a practical standpoint, it would not be feasible in the survey to determine with certainty which participants are eligible or ineligible for HCTC. However, receipt of TRA is likely to be a close approximation for those who are HCTC-eligible. Detailed questions in the baseline survey already cover the type of health insurance available to the sample and comparison groups, as well as the reasons that the TRA group did not take advantage of the HCTC. Responses to these questions will likely provide information for developing the estimate the commenter seeks of how many of those eligible do not receive HCTC. The responses to these questions will also help explain the reasons for the low take-up rate for the HCTC, of which perceived ineligibility for the credit may be one.

b. Consultations Outside the Agency

Consultations on the research design, sample design, data sources and needs, and study reports have occurred during the study’s design phase and will continue to take place throughout the study. The purpose of such consultations is to ensure the technical soundness of the study and the relevance of its findings, and to verify the importance, relevance, and accessibility of the information sought in the study. The contractor, Social Policy Research Associates (SPR), and its subcontractor Mathematica Policy Research (MPR) have provided substantial input to DOL for the evaluation. Table 2 displays the senior technical staff from these organizations who were consulted in developing the design, the data collection plan, and the questionnaire.


TABLE 2
Contractor Technical Staff

Name, Affiliation, Telephone Number
Dr. Ronald D’Amico, Social Policy Research Associates, (510) 763-1499
Dr. Peter Schochet, Mathematica Policy Research, (609) 279-6887
Patricia Nemeth, Mathematica Policy Research, (609) 275-2294
Dr. Frank Potter, Mathematica Policy Research, (609) 936-2799
Jeffrey Salzman, Social Policy Research Associates, (510) 763-1499
Richard West, Social Policy Research Associates, (510) 763-1499
Dr. Paul Decker, Mathematica Policy Research, (609) 275-2290

9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.


As noted in the original Supporting Statement, several strategies will be used to ensure high response rates to the TAA and comparison group interviews. These include sending an advance letter explaining the study, using experienced and well-trained interviewers, and using call scheduling to allow respondents to select the most convenient time for their interview.


As we noted, we also plan to encourage response by offering an incentive to sample members for completing the telephone interviews. Our base plan is to offer a $25 post-completion payment (we will mail a check after the interview). There is, however, some ambiguity in the literature on the relative effectiveness of pre- versus post-payment incentive strategies for telephone surveys (Singer et al. 1999). Thus, as discussed in detail below, we propose to conduct an experiment in conjunction with the baseline interview to further investigate the effects of pre- versus post-payment incentives using a large national sample of UI claimants.


For the experiment, the sample of about 10,000 treatment and comparison group members will be randomly assigned to the following three payment groups using stratified random sampling methods: (1) a group that receives a $25 post-payment sent by check to the sample member upon completion of the survey (60 percent of the sample); (2) a group that receives a $2 cash pre-payment and a $25 post-payment upon completion of the survey (20 percent of the sample); and (3) a group that receives a $5 cash pre-payment and a $20 post-payment upon completion of the survey (20 percent of the sample). We estimate that these payment schemes will be cost-neutral. Study results will help inform the payment strategy for the follow-up interviews.


The strategy of providing compensation for participation in the study draws on an extensive literature documenting its importance in achieving high levels of cooperation with surveys. Research has shown, for example, that even modest compensation can increase the response rates to surveys and lower the cost of data collection without compromising the quality of the data (Singer 2002; Singer et al. 1999a and 1999b).


The generous incentives will help obtain a high cooperation rate and avoid the cost of using field interviewers to go to the sample members’ homes to attempt interviews. Offering generous incentive amounts can minimize the overall costs of a survey by reducing the length of the field period and the number of contact attempts needed to achieve the targeted response rate (Markesich and Kovac 2003).


Elaboration of Strategy

Incentive payments are frequently used by survey researchers to attain high response rates and to reduce survey nonresponse bias. As discussed, our main payment approach will be to offer a $25 incentive (post-payment) to sample members for completing the telephone interviews. There is, however, some ambiguity in the literature on the relative effectiveness of pre- versus post-payment incentive strategies for telephone surveys. In a meta-analysis, Singer et al. (1999) found no statistically significant differences in telephone survey response rates between the two types of incentives. However, the reviewed studies show that, in general, pre-payments tend to yield slightly higher response rates, especially if the two types of incentives are compared within the same study. On the other hand, the Teacher Induction Study for the U.S. Department of Education (Nemeth et al. 2006) recently found the opposite result: post-payments produced higher response rates than pre-payments of a similar size.

The TAA evaluation offers an opportunity to conduct an experiment to further investigate the effects of pre- versus post-payment incentives using a large national sample of UI claimants. This experiment would be conducted in conjunction with the baseline interview and would address two key research questions:


  • Relative to a post-payment only strategy, do pre-payments induce people to respond to telephone surveys by instilling a sense of reciprocal obligations?  

  • Among survey respondents, does a pre-payment strategy affect data quality and reduce data item nonresponse?


a. Defining the Payment Mechanisms

We used several interrelated criteria for determining the payment options for the experiment. First, the payment options should not cost more overall than our benchmark $25 post-payment only plan. Second, pre-payments should not be too large, because the contact information in the UI claims data may not be up-to-date at the time of baseline interviewing, and pre-payment incentives should only work if most sample members in the pre-payment groups actually receive the payment.1 Thus, our goal is to minimize the amount of pre-payments that do not reach sample members. Finally, due to the uncertainty of the relative effectiveness of various payment strategies for our study population, we required that total payments to survey respondents be somewhat similar across the payment options. This criterion is conservative in that it guards against potentially low response rates for some payment groups. Consequently, we ruled out design options that specify very small pre-payments but no post-payments (as was done in several studies in the 1980s and 1990s).


Based on these criteria, we propose the following three incentive payment schemes:


  • Group 1 (60 percent of the sample): A $25 post-payment sent by check to the sample member upon completion of the survey.

  • Group 2 (20 percent of the sample): A $2 cash pre-payment and a $25 post-payment upon completion of the survey. The small upfront payment is intended to get the sample member’s attention and to encourage a social bond to increase survey completion rates.2

  • Group 3 (20 percent of the sample): A $5 cash pre-payment and a $20 post-payment upon completion of the survey. This design will be used to assess whether a relatively large upfront payment will increase response rates relative to a smaller pre-payment for Group 2.

We anticipate that survey costs per sample member will be similar for these three groups. Respondent payments are likely to be larger for Groups 2 and 3 than Group 1 because of the pre-payments themselves and because the pre-payments might increase survey response rates. However, other survey costs for Groups 2 and 3 could be lower, because the initial $2 or $5 could motivate more people to call in for interviews, which could reduce the level of effort needed to complete interviews.
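The cost comparison can be made concrete with simple arithmetic. Assuming, purely for illustration, an 80 percent completion rate in every group, the expected incentive outlay per sample member under each scheme is:

completion_rate = 0.80   # hypothetical; actual rates may differ by group
schemes = {
    "Group 1 ($25 post only)":      0 + completion_rate * 25,
    "Group 2 ($2 pre + $25 post)":  2 + completion_rate * 25,
    "Group 3 ($5 pre + $20 post)":  5 + completion_rate * 20,
}
for name, cost in schemes.items():
    print(f"{name}: ${cost:.2f} per sample member")
# Group 1: $20.00; Group 2: $22.00; Group 3: $21.00. The pre-payment
# groups cost slightly more in incentives, which could be offset by
# fewer contact attempts per completed interview.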


b. Sampling and Statistical Precision


The certified-worker sample and their comparison group will be used for the experiment. This sample will contain about 10,000 workers. We will randomly assign each sample member into one of the three incentive groups using stratified random sampling procedures, separately for the TAA and comparison groups. We will stratify the sample by state, survey release date, gender, race/ethnicity, age, and broad occupational category to ensure that the three payment groups will be balanced across key dimensions that are likely to be associated with response rates. This stratification strategy is likely to increase the precision of the impact estimates.
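A minimal sketch of the 60/20/20 stratified assignment described above; the stratum labels and case IDs are hypothetical, and the study's actual procedure (run separately for the TAA and comparison groups) may differ in detail.

import random

def assign_incentive_groups(case_ids, strata, seed=2024):
    """Randomly assign cases to the three payment groups (60/20/20)
    within each stratum."""
    rng = random.Random(seed)
    by_stratum = {}
    for cid, stratum in zip(case_ids, strata):
        by_stratum.setdefault(stratum, []).append(cid)
    assignment = {}
    for ids in by_stratum.values():
        rng.shuffle(ids)
        cut1, cut2 = int(0.6 * len(ids)), int(0.8 * len(ids))
        for cid in ids[:cut1]:
            assignment[cid] = "post_only_25"
        for cid in ids[cut1:cut2]:
            assignment[cid] = "pre_2_post_25"
        for cid in ids[cut2:]:
            assignment[cid] = "pre_5_post_20"
    return assignment

ids = [f"case{i:05d}" for i in range(10_000)]
strata = [f"stratum{i % 8}" for i in range(10_000)]   # e.g., state-by-gender cells
groups = assign_incentive_groups(ids, strata)
print(groups["case00000"])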


The design will have sufficient power to detect about a 3 percentage point difference in response rates across the three groups. By allocating 20 percent of the sample of approximately 10,000 workers to each of the two pre-payment groups and 60 percent to the post-payment group, we will be able to detect a difference of at least 2.9 percentage points between either pre-payment group and the post-payment group. We will also be able to detect a difference of at least 3.5 percentage points when comparing the two pre-payment groups to each other.3,4
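The stated detectable differences can be approximated with a standard two-proportion power formula. The sketch below reproduces figures close to 2.9 and 3.5 percentage points under assumed parameters (a two-sided 5 percent test, 80 percent power, and a base response rate near 80 percent); the study's actual assumptions appear in its footnotes and may differ.

from scipy.stats import norm

def min_detectable_diff(p, n1, n2, alpha=0.05, power=0.80):
    """Minimum detectable difference between two proportions."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5

n_total = 10_000
n_post, n_pre = int(0.60 * n_total), int(0.20 * n_total)
print(f"pre- vs. post-payment group: {min_detectable_diff(0.80, n_pre, n_post):.3f}")  # about 0.029
print(f"pre- vs. pre-payment group:  {min_detectable_diff(0.80, n_pre, n_pre):.3f}")   # about 0.035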


c. Analytic Methods


The purpose of the experiment is to examine the extent to which token pre-payments induce adults to respond to surveys by instilling a sense of reciprocal obligations. Accordingly, in the analysis, we will examine the differential effects of the three payment options on survey response rates, survey costs, data item nonresponse, and key data item values (such as reported employment and earnings levels and participation rates in education and training programs).


We will estimate impacts using standard experimental procedures. The analysis will focus on comparing simple differences in the distribution of outcomes between the three payment groups by conducting t-tests and chi-squared tests. We will also compute effects for key population subgroups to examine whether the findings for the full sample apply broadly across respondent groups. Finally, we will estimate regression-adjusted effects to improve the statistical precision of the estimates and to adjust for chance differences in observable characteristics across the payment groups that remain despite random selection.
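For concreteness, the sketch below runs the two kinds of tests named above on invented data: a chi-squared test of completion rates across the three payment groups and a t-test on a reported outcome among respondents. All counts and values are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical completed / not-completed counts by payment group
counts = np.array([[4_800, 1_200],   # Group 1: $25 post only
                   [1_640,   360],   # Group 2: $2 pre + $25 post
                   [1_620,   380]])  # Group 3: $5 pre + $20 post
chi2, p_value, dof, _ = stats.chi2_contingency(counts)
print(f"chi-squared test of response rates: {chi2:.2f} (p = {p_value:.3f})")

# Hypothetical reported earnings among respondents in two of the groups
rng = np.random.default_rng(3)
g1 = rng.normal(25_000, 9_000, 400)
g2 = rng.normal(25_500, 9_000, 400)
print(stats.ttest_ind(g1, g2, equal_var=False))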


Results of the analysis will be used to inform the design for the follow-up surveys. If we find that one of the pre-payment options increases response rates in a cost-effective way, we could adopt that incentive payment scheme for the 15- and 30-month follow-up interviews. We could also consider designing another experiment in conjunction with the follow-up interviews to compare the effectiveness of variations of this scheme.


d. Implementation of Payment Procedures

For the baseline survey, an advance letter will be sent to sample members to inform them that they will soon be contacted by telephone to complete a survey questionnaire. An automated call scheduler will then release cases in batches, according to a software algorithm, to telephone operators who will administer the questionnaire. If the telephone number on file for a sample member is found to be incorrect, the case will go to locating/searching.

To carry out the incentive experiment, we will have three versions of the advance letter, each explaining the cash enclosed with the mailing (if applicable) and the post-payment incentive we will mail upon completion of the interview. The survey will also have three different versions of the introduction, which will remind the sample member of the pre-payment incentive (if applicable), as well as three closing statements, in which we remind the respondent that a check will be mailed out. Finally, we will produce different versions of the thank-you letter included with the post-payment incentive check that will specify the amount of the enclosed check.

Aside from references to the incentives, however, materials and scripts will be identical. Each incentive group will be subject to the same telephone contact procedures. The timing of when a sample member gets called, how often the number is called, and decisions to send cases to locating/searching will not depend on incentive group status.


Special reports will be designed to capture completion rates and costs for the three incentive groups. This will allow us to track the progress of the sample through the interviewing, locating, and refusal conversion steps, as well as to track and resolve issues related to the cashing of post-payment incentive checks. Each sample member’s incentive group will be indicated by a “flag” or marker that we will insert into the person’s case record once the groups have been selected and randomized.


10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.


SPR and MPR will follow procedures consistent with provisions of the Privacy Act for assuring and maintaining confidentiality. Confidentiality agreements will be established with states in the collection of administrative records. Respondents to the baseline and follow-up interviews will receive information about confidentiality protection in an advance letter describing the survey and again at the outset of the interview as part of the interviewer’s introductory comments. Respondents will be informed that all information they provide will be treated confidentially. Interviewers will be trained in confidentiality procedures and will be prepared to describe these procedures in full detail, if needed, or to answer any related questions raised by respondents.


All data items that identify respondents will be kept by SPR and MPR for use in assembling records data and in conducting the interview. Any data received by the U.S. Department of Labor, Employment and Training Administration will not contain personal identifiers, which will thus preclude individual identification.


In addition, the following safeguards are routinely used by research team members to assure confidentiality in the collection of survey data:

  • Access to sample selection data with personal identifying information is limited to those who have direct responsibility for providing the sample. These data are destroyed at the conclusion of the research.

  • Identifying information is maintained in a separate file from interview data. The files are linked only with a sample identification number.

  • Access to link-files containing sample identification numbers connecting the research data and the respondents’ identification is limited to a few persons who have a need to know this information.

  • Access to any hard-copy documents is strictly limited. Physical precautions include use of locked files and cabinets, shredders for discarded materials, and interview control procedures.

The research team also will use standard methods to guard against inadvertent disclosure.5 These include methods for tabular results of frequency data and of magnitude data, as well as methods for preparing public-use files. With respect to tabular results, our intent is to report only those results with adequate statistical precision. In general, this is a more limiting condition than is strictly necessary to ensure adequate safeguards against inadvertent disclosure. Thus, the guidelines below should be viewed as minimal conditions; in practice, much more stringent conditions will be applied in most cases. The guidelines are as follows:

Tabular Results of Frequency Data. For tabular results of frequency data, a risk of inadvertent disclosure will be avoided by adherence to these two conditions:

  • No cell shall be reported if the number of respondents is less than 10, and

  • No single cell shall solely account for a row or column total.

Should these conditions be violated in initial tabulations, rows or columns will be combined, as necessary, until the conditions are satisfied.

Tabular Results of Magnitude Data. For tabular results of magnitude data, we will require each cell value to be based on 10 or more respondents and will apply the (n,k) rule, using a value of 2 for n and of .6 for k. Thus, no cell value shall be reported if any two respondents contribute at least 60% to the cell’s total value. Should these conditions be violated in initial tabulations, rows or columns will be combined, as necessary, until the conditions are met.
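
For illustration only, the following minimal sketch (with invented cell contributions) shows how the 10-respondent minimum and the (n,k) rule with n = 2 and k = .6 might be checked before a cell is reported; it is not part of the study's actual procedures.

    # For illustration only: checks a magnitude-data cell against the
    # 10-respondent minimum and the (n, k) dominance rule with n = 2, k = 0.6.
    # The cell contributions below are invented.
    def cell_is_safe(contributions, n=2, k=0.6, min_respondents=10):
        """Return True if the cell may be reported without suppression."""
        if len(contributions) < min_respondents:
            return False
        top_n = sorted(contributions, reverse=True)[:n]
        # Suppress if any n respondents contribute at least k of the total.
        return sum(top_n) < k * sum(contributions)

    # Twelve respondents, but two dominate the total, so the cell is unsafe;
    # rows or columns would be combined and the check repeated.
    cell = [500, 400, 20, 15, 12, 10, 9, 8, 7, 6, 5, 4]
    print(cell_is_safe(cell))   # False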

Reporting Microdata. One of this project's deliverables is a public-use file of microdata. Consistent with customary guidelines, the following safeguards will be implemented to guard against inadvertent disclosure:

  • No personal identifiers will be appended to any record;

  • Units of geography will not be identified;6

  • The employer from which the individual was dislocated will not be revealed, nor will the TAA petition number or the industry of dislocation;

  • Key information drawn from administrative data that could be used to identify an individual (including enrollment date, date of training, and date of exit) will be rounded (e.g., dates will be reported in mmyyyy format, rather than mmddyyyy format) and subjected to random perturbations (see the sketch following this list); and

  • Variables will be bottom-coded or top-coded if extreme values are present.
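
The sketch below illustrates two of the safeguards just listed, coarsening dates to mmyyyy and top-coding extreme values; the ceiling and example values are assumptions for illustration only.

    # Illustrative sketch of two safeguards: coarsening dates to mmyyyy and
    # top-coding extreme values. The ceiling and example values are invented.
    from datetime import date

    def coarsen_date(d: date) -> str:
        """Report month and year only (mmyyyy), dropping the day."""
        return d.strftime("%m%Y")

    def top_code(value: float, ceiling: float = 150000.0) -> float:
        """Replace extreme values with a ceiling before release."""
        return min(value, ceiling)

    print(coarsen_date(date(2006, 7, 14)))   # 072006
    print(top_code(425000.0))                # 150000.0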

11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private.  This justification should include the reasons why the agency considers these questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.


The survey for the TAA evaluation contains a minimal set of items that may be considered sensitive in nature. These questions cover the sample member's income from jobs during the pre- and post-layoff periods, income of spouses or partners, income from pensions, public assistance receipt, and total household income. Questions about income and public assistance receipt are necessary to construct the primary outcome measures for the study. TAA provides training and other reemployment services to help participants prepare for and obtain suitable employment. Thus, the primary purpose of the program is to improve the long-term earnings and income of program participants and to reduce their reliance on public assistance. Consequently, it is necessary that the study obtain data to measure the economic well-being of study participants.


As described in item 10 above, all respondents will be assured of confidentiality at the outset of the interview. All survey responses will be held in strict confidence. In collecting all information, SPR and MPR will comply with the requirements of the Privacy Act of 1974. All questions in the current survey, including those deemed potentially sensitive, have been pre-tested and used extensively in prior surveys with no evidence of harm.


12. Provide estimates of the hour burden of the collection of information. The statement should: Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated.  Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates.  Consultation with a sample (fewer than 10) of potential respondents is desirable.  If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance.  Generally, estimates should not include burden hours for customary and usual business practices.


The total hour burden for the information collected for the TAA study is 11,867 hours, as shown in Table 4. The table displays the respondent time burden for the collection of administrative data from the 25 states included in the study and for the three rounds of telephone interviews, as well as for the internet/phone survey of all local areas. It additionally includes the burden estimate for all five rounds of site visits conducted as part of this study: the Initial Implementation Study visits, the visits to state and local offices in the impact study sample, the best practices visits, and a final round of visits as the study concludes.

TABLE 4

RESPONDENT HOURS BURDEN FOR THE TAA EVALUATION


Activity                                      Total         Frequency   Average Minutes   Burden
                                              Respondents               per Response      Hours

Impact Analysis
  State Administrative Data Requests          25            Thrice      480               600
  Baseline Survey                             7,965         One time    35                4,646
  15-Month Follow-up Survey                   5,310         One time    30                2,655
  30-Month Follow-up Survey                   3,540         One time    30                1,770

Process Analysis
  Administration of Process Visit Protocols
    1: Initial Implementation                 144           One time    90                216
    2: Impact Sample State Visits             150           Twice       100               500
    3: Impact Sample Local Visits             280           One time    85                397
    4: Best Practices Visits                  180           One time    100               300
    5: Final Sample                           325           One time    100               542
  Survey of All Local Areas
    State phone screener                      50            One time    10                8
    Local area survey                         700           One time    20                233

Total                                                                                     11,867



The hour burden was calculated based on estimates that it will take: 1) each state 24 hours of staff time in total to process our data requests, 2) each respondent 35 minutes to complete the baseline interview (based on actual pretests), 3) each respondent 30 minutes to complete each of the two follow-up interviews (based on actual pretests), 4) 1,955 hours to administer the process visit protocols to state- and local-area staff, and 5) 20 minutes for the TAA Coordinator in each local area to complete the survey, plus 10 minutes for each state telephone screener (based on actual pretests).7 Average time per response is 38.14 minutes, obtained by multiplying total burden hours by 60 minutes and dividing by the number of responses ([11,867 x 60]/18,669).


The total burden cost of collecting the baseline and follow-up survey information is $145,136. This represents the 35 minutes to complete the baseline survey multiplied by the number of completers (7,965), plus the 30 minutes to complete each of the follow-up surveys multiplied by the 8,850 follow-up completers, valued at an estimated average hourly wage of $16.8 This burden cost would be offset by the respondent payment for each completed interview (about $25 across the three payment groups). The cost to states of filling our administrative data requests is included in item 14 of the original packet, which includes funds to reimburse states as necessary. The total burden cost of collecting process analysis data is also included in item 14.
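
As an arithmetic check, the short sketch below reproduces the 38.14-minute average and the $145,136 burden cost from the figures stated in the two preceding paragraphs; it introduces no new data.

    # Reproduces the burden arithmetic stated above; no new data are assumed.
    total_burden_hours = 11_867
    total_responses = 18_669
    avg_minutes = total_burden_hours * 60 / total_responses
    print(f"{avg_minutes:.2f} minutes per response")   # 38.14

    baseline_hours = 7_965 * 35 / 60    # baseline completers x 35 minutes
    followup_hours = 8_850 * 30 / 60    # follow-up completers x 30 minutes
    burden_cost = int(baseline_hours + followup_hours) * 16
    print(f"${burden_cost:,}")          # $145,136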


13. Provide an estimate for the total annual cost burden to respondents or recordkeepers resulting from the collection of information.  (Do not include the cost of any hour burden shown in Items 12 and 14).


The cost estimate should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life) and (b) a total operation and maintenance and purchase of services component.  The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information.  Include descriptions of methods used to estimate major cost factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred.  Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.


Respondents will incur no start-up or ongoing financial costs. There are no recordkeepers.


If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance.  The cost of purchasing or contracting out information collections services should be a part of this cost burden estimate.  In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.


The proposed information collection plan will not require the respondents to purchase equipment or services or to establish new data retrieval mechanisms. These costs are not expected to vary.


Generally, estimates should not include purchases of equipment or services, or portions thereof, made: (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.


We do not expect responding agencies to purchase equipment or services in order to respond to this information collection plan effort. 


14. Provide estimates of annualized costs to the Federal government.  Also, provide a description of the method used to estimate cost, which should include quantification of hours, operational expenses (such as equipment, overhead, printing, and support staff), and any other expense that would not have been incurred without this collection of information.  Agencies may also aggregate cost estimates from Items 12, 13, and 14 in a single table.


The total cost to the federal government of carrying out this study is $10,453,957, to be expended over the 72 months of the evaluation. Of this amount, approximately $3.4 million will be used for developing a research design, consulting with project advisors, carrying out an initial implementation study, carrying out analysis, preparing reports, developing a public use file, and carrying out project management. Data collection for the evaluation, as described in this supporting statement, will cost approximately $7.1 million. Data collection costs are as follows:

A) Total Cost of Collecting Petition Data: $620,808. This represents the cost of collecting lists of certified workers from states, from which the analysis sample of TAA-eligibles will be drawn, and is made up largely of loaded labor costs, plus minor communication expenses.

B) Total Cost of Collecting Other Administrative Data: $1,022,910. This figure includes the costs of collecting Unemployment Insurance wage records, claimant data, and program participant data. This budget estimate includes: 1) loaded labor costs, including the costs of requesting the data files from states and preparing them for analysis and 2) payments to states to reimburse them for the cost of preparing data files, estimated at $18,000 for each of 25 states plus indirect expenses, or approximately $510,000 in total.

C) Total Survey Administration Costs: $3,727,079. This figure includes the costs of selecting the treatment and comparison group samples ($44,000, primarily in labor costs) and administering the baseline (approximately $1,724,000), first follow-up ($1,125,000), and the second follow-up ($834,000) surveys. Costs for conducting these surveys include the loaded labor cost for senior research staff, programmers, survey supervisors, telephone interviewers, and data clerks and locators, and the M&S costs, including telephone costs (approximately $93,600), facilities costs (approximately $257,000), other costs (approximately $467,000, which are primarily for respondent payments), as well as indirect expenses.

D) Process-Study Data Collection: $1,730,667. This figure includes the contractor’s loaded labor costs and travel costs associated with the site visits, and costs associated with the local area survey.


15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-I.


This is a new, one-time data collection effort counting as 11,867 hours toward ETA's Information Collection Budget (ICB); as a new collection, it does not represent a change to an existing respondent burden.


16.     For collections of information whose results will be published, outline plans for tabulation and publication.  Address any complex analytical techniques that will be used.  Provide the time schedule for the entire project, including beginning and end dates of the collection of information, completion of report, publication dates, and other actions.

A. Tabulations. A wealth of information will be collected and tabulated in this study around two broad areas of inquiry: 1) What are the program's net impacts on employment and earnings, both overall and for specific subgroups? and 2) How does the TAA program operate (i.e., who receives which services, at what quality, and under what administrative arrangements)? The specific tabulations will reflect the multiple types of analyses discussed below.


B. Analytic Approaches. The two research questions cited above are inextricably connected, in that the proper interpretation of outcomes can derive only from a solid understanding of the TAA program’s administration and services. At the same time, each research question has its own logic, and each gives rise to its own analysis methods. Accordingly, the evaluation will entail impact analyses, a benefit-cost analysis, and a process study, as described below.

Impact Analyses: The impact analysis for the TAA evaluation will address the effectiveness of TAA services and benefits on key participant outcomes from several perspectives. The global analysis will examine the overall impacts of the TAA program for the full sample, while the targeted analysis will address the important policy questions of what works and for whom.


Global Analysis. The impact analysis will first estimate the extent to which the TAA program changes the average outcomes of program participants relative to what these outcomes would have been in the absence of the program. Theoretically, because the procedure used to select the comparison groups will have yielded well-matched comparison groups, this impact can be estimated as a simple difference in outcomes between groups. However, regression procedures will be used to estimate these impacts, for two reasons. First, these procedures produce more precise impact estimates, to the extent that the covariates included in the models are correlated with the outcome measures. Second, regression procedures can adjust for any differences in the observable characteristics of TAA and comparison group members due to interview nonresponse and to residual differences after matching.


The study will estimate variants of the following regression model:

y = α + β·TAA + X′γ + ε,     (1)

where y is an outcome variable at a specific time point, TAA is an indicator variable equal to 1 for TAA group members and 0 for comparison group members, X is the vector of baseline explanatory variables used in the matching process, ε is a mean-zero disturbance term, and α, β, and γ are parameters to be estimated. The estimate of β represents the regression-adjusted impact estimate of TAA on the outcome variable, and the associated t-statistic can be used to gauge the statistical significance of the impact estimate.9 The estimates of β across the many outcome measures that will be examined for the study will form the basis for assessing the effects of TAA program services.
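
To fix ideas, the following is a minimal sketch of estimating equation (1) by ordinary least squares on simulated data. The variable names and the use of the statsmodels package are illustrative assumptions, not the contractor's production code; the study's actual estimation will also incorporate the weights described in the Technical Appendix.

    # Minimal sketch of estimating equation (1) by OLS on simulated data.
    # Variable names and the use of statsmodels are illustrative assumptions.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    taa = rng.integers(0, 2, n)             # 1 = TAA group, 0 = comparison
    age = rng.normal(45, 10, n)             # example matching covariate (X)
    prior_earnings = rng.normal(30000, 8000, n)
    # Simulated outcome with a true impact (beta) of 2000.
    y = 5000 + 2000 * taa + 0.5 * prior_earnings + rng.normal(0, 5000, n)

    X = sm.add_constant(np.column_stack([taa, age, prior_earnings]))
    fit = sm.OLS(y, X).fit()
    print(fit.params[1])    # regression-adjusted impact estimate (beta)
    print(fit.tvalues[1])   # t-statistic for statistical significance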


The Technical Appendix describes the mathematical formulas that will be used to obtain the parameter estimates and their associated variances under a design-based inference approach. The Appendix displays formulas for continuous outcome measures (such as earnings and UI benefits received over a given follow-up period), as well as binary outcome measures (such as whether the worker is employed, has been recalled to his or her separating job, and has health insurance). The Appendix also describes specific methods that will be used to construct weights for the analysis, including probability weights, and adjustments for nonresponse and poststratification.10


Finally, under the certified-worker design, the study will obtain samples of both TAA participants and TAA nonparticipants in TAA-certified firms.11 Because different patterns of impacts for these two groups are expected, the study will estimate separate impacts for each one, although the study will also estimate impacts for the pooled sample (using the appropriate weights) to examine TAA effects for the full population of those covered by a certification. In addition, separate models will be estimated using the certified-worker and TRA-beneficiary samples.


Targeted Analysis. The targeted analysis will use a more refined approach than the global analysis to examine the effects of TAA on key outcomes. The targeted analysis will address the important policy questions of what works, and for whom does TAA work. Specifically, it will address the following research questions (see Table 1C):


  • Do impacts differ for workers who receive different services and benefits? What are the impacts for those who receive long-term training? For those waived from training? For those who use the HCTC? For those over 50 who receive Alternative TAA services? For those who receive TRA benefits? For those who receive assessment, counseling, or placement assistance?


  • Do impacts differ for workers with different baseline characteristics? Do impacts differ by age, race/ethnicity, education level, pre-layoff earnings level, industry, region, and the local unemployment rate?


  • Do impacts differ for workers with different petition features? How do impacts vary by the number of affected workers, certification determination processing time, industry, and type of petitions?


  • Do impacts differ among states with different administrative or organizational features or structures? How do impacts vary according to states’ performance levels? According to the ability of the TAA program to deliver adjustment services in a timely manner? According to the extent of integration of services and programs within the One Stop Career Center system?


In the targeted analysis, the study will first examine thoroughly, using interview and program data, the services and benefits that sample members received. Then researchers will gauge the extent to which TAA workers participate in various program components (such as job training and the HCTC). If participation levels in some program components are very low overall or for key worker subgroups to which these services are targeted, then program impacts for these program components are expected to be small. Similarly, understanding the nature and amount of services that the comparison group receives will help us assess whether impacts for specific program components or for specific groups of workers are likely to be large or small. Moreover, process analysis findings will clarify the nature of services and the structure of program operations and how these may affect outcomes and impacts.


Impact results for those who receive different program services and benefits can provide important information on how to improve services and to develop and expand the program. The estimation of these subgroup impacts, however, is complicated by two factors. First, there are likely to be differences in the characteristics of those who receive different services (which could lead to sample selection biases). Consequently, comparing outcomes of TAA group members who receive specific services to the outcomes of those who receive other services (or to the outcomes of the full comparison group) may yield biased estimates. Second, because there may be considerable overlap in the receipt of particular program services, it may be difficult to disentangle the effects of some program components from the effects of others.


The study will use a two-step estimation process to address these complexities. First, during the contextual analyses, the researchers will construct various service-receipt indicator variables to signify the key program services and benefits that TAA group members receive. For example, it is likely that indicators will be constructed for TRA beneficiaries who are waived from the training requirement, those who use the HCTC, those who participate in Alternative TAA, and those who receive both TRA benefits and job training. If appropriate, other indicators will be constructed for combinations of these training services or other services such as assessment, counseling or placement assistance. Importantly, indicator variable values for comparison group members will be the same as the values for their matched TAA group members.


In the second stage, researchers will estimate impacts for those receiving a specific array of TAA services, by comparing the average outcomes of TAA group members within a service-receipt category to the average outcomes of their matched comparison group members. These subgroup impact estimates will be obtained by including in equation (1) explanatory variables formed by the interaction of service-receipt and TAA indicator variables.12 Researchers will include these interaction terms one at a time, but they will also conduct analyses where these interaction terms are included simultaneously to help disentangle the effects of some program components from others. It is expected that these analyses will yield informative results, because the baseline characteristics of TAA group members in specific service receipt cells are expected to be similar to those of their comparison group members.
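
For illustration only, the sketch below shows how a service-receipt indicator might be interacted with the TAA indicator in equation (1); the training flag, simulated data, and use of statsmodels are assumptions, not the study's specification.

    # Illustrative sketch of a subgroup impact via a TAA x service-receipt
    # interaction in equation (1). Data and the training flag are simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    taa = rng.integers(0, 2, n)
    # Service-receipt flag; matched comparison group members carry the same
    # value as their matched TAA group member, per the text above.
    received_training = rng.integers(0, 2, n)
    prior_earnings = rng.normal(30000, 8000, n)
    y = (5000 + 1500 * taa + 1000 * taa * received_training
         + 0.5 * prior_earnings + rng.normal(0, 5000, n))

    X = sm.add_constant(np.column_stack(
        [taa, received_training, taa * received_training, prior_earnings]))
    fit = sm.OLS(y, X).fit()
    print(fit.params[3])   # incremental impact for the training subgroup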


Next, researchers will determine the extent to which TAA benefits workers with different personal characteristics, a question with important policy implications both for the operation of the program and for the development of other programs designed to serve this population. The study will use UI and baseline interview data to construct these worker subgroups. We expect that the subgroups (pertaining to the pre-intervention period) will include age, race and ethnicity, gender, industry (such as steelworkers), education level, marital status, pre-layoff earnings level, likely job recall status, region, and the local unemployment rate (see Table 1C). We will obtain subgroup impact estimates using procedures very similar to those described above for the service-receipt subgroups.


Additionally, the study will examine whether TAA petition features affect TAA impacts. Using petition data, researchers will construct worker subgroups (based on the number of affected workers, certification determination processing time, type of petitioner, and industry), and compute impact estimates in a way similar to the estimation of service-receipt subgroup estimates. Impacts are expected to differ across these groups. For example, workers who exert the effort to petition when their firms fail to do so might value TAA benefits more highly than workers in other firms and, thus, these workers might have higher program participation rates and larger impacts.


Finally, the study will estimate impacts for subgroups defined by key state program features, using information from the process analysis on key features that vary across states and that are likely to contribute to overall program effectiveness (see Table 1C). Researchers will estimate these subgroup impacts by grouping states with a particular program feature, and by comparing the mean outcomes of TAA and comparison group members within those states. The study will also use hierarchical linear models (HLM) to help disentangle specific program features from others. In these HLM models, the 25 state impact estimates (or the larger number of local-area impact estimates) will be regressed on a small number of key program features, so that the effects of a particular program feature can be assessed holding constant the effects of other features.
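
As a simplified stand-in for the second stage of the HLM approach just described, the sketch below regresses simulated state-level impact estimates on a single program feature; the feature name and all data are invented for illustration only.

    # Simplified stand-in for the second stage of the HLM approach: regress
    # the 25 state-level impact estimates on a program feature. The feature
    # and all data are invented for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_states = 25
    timely_services = rng.integers(0, 2, n_states).astype(float)
    state_impacts = 1000 + 800 * timely_services + rng.normal(0, 300, n_states)

    X = sm.add_constant(timely_services)
    fit = sm.OLS(state_impacts, X).fit()
    print(fit.params[1])   # effect of the feature on state impact estimates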


The targeted analyses will generate impact estimates for a large number of outcome measures and for many subgroups. In each analysis, formal statistical tests will be conducted to determine whether TAA-comparison group differences exist for each outcome measure and subgroup. However, an important challenge for the evaluation is to interpret the large number of impact estimates to assess the extent to which TAA makes a difference. Thus, researchers will carefully examine the pattern of results rather than focus on isolated results. For example, the evaluation will examine the magnitude of the significant impact estimates to determine whether the differences are large enough to be policy relevant, and check that the sign and magnitude of the estimated impacts are similar for related outcome variables and subgroups. In addition, researchers will determine whether the sign and magnitude of the impact estimates are robust with respect to alternative sample definitions, model specifications, and estimation techniques.


Benefit-Cost Analysis. A benefit-cost analysis will compare the monetary value of impacts to their costs in order to examine the extent to which the TAA program is cost-effective. The basic approach for measuring the benefits and costs of TAA will be to value key program impacts at market prices, which are readily available in most cases, straightforward to use, and provide a good measure of the value that society places on impacts.


The potential benefits and costs will fall into five categories:


  1. The benefits of increased output resulting from the additional productivity of TAA participants. TAA services are expected to increase the job skills of program participants, which may lead to long-term earnings gains. The additional output produced by program participants will be measured using the increase in their total compensation, which will include earnings and fringe benefits. The calculations will use the earnings impacts estimated using the UI wage records and survey data, and the costs of fringe benefits (such as paid leave, supplemental pay, health insurance, pensions, and savings plans) from published data sources. We will also estimate tax payments (federal income taxes and credits, payroll taxes, federal excise taxes, and state and local taxes) based on reported income and household composition.


  2. The benefits or costs from changes in the receipt of UI benefits. TAA might reduce the receipt of UI benefits if program reemployment services are effective in helping participants find jobs quickly. However, TAA might also increase UI exhaustion rates if recipients continue their training after becoming eligible for TRA services. The analysis will use estimated impacts on UI benefit receipt from the UI claims data, and information on UI administrative costs obtained from DOL.


  3. The benefits from the reduced use of other programs and services. TAA participants are expected to use fewer non-TAA-funded services than comparison group members. Such services include education and training programs and reemployment services not funded by TAA. The costs of these programs will be obtained as part of the process analysis. In addition, because of potential long-term earnings gains, the TAA group is expected to receive fewer public assistance benefits (such as Food Stamps, TANF, and general assistance) than the comparison group.


  4. Unmeasured benefits. TAA may provide other benefits that are difficult to measure, such as improvements in participants’ quality of life that may result from improvements in their employment opportunities, self-esteem, and health. TAA may also provide gains to society from freer trade.


  5. Program costs. Program costs will include: (1) TRA benefits paid to program participants (obtained using UI/TRA data); (2) allowances paid to program participants (such as job search, relocation, transportation, and subsistence allowances); (3) training-related costs; and (4) administrative costs. Researchers will calculate these costs using quarterly cost data that states provide to DOL as well as data that we will obtain as part of the process analysis.


The findings from the benefit-cost analysis will depend on the perspective from which benefits and costs are measured. Most of the benefits of TAA accrue to program participants, while the government pays most of the costs. Hence, the benefits and costs to participants will differ from the benefits and costs to the government and the rest of society. Consequently, benefits and costs will be examined from three different perspectives – those of: (1) society, in order to determine whether the aggregate benefits from the program are greater than the resources used by the program, abstracting from who enjoys the benefits and who bears the costs; (2) participants, in order to address whether TAA is a good investment for the workers themselves; and (3) the rest of society, to examine the extent to which TAA costs are offset by TAA’s benefits to everyone other than program participants (such as increased tax revenue and the reduced use of other programs and services).


Because TAA is designed to improve employment-related outcomes over the long run, the research will examine the appropriateness of extrapolating program benefits beyond the observation period. The extrapolation process, however, will depend on the pattern of the impact findings. For example, if earnings impacts grow near the end of the observation period, then program benefits will be estimated under various assumptions about the decay of future earnings impacts. Furthermore, a current dollar is worth more than a future dollar. Thus, a discount rate will be applied to all benefits (and costs) that accrue after the first year of the study observation period. Finally, the approach to the analysis will be to value program impacts on measurable, market-valued resources in the economy. This excludes many intangible, hard-to-measure benefits, such as improvements in health and in the quality of life. In addition, the analysis will not take into account the gains to society from freer trade that may result from TAA’s role in assisting those who are adversely affected by trade.
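
To illustrate the discounting step, the short sketch below computes the present value of a hypothetical benefit stream at an assumed 3 percent discount rate; neither the stream nor the rate comes from the study.

    # Illustrates the discounting step with an assumed benefit stream and an
    # assumed 3 percent discount rate; neither comes from the study.
    annual_benefits = [0.0, 1200.0, 1100.0, 1000.0]   # year 0 left undiscounted
    discount_rate = 0.03

    present_value = sum(
        b / (1 + discount_rate) ** t for t, b in enumerate(annual_benefits)
    )
    print(f"${present_value:,.2f}")   # present value of the benefit stream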


Process Study. The research questions associated with the process study concern how the TAA program is administered at the state and local levels, what institutional arrangements are used to deliver services (including relationships among TAA and other programs within the One-Stop system), how services are designed and delivered, who accesses services, and what system-level outcomes result. These questions can be addressed through two primary data sources: qualitative information gathered from the case studies and quantitative information available from the surveys and administrative data.

The data collected from the state and local site visits will be analyzed in a two-stage process. The first stage, a within-site analysis, will consist of the preparation of a detailed case study narrative for each state and local implementation site included in the study. During this stage, the wealth of information obtained from discussions, observations, and reviews of written materials will be organized into a coherent story of TAA program operations for the particular site. Case study narratives will be for in-house use by the SPR/MPR researchers, though site profiles can be developed to be shared with the Department of Labor (DOL) or the sites (at DOL’s request).

The internal site-visit write-ups will include the “raw data” that will inform the cross-site analysis, which will, in turn, support the preparation of study briefings, Occasional Papers, and the Final Report. These will emphasize cross-site analysis that highlights common themes, reasons for variation in the way services have been designed, challenges to implementation, and promising approaches. At the cross-site level, descriptive analyses will examine the range of variation at the state and local levels across the case study sites; explanatory analyses will trace the importance of different contextual and implementation factors for service delivery patterns and outcomes; and evaluative analyses will identify the lessons learned from the experiences of state and local implementation and draw implications for policy.


Survey and administrative data will be used to detail aspects of TAA program services and operations. For example, survey and administrative data on TAA participants will yield important insights into the nature of services received and relationships with other programs within the One-Stop system, both overall and for different subgroups of respondents. These services include reemployment services, training services, job search allowances, TRA allowances, participation in Alternative TAA, and use of the health insurance tax credit (including how knowledgeable and well informed participants were about it). Similarly, by merging administrative data for the TAA and WIA programs, we can learn about the extent of co-enrollment and gain a full picture of the nature of services that participants receive across both programs. In addition, we intend to obtain TAA program data for multiple points in time, both before and after enactment of the TAA Reform Act, so that we can examine trends in service receipt and deduce what impact the Reform Act might have had on service receipt. Survey data can also be used to provide information about the TAA program’s take-up rates, another important issue to be examined as part of this study. The study will also be able to address who among eligible workers chose not to participate and why, as well as how eligible workers were notified about the availability of program services and how soon they were notified after the petition was filed. Finally, data from the survey of local TAA coordinators will be linked to other data sets to provide additional insights into funding, types of services, institutional arrangements, and program practices.

C. Publication Plans

Publication plans for the TAA evaluation are as follows:


  • Report on Initial Implementation. This report will present cross-site findings from the Initial Implementation Study. This cross-site analysis will detail the range of variation in practices across states and local areas with respect to the 2002 TAA Trade Act. A draft of this report has already been prepared and delivered to DOL.

  • Occasional Papers. In lieu of an Interim Report, the study will produce 12 Occasional Papers to address specific sets of issues related to the process and impact analyses. Topics will include characteristics of TAA participants and their jobs (overall, trends over time, and in comparison to other dislocated workers); training and reemployment service receipt by TAA and comparison group members; the nature and adequacy of rapid response activities; TAA take-up rates; TAA within the One-Stop system and integration with other programs; influences on TAA of WIA reauthorization; state data collection systems (nature and adequacy); the impact of performance accountability on program design; the role of the health insurance tax credit; and results from the best-practices study. Additional topics will be developed in consultation with DOL on the basis of study findings. One paper will be submitted approximately every three months from mid-2005 through mid-2008.

  • Final Report. The Final Report will present a comprehensive accounting of all findings and results amassed over the duration of the evaluation. It will cover results from the local-area and individual surveys, the multiple rounds of site visits, information on clients and services from administrative data, impact estimates on all key outcome measures, and results from the benefit-cost analysis. A draft report will be submitted in October 2009, and a final version will be submitted in December 2009.

D. Project Schedule


The evaluation began in January 2004 and has a projected end date of December 2009. The timing of key activities is shown in Table 5.


17.  If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.


ETA will display the OMB control number and expiration date for any individual surveys under this clearance.


18. Explain each exception to the certification statement identified in Item 19, Certification for Paperwork Reduction Act Submissions, of OMB Form 83-I.

There are no exceptions taken to item 19 of OMB Form 83-I.

TABLE 5

SCHEDULE FOR THE TAA EVALUATION

Activity                           Time Period

Study Design                       January 2004 – August 2004

Collect Process Data
  First site visit                 March 2005 – September 2005
  Second site visit                October 2005 – March 2006
  Third site visit                 July 2006 – December 2006
  Fourth site visit                January 2007 – June 2007
  Fifth site visit                 January 2008 – June 2008
  Conduct local-area survey        April 2006 – September 2006

Collect Administrative Data
  Participant data                 Annually, 2006 – 2009
  UI/TRA claimant and wage data    Annually, 2006 – 2009

Select Samples                     March 2006 – May 2006

Collect Survey Data
  Baseline                         July 2006 – December 2006
  15-month follow-up               October 2007 – March 2008
  30-month follow-up               January 2009 – June 2009

Analysis and Reporting
  Initial implementation study     August 2004
  Twelve occasional papers         One paper approximately every three months, mid-2005 – mid-2008
  Final report                     March 2009 – December 2009


B. COLLECTION OF INFORMATION INVOLVING STATISTICAL METHODS

1.  Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used.  Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample.  Indicate expected response rates for the collection as a whole.  If the collection had been conducted previously, include the actual response rate achieved during the last collection.


This section describes 1) the potential respondent universes and sampling for the impact analysis and 2) the universe for the process analysis, for which no sampling will be used.


Universe and Sampling for the Impact Analysis


The ideal design for the TAA impact evaluation would be random assignment, where workers eligible for TAA services would be randomly assigned either to a treatment group (who could receive TAA services) or to a control group (who could not). Persons in the treatment group could then be further randomly assigned to various TAA service groups in order to examine the relative effectiveness of particular program services and components. Random assignment ensures that the average characteristics of each research group would be similar, so unbiased estimates of the impacts of TAA participation overall and of specific program services could be obtained by comparing the mean outcomes of members of the treatment and control groups.


A random assignment design is clearly not feasible for the TAA evaluation, because TAA services cannot be denied to eligible workers (that is, under program rules, it would not be possible to construct a control group). Furthermore, it would not be feasible to randomly assign participants to different service groups, because TAA services are voluntary and are tailored to meet the needs of individual clients. Consequently, the evaluation will employ a comparison group design using state-of-the-art propensity scoring procedures to create comparison groups and obtain estimated impacts.


The sample design for the TAA impact evaluation thus must meet several critical analysis objectives. First, it must produce a sample that is representative of the national population of workers who are eligible for and receive TAA services and benefits, i.e., TAA program participants. Second, the sample design must also produce a sample that is representative of the national population of workers certified for TAA who are nonparticipants, in order to estimate program take-up rates and reasons for program participation and nonparticipation. Third, the sample design must generate comparison samples of dislocated workers who are as similar as possible to workers in the TAA participant and nonparticipant samples, except for the offer of TAA services. These comparison samples will be used to estimate likely outcomes of treatment group members in the absence of the TAA program. Finally, the sample design must provide sufficient statistical precision for estimating impacts that are relevant to a host of policy issues important to the proposed audiences for the research.

To meet these analysis objectives, the treatment (TAA) groups for the study will be selected from two sample universes, each of which has several advantages and disadvantages. The first (and primary) sample universe, labeled the certified-worker universe, will consist of all workers nationwide who are laid off from TAA-certified firms during the period covered by certification, and who subsequently receive a first UI payment. The second sample universe, labeled the TRA-beneficiary universe, will consist of all workers who receive TRA payments after they exhaust their regular UI benefits. Each sample design will be used to generate program impacts for TAA, and results from the two samples can be compared to examine the robustness and credibility of study findings under the quasi-experimental design. Hence, the use of the two TAA samples will improve the ability of the evaluation to yield informative conclusions about program impacts.


Universe of Certified Workers. The study will obtain the sample frame for the certified-worker sample from the lists of all potentially TAA-eligible workers that certified firms provide to states. These lists are available (and include the workers’ contact information) because, under the 1988 legislative changes to the TAA program, state agencies are required 1) to identify potentially eligible workers by obtaining lists of workers who were separated or partially separated from trade-affected firms during the period covered by certification and 2) to notify each potentially eligible worker in writing.


Importantly, in the Initial Implementation Study, all states in the sample indicated that they request lists of workers from certified employers. Furthermore, employers generally comply, although states sometimes have difficulty obtaining lists from smaller firms, and from companies that move their operations to another state or go out of business. Most states maintain these lists in machine-readable form. Thus, these lists are reasonably comprehensive and available, and contain identifying information on most workers who are potentially eligible for TAA services.


Based on historical petition data and Congressional Budget Office (CBO) projections, we estimate that the certified-worker universe will contain about 200,000 workers. A random sample of workers from this universe will be selected as follows:


  • The contractor will request from 25 randomly selected states the worker lists supplied by firms that become certified for TAA between August 1, 2004 and July 31, 2005. It is expected that the contractor will receive data on about 160,000 workers from the 25 states. This schedule will ensure that the sample will be eligible for TAA services after the implementation of the 2002 reforms (which took effect in August 2003), will not be affected by seasonal layoff patterns, and will be representative of most workers laid off during the period covered by the certification.13


  • The certified-worker sample will next be restricted to those who receive UI benefits. The study will include only UI recipients in the sample, because few UI nonrecipients are eligible to receive TAA benefits. Furthermore, because the comparison group sample will be selected from UI recipients, UI claims records data will be needed for matching purposes.


  • The contractor will select 24,000 certified workers meeting these criteria, using stratified random sampling methods. The number of sample members selected from each state will be predetermined to obtain a self-weighting sample (see section B2). Within each state, the contractor will randomly select workers within strata to ensure that key subgroups of workers will be proportionately represented in the study samples. There is no plan to over-sample certain groups of workers, because that would yield a sample that is no longer self-weighting and would reduce the precision of estimates for the full sample. Key stratifying variables will likely include age, gender, race/ethnicity, education level, and industry.14 The stratified samples will be selected within each state by 1) assigning each sample member to a stratum; 2) calculating the number of workers to select from each stratum on the basis of the stratum’s share of the sample universe in the state; and 3) randomly selecting the allocated number of sample members from each stratum (a minimal sketch of this allocation follows this list).


  • Baseline and follow-up interviews will be conducted by telephone with a random subset of the sample. Interviews will be conducted at baseline, 15 months later, and 15 months after that. An 80 percent response rate among those in the sampling frame is expected in each round of interviews. The sample allocation for the surveys is discussed in more detail later in this section.
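
As noted in the third bullet above, the following is a minimal sketch of proportional within-state allocation; the strata names, counts, and state sample size are invented for illustration only.

    # Sketch of proportional stratified allocation within one state. Strata
    # and counts are invented; the study's strata would be built from UI
    # records (e.g., age, gender, race/ethnicity, education level, industry).
    frame = {"manufacturing": 4000, "textiles": 2500, "electronics": 1500}
    state_sample_size = 960   # predetermined to yield a self-weighting sample

    total = sum(frame.values())
    allocation = {
        stratum: round(state_sample_size * count / total)
        for stratum, count in frame.items()
    }
    print(allocation)   # workers to select at random from each stratum's list
    # {'manufacturing': 480, 'textiles': 300, 'electronics': 180}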


The certified-worker sample can be used to address all key research questions pertaining to the impacts of the TAA program. The distribution of services and benefits received by the sample will be representative of those provided nationally to TAA certified workers. Thus, the sample can be used to examine the overall effectiveness of the services provided to TAA certified workers as well as the effectiveness for specific arrays of services and benefits (including those delivered by other programs within the One-Stop Career Center system). Furthermore, the sample can be used to address many important questions for the process study, such as the timing and types of services and benefits received by TAA program participants. Moreover, because the sample will also contain those who will not receive TAA services, it can be used to estimate program take-up rates, reasons for nonparticipation, and the extent to which nonparticipants receive other non-TAA services.


Universe of TRA Beneficiaries. The impact study will also select a nationally representative sample from the universe of TRA beneficiaries. The primary advantage of this sample universe over the certified-worker universe is that the UI records data contain information on all TRA beneficiaries nationwide, whereas the lists of certified workers that firms provide to states may not be fully representative of all TAA-eligible workers (although, as discussed, they are likely to be largely representative). The main disadvantage of the TRA-beneficiary sample is that it will exclude those who do not receive TRA benefits but receive other TAA services. Hence, the sample cannot be used to estimate impacts for trainee-only groups. Another important disadvantage of the TRA-beneficiary sample is that it cannot be used to examine issues pertaining to program take-up rates. We believe that the use of both the certified-worker sample and the TRA-beneficiary sample can improve the ability of the evaluation to yield informative conclusions about program impacts, because we will be able to compare the consistency of results using the two samples.


The TRA-beneficiary sample will be selected from the estimated universe of about 60,000 TRA recipients as follows:


  • Information will be requested from the 25 randomly selected states on all those who receive a TRA first payment between April 1, 2005 and March 31, 2006. Based on historical TAA data, we expect to receive information on about 48,000 workers from these states. Because TRA payments typically start about six months after workers start receiving UI benefits, there should be significant overlap in the TRA-beneficiary and certified-worker participant sample frames (TAA participants are distinguished from nonparticipants in the certified-worker sample because the former received a TRA payment).


  • The sample group will include 12,000 TRA beneficiaries selected using stratified random sampling methods. The key stratifying variables will be the same ones (discussed above) that will be used to select the certified-worker sample.


  • Administrative records data will be collected for these sample members, but not interview data. To conserve costs, we will conduct telephone interviews with the certified-worker sample only, not with the TRA beneficiary sample.15


Selection of Comparison Groups. To effectively gauge the net impact of the TAA program on the employment-related outcomes of program participants, the study must determine what the outcomes of these participants would have been in the absence of the program. In order to do this, the evaluation will employ a quasi-experimental comparison group design—based on propensity scoring—to obtain estimated impacts. Consequently, the evaluation requires that data be collected from a comparison group of workers otherwise similar to those in the TAA samples.


One obvious approach, which was rejected, is to define the treatment group to consist of eligible workers in TAA-certified firms who become TAA participants, and to define the comparison group to consist of eligible workers in TAA-certified firms who do not. We believe this approach is seriously flawed for two reasons. First, program activity generated by TAA could affect all workers in certified firms, regardless of whether they become TAA participants. This possibility is especially acute given the 2002 TAA Trade Act’s emphasis on providing rapid response assistance to workers as soon as possible after a petition is filed. Such services, to the extent they are successful, would obviate the need for TAA enrollment. Second, substantial selectivity bias may result from choosing a comparison group consisting of eligible workers who choose not to seek TAA services. For both these reasons, comparing the outcomes of TAA participants and nonparticipants from among workers in certified firms would likely yield a seriously biased estimate of program impacts.


As the superior alternative, the study will obtain comparison groups from workers in each state’s regular UI program who are not eligible for TAA services and who live in the same areas and work in the same industries as the TAA sample. We believe for several reasons that this is the best source for obtaining the comparison group. First, the TAA population is a subset of the UI population, so that suitable matches for the TAA sample can be found. Second, matching can be performed using UI records data that are available, at reasonable cost, for both the TAA and potential comparison group members and that contain fairly detailed demographic and employment-related information. Thus, developing a sample frame from which to select the comparison group is straightforward. The main features of the comparison group design are as follows:


  • The study will select the comparison group for the certified-worker sample from the universe of those who receive a UI first payment over the same period as the certified-worker sample. The variables used in the matching process will be constructed from UI records data and are displayed in Table 1A.


  • The study will select the comparison group for the TRA-beneficiary sample, however, from the universe of UI exhaustees. This is because workers certified for TAA must first exhaust their regular UI entitlements (including Emergency Benefits) before they can receive a first TRA payment.


  • Because of the importance to the evaluation of obtaining the best possible matches, the study will employ a two-stage matching process. In the first stage, UI data will be used to obtain matched-comparison samples that are twice as large as the TAA samples. Baseline interviews will then be conducted with more comparison than TAA group members. In the second stage, we will re-match comparison to TAA group members using richer matching variables from the baseline interview data. The resulting TAA and comparison group samples will be of similar size, and we will conduct follow-up interviews with these sample members only. This design will increase the comparability of the TAA and comparison groups, which will increase the credibility of the impact findings.


  • The study will use propensity score matching (Rosenbaum and Rubin 1983) to obtain the matched-comparison samples. Several recent, influential studies using propensity scoring were able to replicate experimentally based impact estimates (for example, Dehejia and Wahba 1999; Glazerman et al. 2002). The study will also employ several specification tests found in the literature to examine the validity and credibility of the impact findings.


Within each state, the propensity scoring procedure will be implemented in four steps (a minimal illustrative sketch follows the list):


  1. Estimate a probability model of TAA-eligibility status. A logit model will be estimated, where a binary dependent variable that equals one for a TAA sample member and zero for potential comparison group members is regressed on the matching variables from the UI claims records. The contractor will conduct separate models for the certified-worker and TRA-beneficiary samples (and by gender and industry).

  2. Assign a propensity score to each individual. The propensity score is the predicted probability from the logit model. It is a single number that is a function (weighted sum) of the individual’s values for the matching variables.

  3. Select comparison group members using propensity scores. For each TAA sample member, the contractor will select the comparison group member with the closest absolute propensity score, or the “nearest neighbor.” The selection process will be done with replacement, so that a potential comparison group member can be matched to several TAA sample members. The contractor will also explore the use of other matching approaches, such as caliper matching (where, for each TAA sample member, we will select comparison group members with propensity scores within a fixed bandwidth of that member’s propensity score), and perhaps kernel matching (where all comparison group members will be matched to each TAA sample member, with weights inversely proportional to the distance between the propensity scores of the comparison group member and the TAA group member). Smith and Todd (2000) found that their impact results were robust to the choice of the matching method used to select the comparison group. Thus, the primary approach for the study will be to use the simplest nearest-neighbor method, although the contractor will assess the sensitivity of the matches using caliper and kernel matching.

  4. Assess the adequacy of the matching process. The contractor will compare the distribution of the matching variables and propensity scores of TAA and comparison group members within various propensity scoring classes (defined by the size of the propensity scores). If the matching process is determined to be unsatisfactory on the basis of these statistical tests, the contractor will re-estimate the logit models by including interaction and quadratic terms as additional matching variables in the models (Dehejia and Wahba 1999; and Rubin 2001). This process will be continued until a satisfactory model specification is found.
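To make these steps concrete, the following is a minimal sketch in Python, assuming a pandas DataFrame df with a binary taa indicator (1 = TAA sample member, 0 = potential comparison group member) and UI-record matching variables; the column names and the statsmodels toolchain are illustrative assumptions, not part of the study plan:

    import statsmodels.api as sm

    # Illustrative UI-record matching variables (assumed column names).
    covariates = ["age", "base_period_earnings", "weekly_benefit_amount", "tenure"]

    # Step 1: logit model of TAA status on the matching variables.
    X = sm.add_constant(df[covariates])
    logit = sm.Logit(df["taa"], X).fit(disp=0)

    # Step 2: the propensity score is the predicted probability.
    df["pscore"] = logit.predict(X)

    # Step 3: nearest-neighbor matching with replacement.
    taa_group = df[df["taa"] == 1]
    pool = df[df["taa"] == 0]
    match_ids = {
        i: (pool["pscore"] - p).abs().idxmin()  # closest score; reuse allowed
        for i, p in taa_group["pscore"].items()
    }
    comparison = pool.loc[list(match_ids.values())]

    # Step 4 (abbreviated): compare covariate means of the matched groups;
    # poor balance triggers re-specification with interactions and quadratics.
    balance = taa_group[covariates].mean() - comparison[covariates].mean()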

The propensity scoring procedure should yield TAA and comparison groups with very similar observable characteristics. However, there may remain unobservable differences between the groups that are correlated with the key outcome measures, and these differences could lead to biased impact estimates. Although it is difficult to test for these unobservable differences, the contractor will employ several specification tests found in the literature to examine the validity of study findings. One such test, used by Heckman and Hotz (1989), is to conduct the matching process using baseline characteristics measured several periods before the intervention begins. Earnings “impacts” in the ensuing (but still pre-intervention) period should equal zero if the matching process is successful. Another test that the contractor will use is to examine post-intervention impacts for those in the certified-worker sample who receive very few services; mean outcomes should be similar for these workers and their matched-comparison group members (that is, program impacts should be zero for this group).
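As an illustration, the Heckman-Hotz placebo check can be sketched by continuing the hypothetical objects above, with pre_earnings denoting earnings in a period after the matching characteristics were measured but still before the intervention (an assumed column name):

    from scipy import stats

    # If matching succeeded, the TAA-comparison gap in pre-intervention
    # earnings should be statistically indistinguishable from zero.
    placebo = stats.ttest_ind(taa_group["pre_earnings"], comparison["pre_earnings"])
    print(f"placebo earnings gap p-value: {placebo.pvalue:.3f}")  # expect p > .05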


Sample and Survey Allocation. The contractor considered several factors to design the appropriate sample allocation for the TAA evaluation. First, because the certified-worker sample will contain both TAA participants and TAA nonparticipants, the study needed to specify how the sample should be divided across these two groups and what share of the interviews would be devoted to each. Second, the study needed to determine the sample allocation across the two TAA samples. Third, the study needed to determine the sample allocation across the TAA and comparison group samples. Finally, the study had to determine the number of interviews to conduct at baseline, 15 months, and 30 months.


In order best to meet the myriad study objectives within project resources, the sample allocation for the evaluation is as follows (see Table 6):


  • There will be 12,000 TAA participants and 12,000 TAA nonparticipants selected from the certified-worker lists. Because program take-up rates are expected to be about 30 percent for program-eligible workers, most of those on the certified-worker lists will be nonparticipants. Thus, to select our samples, we will identify program participants and nonparticipants using information on TRA benefit receipt from the UI records and select 12,000 workers from each stratum.16 We will obtain 24,000 matched-comparison group members for each TAA group, yielding a total sample of 72,000 workers. We will obtain administrative records data for these TAA and comparison group members, and survey data for a random subset of them.


  • A stratified random sample of 12,000 TRA beneficiaries will be selected. The contractor will select 24,000 matched UI exhaustees as comparison group members. They will collect administrative records data for these sample members, but not interview data.


  • Baseline interviews with about 8,000 sample members in the certified-worker sample will be completed. The evaluation will cover both the TAA participants and nonparticipants in the certified-worker sample, although a greater share of survey resources will be devoted to the participant group, because we expect program impacts to be larger for this group. Thus, we will conduct twice as many interviews with TAA participants as with nonparticipants. Furthermore, we will conduct twice as many interviews with comparison group members as with TAA group members. We expect to achieve an 80 percent response rate to the baseline interview and, hence, will release a stratified random sample of about 10,000 workers for interviews.


  • There will be about 5,310 15-month and 3,540 30-month follow-up interviews completed with those in the certified-worker sample. The contractor will conduct 15-month interviews with both the participant and nonparticipant groups and with their respective re-matched comparison group members. The contractor will update the TAA participant status designations using baseline interview and TRA benefits data (and, if available, TAA program data). The 30-month interviews will be conducted with the participant group (and its matched comparison group) only. We expect to achieve an 80 percent response rate for the 15- and 30-month interviews (that is, at each follow-up point, we expect to complete interviews with 80 percent of the original sample released for interviews).



TABLE 6

SAMPLE ALLOCATION FOR THE TAA IMPACT EVALUATION

(Columns two through five pertain to the certified-worker sample; the last two columns pertain to the TRA-beneficiary sample.)

Data Source | TAA Participants | Comparison Group for Participants(a) | TAA Nonparticipants | Comparison Group for Nonparticipants(a) | TRA Beneficiaries | Comparison Group for TRA Beneficiaries(a)
Records Data | 12,000 | 24,000 | 12,000 | 24,000 | 12,000 | 24,000
Number Released for Interviews (9,990) | 2,220 | 4,440 | 1,110 | 2,220 | 0 | 0
Number of Completed Interviews (16,815):
  Baseline (7,965) | 1,770 | 3,540 | 885 | 1,770 | 0 | 0
  15-month (5,310) | 1,770 | 1,770 | 885 | 885 | 0 | 0
  30-month (3,540) | 1,770 | 1,770 | 0 | 0 | 0 | 0

a Follow-up interviews will be conducted with only those comparison group members who are re-matched to TAA group members using baseline interview data.


Universe for the Process Analysis


The local-area survey will be conducted with the universe of respondents—each of the TAA Coordinators overseeing TAA activities in local areas. There are estimated to be approximately 700 respondents, who will be identified through a two-step interviewing process. In step one, State TAA Coordinators will be administered a brief telephone survey, asking them to identify who in their State oversees or coordinates TAA activity at the local level and to provide us with contact information for these respondents. In step two, these local coordinators will be surveyed directly. An 80 percent response rate is anticipated.


For the qualitative research relating to the process study that supports the impact analysis, we will conduct site visits to each of the 25 states whose data are being examined as part of the impact study (see B2.a, below, for a discussion of how these states are selected). Two local areas within each of these states will be selected by sampling certified petitions randomly with probability proportional to size, with size measured as the number of affected workers identified on the petition.


2.  Describe the procedures for the collection of information including:

 

Statistical methodology for stratification and sample selection,

Estimation procedure,

Degree of accuracy needed for the purpose described in the justification,

Unusual problems requiring specialized sampling procedures, and

Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


a. Statistical Methodology


The impact evaluation will be conducted using samples from 25 states that were randomly selected in geographic strata with probabilities proportional to the expected number of TAA-eligible workers in the state. The process analysis site visits will also be conducted in these same 25 states, although the local area mail survey will be administered to TAA Coordinators in all states.


Selection of States. The study samples will be selected from a random subset of states rather than from all states nationwide, for two reasons: (1) the TAA caseload is relatively concentrated and (2) sample selection and data acquisition costs increase significantly with the number of states selected. Although a clustered sample of states will result in a slight loss in the precision of the impact estimates (but no bias), the savings in resources and reduced administrative complexity provided by clustering more than offset this loss.


To select the 25 states, we obtained from DOL petition data on all TAA and NAFTA industry certifications from fiscal year (FY) 1999 through the second quarter of FY 2004. These petition data provide a sample universe from which to select the states, because each petition contains information on the estimated number of trade-affected workers (that is, those who are likely to lose their jobs in the period covered by the certification). These data are likely to be a good proxy for the actual number of trade-affected workers, although we will use post-stratification methods to adjust the sample weights after the sample universe is observed. The petition data contain information on 10,408 certified firms, covering nearly 1.3 million dislocated workers.17


Table 7 displays: (1) state shares of the number of trade-affected workers (calculated as the simple average of the state shares for each fiscal year);18 (2) the estimated number of certified workers between August 2004 and July 2005 (the period covered by the study) in each state, using projections that 200,000 workers nationwide will be certified during this period; and (3) state selection probabilities. The state selection probabilities (weights) were scaled to sum to 25, the number of states included in the study. States are ordered by their shares of the TAA population, from largest to smallest. Interestingly, the state orderings remain fairly constant across years (not shown).


Using the figures in Table 7, we randomly selected 25 states with probabilities proportional to state shares of the eligible TAA population. Twelve states (North Carolina, Texas, Pennsylvania, California, Tennessee, Ohio, Georgia, New York, Illinois, Michigan, Alabama, and Virginia) were chosen with certainty.19 Two additional states (Wisconsin and South Carolina) were also chosen with certainty, because the probabilities of selecting these states were .97 and .95, respectively. These 14 certainty states contain about 65 percent of the eligible TAA population.


The remaining 11 noncertainty states were randomly sampled from the universe of 39 noncertainty states (including the District of Columbia and Puerto Rico), with the probabilities shown in column five of Table 7. We selected the noncertainty states by stratifying them by the six DOL regions and using a systematic sampling approach; this ensured that the sample of states would be dispersed geographically. Geographic stratification is a useful way of ensuring that the sample of states represents the full range of TAA programs and participants, because states within a geographic area tend to have similar industries, workers, and labor markets. The selected noncertainty states (Massachusetts, New Jersey, Mississippi, Kentucky, South Dakota, Arkansas, Minnesota, Indiana, Nevada, Washington, and Oregon) contain about 19 percent of the eligible TAA population. Consequently, our sample of certainty and noncertainty states contains about 84 percent of the eligible TAA population.
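As an illustration of the selection mechanics (a sketch, not the study's production code), the following assumes shares is a {state: population share} dictionary and region maps each state to its DOL region; the certainty rule mirrors footnote 19, and the single systematic pass over region-sorted states approximates the stratified PPS draw described above:

    import numpy as np

    def select_states(shares, region, n=25, seed=0):
        rng = np.random.default_rng(seed)
        pool = dict(shares)
        certainty = []
        # Iteratively take with certainty any state whose scaled weight
        # exceeds 1 given the remaining slots (footnote 19's rule).
        while True:
            slots = n - len(certainty)
            total = sum(pool.values())
            over = [s for s, w in pool.items() if slots * w / total >= 1]
            if not over:
                break
            certainty += over
            for s in over:
                pool.pop(s)
        # Systematic PPS over the noncertainty states, sorted by region so
        # the sample is dispersed across the geographic strata.
        order = sorted(pool, key=lambda s: (region[s], -pool[s]))
        cum = np.cumsum([pool[s] for s in order])
        step = cum[-1] / slots
        picks = rng.uniform(0, step) + step * np.arange(slots)
        return certainty, [order[np.searchsorted(cum, p)] for p in picks]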


After we selected the 25-state sample, we also selected six “replacement” states to substitute for “primary” states that decline to participate in the study. We selected one replacement state for each region using the sampling techniques discussed above. We will contact a replacement state in a region if we cannot obtain administrative data for any of the primary states in that region.


TABLE 7

STATE SELECTION PROBABILITIES FOR THE TAA EVALUATION

State | DOL Region | Average Annual Share of Trade-Affected Workers in Certified Firms, FY 1999 to Q2 FY 2004 (Percent)(a) | Estimated Number of Trade-Affected Workers in Sampling Period | State Selection Probability Under a 25-State Design
North Carolina | 3 | 9.3758 | 18,752 | 1.0000
Texas | 4 | 7.0382 | 14,076 | 1.0000
Pennsylvania | 2 | 6.7433 | 13,487 | 1.0000
California | 6 | 5.5598 | 11,120 | 1.0000
Tennessee | 3 | 4.1321 | 8,264 | 1.0000
Ohio | 5 | 4.0454 | 8,091 | 1.0000
Georgia | 3 | 3.9131 | 7,826 | 1.0000
New York | 1 | 3.7771 | 7,554 | 1.0000
Illinois | 5 | 3.6975 | 7,395 | 1.0000
Michigan | 5 | 3.6482 | 7,296 | 1.0000
Alabama | 3 | 3.5264 | 7,053 | 1.0000
Virginia | 2 | 3.2075 | 6,415 | 1.0000
Wisconsin | 5 | 3.0740 | 6,148 | 1.0000
South Carolina | 3 | 3.0284 | 6,057 | 1.0000
Kentucky | 3 | 2.4518 | 4,904 | 0.7654
Indiana | 5 | 2.4406 | 4,881 | 0.7620
Mississippi | 3 | 2.3477 | 4,695 | 0.7330
Washington | 6 | 2.2092 | 4,418 | 0.6897
Missouri | 5 | 2.2088 | 4,418 | 0.6896
New Jersey | 1 | 2.0659 | 4,132 | 0.6450
Oregon | 6 | 2.0329 | 4,066 | 0.6347
Massachusetts | 1 | 1.8321 | 3,664 | 0.5720
Minnesota | 5 | 1.6641 | 3,328 | 0.5195
Arkansas | 4 | 1.6082 | 3,216 | 0.5021
Florida | 3 | 1.5537 | 3,107 | 0.4851
Arizona | 6 | 1.1848 | 2,370 | 0.3699
Oklahoma | 4 | 1.1424 | 2,285 | 0.3567
Colorado | 4 | 1.0150 | 2,030 | 0.3169
Maine | 1 | 0.9640 | 1,928 | 0.3010
Louisiana | 4 | 0.9238 | 1,848 | 0.2884
Kansas | 5 | 0.7909 | 1,582 | 0.2469
Puerto Rico | 1 | 0.6840 | 1,368 | 0.2135
Connecticut | 1 | 0.6725 | 1,345 | 0.2100
Idaho | 6 | 0.5331 | 1,066 | 0.1664
Iowa | 5 | 0.4972 | 994 | 0.1552
Alaska | 6 | 0.4695 | 939 | 0.1466
New Hampshire | 1 | 0.4281 | 856 | 0.1336
Maryland | 2 | 0.4074 | 815 | 0.1272
West Virginia | 2 | 0.3931 | 786 | 0.1227
New Mexico | 4 | 0.3882 | 776 | 0.1212
South Dakota | 4 | 0.3861 | 772 | 0.1205
Utah | 4 | 0.3384 | 677 | 0.1056
Rhode Island | 1 | 0.3009 | 602 | 0.0940
Vermont | 1 | 0.2751 | 550 | 0.0859
Nebraska | 5 | 0.2699 | 540 | 0.0843
Nevada | 6 | 0.2272 | 454 | 0.0709
Montana | 4 | 0.2250 | 450 | 0.0702
Delaware | 2 | 0.1093 | 219 | 0.0341
Wyoming | 4 | 0.0946 | 189 | 0.0295
North Dakota | 4 | 0.0920 | 184 | 0.0287
Hawaii | 6 | 0.0059 | 12 | 0.0018
District of Columbia | 2 | 0.0004 | 1 | 0.0001
Total | | 100.0000 | 200,000(b) | 25.0000

Source: DOL petition data on all industry certifications from FY 1999 to the second quarter of FY 2004.

a Figures pertain to the estimated number of trade-affected workers denoted in each petition.

b Estimate based on historical data and Congressional Budget Office projections.



This process yielded the sample of states shown in Table 8. Importantly, the regional distribution of workers in the selected sample of states is very similar to the regional distribution across all states nationwide (Table 8).


Selecting the TAA Samples for the Impact Analysis. We will generate self-weighting TAA samples, which will maximize the precision of the impact estimates for a given sample size of workers. We will obtain the sample sizes in each of the selected states using the following formula:

n_s = f × (N_s / p_s),
where ns is the number of TAA-certified workers selected in state s, Ns is the total number of TAA-certified workers in state s, and ps is the probability that state s was selected (using the figures in column five of Table 7). The term f is the national sampling fraction for the population being sampled, and will be selected so that the state samples will sum to about 12,000 for TAA participants in the certified-worker sample, to 12,000 for TAA nonparticipants in the certified-worker sample, and to 12,000 for TRA beneficiaries.


This formula sets the sample in each state (ns) so that the probability of selection is f for all program-eligible workers. The total probability that a worker is selected is the probability the state is chosen (ps) times the probability that a person is chosen in the state (ns/Ns).

TABLE 8

SELECTED STATES FOR THE TAA EVALUATION, BY REGION

Region | States in 25-State Sample | Replacement State | Share of Workers in TAA-Certified Firms, 25-State Sample (Percent) | Share of Workers in TAA-Certified Firms, All States (Percent)
Region 1 | Massachusetts; New Jersey; New York(c) | Maine | 9 | 11
Region 2 | Pennsylvania(c); Virginia(c) | West Virginia | 12 | 11
Region 3 | Alabama(c); Georgia(c); Kentucky; Mississippi; North Carolina(c); South Carolina(c); Tennessee(c) | Florida | 34 | 30
Region 4 | Texas(c); Arkansas; South Dakota | Oklahoma | 11 | 13
Region 5 | Illinois(c); Indiana; Michigan(c); Minnesota; Ohio(c); Wisconsin(c) | Missouri | 22 | 22
Region 6 | California(c); Nevada; Oregon; Washington | Idaho | 12 | 12

c Denotes certainty state.

As an illustration, to obtain a self-weighting sample of 12,000 TAA participants from the certified-worker lists, approximate state sample sizes will be as follows: 1,125 in North Carolina, 845 in Texas, 810 in Pennsylvania, 670 in California, 500 in Tennessee, 485 in Ohio, 470 in Georgia, 450 in New York, 440 in Illinois, 440 in Michigan, 425 in Alabama, 385 in Virginia, 370 in Wisconsin, 365 in South Carolina, and 385 in each of the noncertainty states.
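A minimal sketch of this calibration, assuming N is a {state: number of TAA-certified workers} dictionary and p holds the selection probabilities from Table 7 (both illustrative inputs):

    def allocate(N, p, target=12_000):
        # Calibrate the national sampling fraction f so that the state
        # samples, n_s = f * N_s / p_s, sum to the target.
        f = target / sum(N[s] / p[s] for s in N)
        return {s: round(f * N[s] / p[s]) for s in N}

Because the noncertainty selection probabilities are proportional to the state counts, N_s/p_s is constant across those states, which is why the illustration above assigns the same number of cases (about 385) to every noncertainty state.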


Finally, as discussed, we will conduct telephone surveys with a random subsample of the certified-worker sample and its comparison group. The survey sample will be selected by state using stratified random sampling techniques.


Selecting Respondents for the Local Survey. No statistical methodology will be employed for sample selection because the survey is being administered to all members of the universe. As noted, respondents will be identified through a prior telephone survey of State TAA Coordinators, who will be asked to provide names and addresses of all staff overseeing or supervising TAA activity at the local level.


b. Estimation Procedures


The plans for the statistical analysis of the data for the process, impact, and benefit-cost analyses were discussed in A16 above.

c. Precision of Estimates


The evaluation will provide a broad range of information on the characteristics of TAA-certified workers, as well as on program impacts for the full sample and key subgroups defined by participant and program characteristics. Table 9 presents the precision of key estimates for these analyses. The table presents 95 percent confidence intervals for examining a 50 percent characteristic (the most conservative assumption) for TAA participants and nonparticipants. The table also presents minimum detectable differences (MDDs) across the TAA and comparison groups on quarterly earnings and on a 50 percent characteristic (such as the employment rate or the percentage returning to the pre-layoff job). The MDDs are calculated for participants, nonparticipants, and the combined samples, as well as for estimates based on the records and follow-up interview samples. Notes to the table show our assumptions about confidence level, power, and reductions in variance due to regression. The precision of the estimates incorporates design effects due to the clustering of states selected for the analysis. Design effects for the MDD calculations are about 1.16 for impacts based on the follow-up interview sample and about 2.9 for participant impacts based on the large records sample.


This design will yield adequate levels of precision both for the descriptive analyses of the demographic and training-related experiences of TAA-certified workers and for examining differences in the mean outcomes of the TAA and comparison groups. For example, for the overall participant sample, we would expect to detect a significant earnings impact if the true program impact was $122 or more using the administrative records sample and $242 or more using the survey sample. Because the previous TAA analysis (Corson et al. 1993) estimated the TAA impacts on earnings to be about $300 per quarter, our design could detect this benchmark impact using either the records or survey data. In addition, the MDDs are near target levels using the survey data for 50 percent subgroups of states or workers.


TABLE 9

MINIMUM DETECTABLE DIFFERENCES (MDDs) AND 95 PERCENT CONFIDENCE INTERVALS FOR THE TAA EVALUATION

Sample | MDD: Quarterly Earnings (Dollars) | MDD: 50 Percent Characteristic (Percentage Points) | 95 Percent Confidence Interval: 50 Percent Characteristic (Percentage Points)

Records Sample:
  TAA Participants | 122 | 2.0 | 1.5
  TAA Nonparticipants | 122 | 2.0 | 1.5
  Participants and Nonparticipants | 93 | 1.6 | 1.4
  TAA Participants: 50 percent subgroup of states | 173 | 2.9 | 2.1
  TAA Participants: 25 percent subgroup of states | 245 | 4.1 | 3.0
  TAA Participants: 50 percent subgroup of workers across all states | 143 | 2.4 | 1.7
  TAA Participants: 25 percent subgroup of workers across all states | 177 | 2.9 | 2.1

Follow-up Interview Sample:
  TAA Participants | 242 | 4.0 | 2.6
  TAA Nonparticipants (15-Month Sample Only) | 327 | 5.5 | 3.5
  Participants and Nonparticipants (15-Month Sample) | 240 | 4.0 | 3.2
  TAA Participants: 50 percent subgroup of states | 342 | 5.7 | 3.7
  TAA Participants: 25 percent subgroup of states | 483 | 8.1 | 5.2
  TAA Participants: 50 percent subgroup of workers across all states | 327 | 5.5 | 3.5
  TAA Participants: 25 percent subgroup of workers across all states | 452 | 7.5 | 4.7


Note: The MDD calculations assume: (1) a 95 percent confidence level for a one-tailed test, (2) an 80 percent level of power, (3) that the variance of the estimates is reduced by 20 percent owing to the use of regression models, and (4) a standard deviation of $3,000 for quarterly earnings (based on results from Needels et al. 2002; Schochet et al. 2001; Corson et al. 1998; Bloom et al. 1993; and Corson et al. 1993). The MDDs were calculated using the following formula (the confidence intervals were calculated using a similar approach):


MDD = (z_.95 + z_.80) × sqrt{ 2(1 − R²)σ² [p_c²/m_c + (1 − p_c)²/m_n] + 2ρ(1 − c)(1 − f)σ²(1 − p_c)²/s_n },


where z_.95 and z_.80 are the standard normal critical values corresponding to the confidence and power levels, R² is the regression R-squared value, σ is the standard deviation of the outcome, p_c (= .65) is the population share in the certainty states, m_c (m_n) is the sample size for each research group in the certainty (noncertainty) states, s_n (= 14) is the number of noncertainty states in the sample, f (= .54) is the finite population correction in the noncertainty states, ρ (= .03) is the between-state variance as a percentage of the total variance of the outcomes based on previous studies, and c (= .30) is the correlation between the mean outcomes of TAA and comparison group members within the same state.
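As a rough cross-check, a simplified version of this calculation that folds all clustering into the design effects quoted earlier (an approximation; the formula above carries the state-level variance terms explicitly) comes close to the headline values in Table 9:

    import math

    def mdd(n_t, n_c, sd=3_000.0, r2=0.20, deff=1.0,
            z_alpha=1.645, z_beta=0.8416):
        # MDD = (z_alpha + z_beta) times the standard error of the
        # regression-adjusted TAA-comparison difference.
        var = (1 - r2) * deff * sd**2 * (1 / n_t + 1 / n_c)
        return (z_alpha + z_beta) * math.sqrt(var)

    print(round(mdd(1_770, 1_770, deff=1.16)))   # 242: 15-month participant MDD
    print(round(mdd(12_000, 24_000, deff=2.9)))  # ~127, vs. 122 in Table 9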

The study design also provides a sufficient level of precision for detecting earnings impacts large enough to produce a positive net benefit of the TAA program from both the government’s and society’s perspectives. The CBO projects that TAA program costs in 2004 will be about $12,500 per participant.20 If we assume that (1) TRA benefits are a transfer from taxpayers to program participants (so that these payments do not enter the benefit-cost calculations from society’s perspective) and (2) TRA payments represent about 60 percent of program costs (leaving nontransfer costs of about $5,000 per participant), then earnings impacts would need to average about $320 per quarter during the follow-up period for benefits to society to offset costs. Again, an impact of this size can be detected under the sample design.


3.   Describe methods to maximize response rates and to deal with issues of non-response.  The accuracy and reliability of information collected must be shown to be adequate for intended uses.  For collections based on sampling, a special justification must be provided for any collection that will not yield reliable data that can be generalized to the universe studied.


A. Methods for Maximizing Response Rates


Baseline and Follow-up Interviews. Several strategies will be used to achieve a high response rate to the baseline and follow-up surveys. First, before interviewing begins, an advance letter describing the purpose and sponsorship of the survey will be mailed to potential respondents. This letter will assure potential respondents that the caller is conducting a legitimate research interview and not soliciting donations or selling anything. Letters will be sent about a week before the sample is released to the CATI call scheduler. The letter will request up-to-date contact information and provide a toll-free call-in number.


Second, staff from the contractor’s experienced pool of interviewers will be recruited and extensively trained. They will be thoroughly schooled on data collection procedures, including methods for promoting cooperation among sample members. Interviewers especially skilled at encouraging cooperation will be available to persuade reluctant respondents to participate and will be assigned to attempt conversions with respondents who initially refuse (except for hostile refusals). Bilingual interviewers will also be available for conducting interviews in Spanish.


Third, call scheduling will allow respondents to select the time most convenient for them to be interviewed. We plan to conduct this survey using CATI, which ensures control of sample releases, call scheduling, and questionnaire logic and completeness.


Fourth, the subcontractor will make extensive use of various on-line databases to try to locate sample members who have moved. In addition, we will attempt interviews with both respondents and nonrespondents to previous interviews, because our experience suggests that interview response rates can be increased using this approach.


These techniques, combined with the $25 monetary incentive, are expected to yield an 80 percent response rate to each round of interviews.


Local Area Survey. Respondents will be able to reply to either a web-based or a mail version of the survey. To maximize response rates, the contractor will follow up with reminder e-mails or postcards to any local-area respondents who do not respond to the initial survey within two weeks. After an additional one to two weeks, a second copy of the survey will be sent to those whose responses are still outstanding. If any local respondent still has not responded to this second request, we will send another reminder in an effort to obtain a response.


Finally, the surveys will be kept simple and short, and will include only questions that are directly related to the intended use of the survey and that draw on the respondents’ presumed expertise. Moreover, no personally sensitive information will be requested.


B. Addressing Nonresponse


When the surveys of TAA and comparison group members are completed, the contractor will conduct a nonresponse analysis to assess whether the respondent sample is representative of the initial population of UI and TAA customers. This analysis will be done using UI administrative claims and wage record data, which will be available for all sample members. These data will include demographic variables (gender, age, race/ethnicity), earnings measures (base period earnings and quarterly earnings from the UI wage records), and UI claim data (weekly benefit amount, maximum benefit amount, weeks collected, dollars collected, participation in reemployment services). If it appears that the respondent sample is not representative of the full UI and TAA populations, we will adjust the sample weights for nonresponse using propensity scoring methods.
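A minimal sketch of such an adjustment, assuming frame is a DataFrame covering the full released sample with a responded flag, the administrative covariates, and a base_weight column (all illustrative names):

    import statsmodels.api as sm

    admin_vars = ["age", "base_period_earnings", "weekly_benefit_amount"]
    X = sm.add_constant(frame[admin_vars])
    response_model = sm.Logit(frame["responded"], X).fit(disp=0)
    frame["p_respond"] = response_model.predict(X)

    # Inflate respondents' base weights by the inverse response propensity;
    # in practice the propensities are often collapsed into weighting
    # classes rather than applied point-wise.
    respondents = frame[frame["responded"] == 1].copy()
    respondents["nr_weight"] = respondents["base_weight"] / respondents["p_respond"]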


C. Reliability of Data Collection


The draft questionnaire for the TAA and UI customers draws extensively on questionnaires developed for other DOL studies, including the Trade Adjustment Assistance Survey (OMB number 1205-0306; expiration date 3/31/1992), the Individual Training Account Experiment Survey (OMB number 1205-0441; expiration date 10/31/2006), and the National Job Corps Study Thirty-Month Follow-Up Interview (OMB number 1205-0360; expiration date 9/30/1998). The questions were designed to be easily understood by respondents. Revisions were made to the draft questionnaire based on an internal review, a review by DOL, and a pretest.


The use of CATI to conduct the survey also helps ensure the reliability of the data. It controls question branching (reducing item nonresponse due to interviewer error), modifies wording (providing memory aids and probes and personalizing questions), and constructs complex sequences that are not possible to produce, or are less accurate, in hard-copy surveys. Probes, verifications, and consistency checks are built into the system to standardize procedures. These procedures ensure the reliability of both the data collection methods and the data collected through those methods. Contractor staff will monitor 10 percent of each interviewer’s work using silent call-monitoring equipment and video monitors that display the interviewer’s screen.

4. Describe any tests of procedures or methods to be undertaken.  Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility.  Tests must be approved if they call for answers to identical questions from 10 or more respondents.  A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Both the Baseline Individual Survey and the Local-Delivery Survey have been pre-tested with nine or fewer respondents.


Pre-Test of the Baseline Individual Survey. Nine pre-tests of the current survey were conducted with TAA participants in Pennsylvania and Texas. The pre-tests assessed the content and wording of individual questions, the organization and format of the questionnaire, respondent burden time, and potential sources of response error. The pretest results were used to modify the questionnaire.


Pre-Test of the Local-Delivery Survey. The telephone screener was pre-tested with State TAA Coordinators in the States of North Carolina, Ohio, and Wisconsin. The survey of TAA local delivery was mailed to three intra-state regional TAA Coordinators in North Carolina and two in Wisconsin. As with the Baseline Individual Survey, the pre-tests assessed the content and wording of questions, the organization and format of the questionnaire, respondent burden time, and potential sources of response error. The pre-test results were used to modify the questionnaire.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The following persons contributed to, reviewed, and/or approved the design, instrumentation, and sampling plan:


Name | Affiliation | Telephone Number
Dr. Ronald D’Amico | Social Policy Research Associates | (510) 763-1499
Dr. Peter Schochet | Mathematica Policy Research | (609) 279-6887
Richard West | Social Policy Research Associates | (510) 763-1499
Dr. Frank Potter | Mathematica Policy Research | (609) 936-2799
Dr. Sheena McConnell | Mathematica Policy Research | (202) 484-4518






REFERENCES

Abadie, A. “Bootstrap Tests for Distributional Treatment Effects in Instrumental Variable Models.” Harvard University Working Paper. September 2005.


Abadie, A. and G. Imbens. “Large Sample Properties of Matching Estimators for Average Treatment Effects.” National Bureau of Economic Research Working Paper. March 2005.


Agodini, R. and M. Dynarski. “Are Experiments the Only Option? A Look at Dropout Prevention Programs.” Review of Economics and Statistics LXXXVI. February 2004.


Amemiya, T. Advanced Econometrics. Cambridge, MA: Harvard University Press. 1985


Baumgartner, R. and P. Rathbun. 1997. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the Annual Conference of the American Association of Public Opinion Research, Norfolk, Virginia.


Berlin, M. et al. “An Experiment in Monetary Incentives.” In Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 1992.


Bloom, H., L. Orr, G. Cave, S. Bell, and F. Doolittle. The National JTPA Study: Title IIA Impacts on Earnings and Employment. Bethesda, MD: Abt Associates, 1993.


Burghardt, J. and J. Homrighausen. National Job Corps Study: Survey Results. Princeton, NJ: Mathematica Policy Research, Inc., 2002.


Church, A.H. “Estimating the Effects of Incentives on Mail Response Rates: A Meta-Analysis.” Public Opinion Quarterly, vol. 57, 1993, pp. 62-79.


Cochran, W. Sampling Techniques. New York: John Wiley and Sons, 1977.


Corson, W., P. Decker, P. Gleason, and W. Nicholson. International Trade and Worker Dislocation: Evaluation of the Trade Adjustment Assistance Program. Princeton, NJ: Mathematica Policy Research, Inc., April 1993.


Corson, W., K. Needels, and W. Nicholson. Emergency Unemployment Compensation: The 1990s Experience. Unemployment Insurance Occasional Paper 98-1. Washington, DC: U.S. Department of Labor, Employment and Training Administration, 1998.


Dehejia, R.H., and S. Wahba. “Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs.” Journal of the American Statistical Association, vol. 94, no. 448, 1999.


DuMouchel, W. H., and G. Duncan. "Using Sample Survey Weights in Multiple Regression Analyses of Stratified Samples." Journal of the American Statistical Association 78(383):535-542, 1983.


Glazerman, S., D. Levy, and D. Myers. “Nonexperimental Replications of Social Experiments: A Systematic Review.” Princeton, NJ: Mathematica Policy Research, Inc., September 2002.


Heckman, J.J., and V.J. Hotz. “Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training.” Journal of the American Statistical Association, vol. 84, no. 408, 1989, pp. 862-874.


James, J. and R. Bolstein. “The Effect of Monetary Incentives and Follow-up Mailings on the Response Rate and Response Quality in Mail Surveys.” Public Opinion Quarterly 54, 1990.


Kish, L. Survey Sampling. New York: John Wiley and Sons, 1965.


Mack, S., V. Huggins, D. Keathley, and M. Sundukchi. 1998. “Do Monetary Incentives Improve Response Rates in the Survey of Income and Program Participation?” Proceedings of the Section on Survey Methodology, American Statistical Association, pp 529-34.


Maddala, G.S. Limited Dependent and Qualitative Variables in Econometrics. Cambridge, UK: Cambridge University Press, 1983.


Markesich, J. and M.D. Kovac. “The Effects of Differential Incentives on Completion Rates: A Telephone Survey Experiment with Low-Income Respondents.” Presented at the Annual American Association of Public Opinion Research, Nashville, TN, May 16, 2003.


Martin, E., D. Abreu, and F. Winters. 2000. “Money and Motive: Results of an Incentive Experiment in the Survey of Income and Program Participation.” Unpublished manuscript, Washington DC: U.S. Bureau of the Census.


Murray, D. Design and Analysis of Group-Randomized Trials. Oxford: Oxford University Press, 1998


Needels, K., W. Corson, and W. Nicholson. Left Out of the Boom Economy: UI Recipients in the Late 1990s. ETA Occasional Paper 2002-2003. Washington, DC: U.S. Department of Labor, Employment and Training Administration, May 2002.


Perez-Johnson, I., et al. The Effects of Customer Choice: First Findings from the Individual Training Account Experiment. Princeton, NJ: Mathematica Policy Research, Inc., December 2004.


Rosenbaum, P., and D. Rubin. “The Central Role of the Propensity Score in Observational Studies for Causal Effects.” Biometrika, vol. 70, 1983.


Rubin, D. “Use of Propensity Scores for Tobacco Litigation.” Harvard University Working Paper, 2001.


Schirm, A., and N. Rodriguez-Planas. The Quantum Opportunity Program Demonstration for Youth: Initial Post-Intervention Impacts. Washington, DC: Mathematica Policy Research, Inc., June 2004. (Also published as ETA Occasional Paper 2004-07, Washington, DC: U.S. Department of Labor, Employment and Training Administration.)


Schochet, P., J. Burghardt, and S. Glazerman. National Job Corps Study: The Impacts of Job Corps on Participants’ Employment and Related Outcomes. Princeton, NJ: Mathematica Policy Research, Inc., June 2001.


Shettle, C. and G. Mooney. 1999. “Monetary Incentives in Government Surveys.” Journal of Official Statistics 15:231-50


Singer, E., R. M. Groves, and A.D. Corning. “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation.” Public Opinion Quarterly, vol. 63, 1999, pp. 251-60.


Singer, E. and R. A. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues. Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs. Edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro. Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press, 2002, pp. 105-28.


Smith, J. and P. Todd. “Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators?” Working Paper, 2000.


Subcommittee on Disclosure Methodology, Office of Management and Budget. 1994. Report on Statistical Disclosure Limitation Methodology: Statistical Policy Working Paper 22.

1 The Survey of Program Dynamics offers a $40 prepaid incentive and the Health and Retirement Survey offers a $20 prepaid incentive but these surveys have better contact information and involve in-person interviewing.

2 We considered offering a $23 post-payment incentive instead, but for simplicity, rounded to the $25 incentive.

3 These calculations assume (1) a 95 percent confidence level; (2) a two-tailed test (due to the uncertainty of the direction of effects); (3) an 80 percent level of power; (4) an 80 percent response rate to the baseline interview for Group 1; (5) a reduction in variance of 20 percent owing to the use of regression models (that is, an R² value of .20); and (6) no clustering effects, because the focus of the experiment is to draw inferences about the effect of each payment type for our sample, rather than to generalize to the sample universe as a whole (which is not a well-defined concept, since many factors influence response rates besides sampling error).

4 Applying multiple comparison corrections increases the minimum detectable differences of 2.9 and 3.5 percent to 3.3 and 4.0 percent, respectively.

5 See Report on Statistical Disclosure Limitation Methodology, Subcommittee on Disclosure Limitation Methodology, Statistical Policy Office of the Office of Management and Budget, 1994.

6 A standard rule of thumb is that units of geography should be reported at a high enough level of aggregation such that there are no fewer than 100,000 individuals in the sampling frame in that unit. No single state would meet this criterion in this study.

7 The overall burden estimate is slightly lower than the estimate provided in our original packet because, based on actual experience, we have found that we require slightly less time spent interviewing respondents on site than we originally estimated. Additionally, we have found that some states with minimal activity have only one local office to visit, rather than the two that we had planned per impact-sample state.

8 The average wage for UI recipients reported in a recent study of this population (Needels et al. 2002) is $16 per hour.

9 The study will also use this model to test the credibility of our comparison group design. By performing the propensity score matching using characteristics measured several periods before displacement, we can estimate the equation using “outcomes” measured prior to displacement. If the matching process was successful, the coefficient on the TAA indicator should be insignificantly different from zero.

10 The contractor will also estimate the regression models without the sample weights to examine the robustness of study findings, and because there is some controversy in the literature about the appropriateness of using weights when estimating multivariate regression models in the absence of choice-based sampling.

11 TAA nonparticipants refers to those on worker lists supplied by employers as being covered by a certification, even though they never became TAA participants; they are assumed to be TAA-eligible by virtue of being on the worker list, even though their TAA eligibility has not been conclusively established.

12 For instance, the study will estimate the following variant of equation (1):

y_i = α₀ + α₁T_i + Σ_j δ_jS_ij + Σ_j λ_j(T_i × S_ij) + β′X_i + ε_i,

where S_ij is an indicator variable equal to 1 for TAA group members in service receipt category j and their matched comparison group members, and 0 for other TAA and comparison group members. In this model, the term (α₁ + λ_j) represents the program impact for TAA group members in service category j relative to their matched comparison group members, holding constant the effects of other services received by TAA group members as well as their baseline characteristics.

13 Workers covered by a certification include those laid off between one year prior to the petition filing date and two years after the petition certification date (which translates into a three- to three-and-one-half-year layoff period). Thus, for the later certifications, our sample will exclude workers laid off many months after the certification date, because these workers will not have been laid off at the time we collect UI records and select our samples.

14 Although the size of these strata will vary by state, TAA program records on exiters between 7/2002 and 7/2003 indicate the following worker composition nationwide: (1) gender (48 percent of exiters were male); (2) age (about 48 percent were less than 45, 33 percent were between the ages of 45 and 55, and 17 percent were older than 55); (3) education (20 percent did not have a high school diploma, 55 percent had a high school diploma only, and 5 percent were college graduates); (4) race (76 percent were white, 17 percent were black); and (5) ethnicity (14 percent were Hispanic). Furthermore, TAA petition data indicate that the industry breakdown of 2003 TAA petitions was as follows: 20 percent for textile industries; 35 percent for industries related to machines, equipment, appliances, and electronics; 13 percent for steel and metal industries; 13 percent for lumber, paper, and furniture industries; and 9 percent for chemical and pharmaceutical industries.

15 However, as we discuss later, we will conduct more interviews with certified workers who became TAA participants than with those who did not, because we expect program impacts to be greater for the former group. The TRA-beneficiary sample will overlap substantially with this group.

16 TRA benefit receipt is likely to be a very good proxy for TAA program participation, because, in the previous TAA evaluation, MPR found that 95 percent of the full TAA population receives TRA benefits (Corson et al. 1993).

17 The number of trade-affected workers was about 225,500 in FY 1999, 145,000 in FY 2000, 215,000 in FY 2001, 345,000 in FY 2002, 213,000 in FY 2003, and 96,000 in the first two quarters of FY 2004.

18 Data on the estimated numbers of trade-affected workers were capped at 1,000 workers to remove the effect of a few outliers. This truncation affected less than 0.5 percent of all petitions.

19 The six states with initial weights greater than 1 were chosen with certainty, because these states had more than 1/25 of the total weight. After removing these six states, we also chose five additional states with certainty, because they had more than 1/19 (1 ÷ [25–6]) of the remaining total weight. Finally, after removing the 11 certainty states, we chose one more state with certainty, because it had more than 1/14 of the remaining total weight.

20 The CBO projects that total outlays for TAA will be about $800 million, and that 60,000 workers will participate in the program. Program costs averaged about $10,000 per beneficiary under prior law.


