
Evaluation of the

Individual Training Account Experiment: OMB Supporting Statement





CONTENTS

A. JUSTIFICATION

1. Circumstances Necessitating the Data Collection

   a. The Experiment
   b. The Evaluation
   c. Data Needs and Data Sources
   d. Additional Follow-Up Survey

2. How, By Whom, and For What Purpose the Information Is to be Used
3. Use of Improved Technology to Reduce Burden
4. Efforts to Identify Duplication
5. Methods to Minimize Burden on Small Businesses or Entities
6. Consequences of Not Collecting the Data
7. Special Data Collection Circumstances
8. Federal Register Notice

   a. Federal Register Notice and Comments
   b. Consultations Outside of the Agency
   c. Tests of Procedures or Methods

9. Respondent Payments
10. Confidentiality
11. Questions of a Sensitive Nature
12. Hour Burden of the Collection of Information
13. Estimated Total Annual Cost Burden to Respondents and Record Keepers
14. Estimated Annualized Cost to the Federal Government
15. Changes in Burden
16. Tabulations
17. Reasons for Not Displaying Expiration Date of OMB Approval
18. Exceptions to the Certification Statement

B. COLLECTION OF INFORMATION INVOLVING STATISTICAL METHODS

1. Respondent Universe and Sampling
2. Statistical Methodology, Estimation, and Degree of Accuracy
3. Methods to Maximize Response Rates and Data Reliability
4. Tests of Procedures or Methods
5. Individuals Consulted on Statistical Methods

REFERENCES

APPENDIX A: SURVEY INSTRUMENT

APPENDIX B: RESEARCH SECTION OF WORKFORCE INVESTMENT ACT

APPENDIX C: FEDERAL REGISTER NOTICE

APPENDIX D: ADVANCE LETTER


TABLES

1. APPROACHES TESTED IN THE EXPERIMENT

2. INDIVIDUAL-LEVEL DATA ITEMS AND SOURCES

3. MINIMUM DETECTABLE DIFFERENCES BETWEEN ITA MODELS


A. JUSTIFICATION

This clearance package seeks an extension of the currently approved follow-up survey (OMB 1205-0441; see Appendix A) for the Individual Training Account (ITA) Experiment. The total response rate for the first administration of the follow-up survey was 82 percent. The requested extension would allow for a longer follow-up period and thereby provide more complete information on the impacts of the approaches on participant training, earnings, and employment.

The evaluation of the ITA experiment currently observes customers’ employment and earnings outcomes for 15 months after random assignment; hence, conclusions will be limited to statements about the short-run impacts of the approaches. Because the proportion of customers still in training at the time of the survey is substantial and varies by approach (from 14 to 17 percent), the short-run impacts may well differ from the longer-run impacts. A longer observation period would help establish the long-run impacts of the approaches.

1. Circumstances Necessitating the Data Collection

The Workforce Investment Act (WIA) of 1998 is bringing about substantial changes in the way training and other employment services are provided to DOL customers. WIA requires workforce investment areas to establish Individual Training Accounts (ITAs), which provide vouchers or other related funding methods that customers can use to pay for training. ITAs are intended to empower customers to choose the training services they need and raise the accountability of states, local areas, and service providers for meeting these needs.

a. The Experiment

Under the authority granted the Employment and Training Administration (ETA) in Section 171 of the Workforce Investment Act (see Appendix B), the ITA Experiment is testing different approaches for managing customer choice in the administration of ITAs. States and local offices have a great deal of flexibility in deciding how much guidance to provide to customers in choosing WIA-funded training. The experiment is testing three approaches that differ widely in both the resources available to customers and the involvement of local counselors in guiding customer choice. The three approaches range from a highly structured approach, in which customers are steered to the highest-return training options, to a true voucher approach, in which customers are offered a lump sum and allowed to choose any state-approved training.

TABLE 1

APPROACHES TESTED IN THE EXPERIMENT

                                 Approach 1:         Approach 2:          Approach 3:
                                 Structured          Guided               Maximum
                                 Customer Choice     Customer Choice      Customer Choice

Counseling                       Mandatory,          Mandatory,           Voluntary
                                 most intensive      moderate intensity

Can Counselors Reject Choices?   Yes                 No                   No

Award Amount                     Customized          Fixed                Fixed

More specifically, the three ITA approaches, which are shown in the table above, vary along three dimensions related to the management of customer choice: (1) the type of counseling provided and whether it is mandatory or voluntary, (2) the ability of local counselors to reject the choices of customers, and (3) the method used to control each customer’s ITA spending.

  • Approach 1, Structured Customer Choice, is the most directive of the three approaches. Customers participate in a series of mandatory assessment and counseling sessions designed to identify promising training opportunities. During these sessions, customers are guided through the estimation of the benefits and costs of alternative training options and directed toward options expected to yield a high return—that is, programs that will generate earnings on a new job that are high relative to the resources invested in training. Once appropriate training has been chosen, customers receive a customized ITA to fully cover the costs of training.

  • Approach 2, Guided Customer Choice, broadly represents the approach that most local agencies have adopted in their transition to WIA. As in Approach 1, customers are required to participate in structured counseling activities, but the activities are less intensive under Approach 2 and are not specifically focused on the return to the training investment. Once customers have completed the required counseling, they are free to choose any training program from the state Eligible Training Provider (ETP) list—counselors cannot reject their choice. Although customers can choose any training program, they receive a fixed ITA award, which limits the ITA resources they can spend on training. Customers can use funds from other sources to supplement their ITA if they want to pursue a training program that costs more than the fixed ITA award.

  • Approach 3, Maximum Customer Choice, is the least structured of the approaches. As in Approach 2, all Approach 3 customers receive the same fixed ITA amount and have final authority to choose their own training provider from the ETP list. Unlike Approach 2 customers, however, Approach 3 customers are not required to participate in any counseling activities prior to pursuing the training of their choice.

The approaches are being tested through an experimental design that randomly assigns new customers to one of the ITA approaches. The advantages of randomly assigning customers are increased precision and accuracy in the impact estimates. Specifically, random assignment ensures that customers assigned to the three ITA approaches will have the same characteristics, on average. Differences in outcomes between the groups during the follow-up period can then be interpreted as resulting from differences in the ITA approaches, with a known degree of statistical precision. For example, the difference in average earnings between Approaches 1 and 2 represents the effect of Approach 1 on earnings relative to Approach 2.

Six grantees were purposively selected by DOL through a competitive process to participate in the evaluation. Although these grantees were purposively selected, they offer a mix of program settings in urban, suburban, and rural areas spread across the country. One of the grantees, the Workforce Board of Northern Cook County in Des Plaines, Illinois, was selected to be the pilot site; it began sample intake procedures, including random assignment, in 2001. The other five grantees (Consortium of Atlanta Regional Commission and Northeastern Georgia Regional Development Center; The Workplace Inc. in Bridgeport, CT; Charlotte-Mecklenburg Workforce Development Board, Inc. in North Carolina; First Coast Development, Inc. in Jacksonville, FL; and The PWIN/MWC Workforce Consortium in Phoenix, AZ) began sample intake procedures in 2002.

b. The Evaluation

The evaluation of the ITA experiment will examine the relative impacts of the three ITA approaches on four types of outcomes:

  1. Participation in training and related services, including receipt of training, receipt of counseling and other services, and receipt of support services (child care and transportation)

  2. Customer satisfaction, including satisfaction with training and satisfaction with other services

  3. Employment-related outcomes, including employment by quarter, earnings by quarter, and characteristics of jobs (wage rates and fringe benefits)

  4. Dependence on public assistance, including unemployment insurance, cash welfare benefits, and Food Stamps

The ITA experiment is using a classical random assignment design to estimate the relative impacts of the three ITA approaches. Individuals who are found eligible for training during the experiment’s intake period in the six demonstration sites are being randomly assigned to one of the three approaches. Since each approach involves the use of counselors to deliver services, and since counselors are likely to differ in their abilities, random assignment is being implemented in such a way that each counselor is assigned roughly equal numbers of individuals for each approach. This procedure avoids biasing the impact estimates, which could occur if different counselors had disproportionate shares of the individuals assigned to the three approaches. Hence, differences in mean outcomes between treatments provide unbiased estimates of the net impacts of the different approaches (see further discussion in section A.16).
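To make the counselor-balancing concrete, the sketch below assigns each counselor's customers to approaches in randomly shuffled blocks of three, so every counselor serves roughly equal numbers under each approach. This is a minimal illustrative sketch in Python, not the experiment's actual intake software; the function, counselor, and customer names are hypothetical.

```python
import random

def assign_by_counselor(customers_by_counselor, approaches=(1, 2, 3), seed=2001):
    """Assign each counselor's customers to approaches in randomly
    shuffled blocks of three, balancing assignments within counselor."""
    rng = random.Random(seed)
    assignments = {}
    for counselor, customers in customers_by_counselor.items():
        block = []
        for customer in customers:
            if not block:              # start a fresh randomized block of 1, 2, 3
                block = list(approaches)
                rng.shuffle(block)
            assignments[customer] = block.pop()
    return assignments

# Hypothetical intake: two counselors with their own customer streams
intake = {"counselor_A": ["c01", "c02", "c03", "c04"],
          "counselor_B": ["c05", "c06", "c07"]}
print(assign_by_counselor(intake))
```

Blocking within counselor, rather than assigning each customer independently, keeps any one counselor from accumulating a disproportionate share of one approach by chance.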

Based on the estimates of the relative impacts of the three ITA approaches, the evaluation will also include an analysis of the relative returns on investment (ROI) for each of the approaches. The objective of the ROI analysis is to assess whether relative to less expensive models, more expensive ITA models provide additional benefits that are large enough to justify the additional costs.

c. Data Needs and Data Sources

Data items needed to measure outcomes and to provide background information on sample members are listed in Table 2 together with the source of the data. The four data sources are:

  1. Program MIS data from the six sites will provide information on training participation and use of other services that are obtained through the WIA system. This source will also provide data on the main demographic and other baseline characteristics.

  2. Unemployment Insurance (UI) wage records will be collected to obtain a 27-month history of employment and earnings—12 months prior to random assignment and 15 months after random assignment. These data will be collected from the 6 states in which the 6 grantees are located and 1 adjacent state.1 Because ITAs are designed to promote training that will raise employment and earnings, UI wage records will provide the most critical information needed to assess the relative impacts of different ITA approaches. Furthermore, because higher output is the primary benefit of training, earnings data from UI wage records will provide the most critical information for the ROI analysis.

  3. Unemployment Insurance (UI) program benefits data will be collected to create a 15-month history of participation and benefits in the UI program. These data will be collected from the 6 states in which the 6 grantees are located. Since these data will cover only receipt of UI by the sample member, questions on the survey will also ask about UI receipt by other family members. UI program data will be helpful in assessing the number of weeks that ITA customers received UI benefits. Furthermore, these data will be critical to the ROI analysis of different ITA approaches: other things equal, ITA approaches that reduce the amount of UI benefits will be more cost-effective for DOL.

  4. The follow-up survey—which is scheduled for approximately 15 months after random assignment—will collect important information on a variety of outcomes for people who were randomly assigned to one of the three approaches in the ITA Experiment (see Appendix A for a copy of the survey). This survey will provide more detailed information on employment outcomes—such as wage rates and fringe benefits—than Unemployment Insurance (UI) wage records, and it will provide information on all jobs, not just those included in the wage record system.2 It will also provide detailed information on household composition and other demographic characteristics. The follow-up survey will be the only source for data on perceptions and attitudes toward each ITA approach, including the level of customer choice; job search behavior after random assignment; characteristics of post-training jobs; and participation in government programs other than UI. It will also provide data on training and other services received outside of the WIA system. The survey will be conducted by telephone using computer-assisted telephone interviewing (CATI) techniques.

TABLE 2

INDIVIDUAL-LEVEL DATA ITEMS AND SOURCES

Data Item                                                           Data Source

Baseline Characteristics

  Identifying and Contact Information
    Sample member (name, address, telephone number)                 MIS
    Additional contacts (name, address, telephone number)           MIS

  Demographics
    Age                                                             MIS
    Gender                                                          MIS
    Race/ethnicity                                                  MIS
    Marital status                                                  MIS
    Number of children                                              MIS
    Household size                                                  MIS

  Prior Experience
    Education (highest grade, highest degree)                       MIS
    Characteristics of last job (wage, benefits, hours,
      industry, occupation, duration)                               MIS, I
    Number of years worked                                          MIS
    Quarterly earnings prior to random assignment                   WR
    Reason for job loss                                             I

Employment and Training Services and Experiences

  Receipt of Reemployment Services
    Assessment and service planning                                 MIS, I
    Job search assistance and training                              MIS, I
    Job counseling                                                  MIS, I
    Timing of service delivery                                      MIS

  Receipt of Education and Training
    Basic-skills training                                           MIS, I
    Occupational classroom training                                 MIS, I
    On-the-job training (duration, service dates, costs,
      type/occupation, provider, whether completed)                 MIS, I

  Receipt of Support Services
    Child care                                                      MIS, I
    Transportation                                                  MIS, I
    Other                                                           MIS, I

  Satisfaction with Services and Training                           I

  Income
    Unemployment insurance                                          I or UI
    TANF/food stamps                                                I
    Spouse's earnings                                               I
    Other income sources                                            I

Program Outcomes

  Employment status, by quarter after baseline                      WR, I
  Quarterly earnings, by quarter after baseline                     WR, I
  Proportion of follow-up period employed                           WR, I
  Number of jobs held                                               WR, I
  Characteristics of postprogram job (wage, benefits, hours,
    industry, occupation)                                           I
  Job search activities                                             I

MIS = management information system; I = 15-month follow-up interview; WR = UI wage records; UI = unemployment insurance benefits records; TANF = Temporary Assistance for Needy Families.


d. Additional Follow-Up Survey

The second follow-up survey proposed in this supporting statement would be scheduled so that more complete information on training (such as training completion rates), employment, and earnings would be available for further analysis of the impacts of each treatment on participants. The instrument is the same survey used in the 15-month follow-up described above.

2. How, By Whom, and For What Purpose the Information Is to be Used

To determine the relative impacts of different ITA approaches on experiences with the workforce system and on labor market outcomes, Mathematica Policy Research (MPR) will use state administrative data and follow-up survey data to conduct a comparison of the three ITA approaches. This comparison will be based on the experiences and outcomes of ITA customers, such as receipt of one-stop services and satisfaction with those services, education and training, employment and earnings, and participation in government programs. These comparisons will yield estimates of the relative impacts of different ITA approaches on key outcomes.

To make comparisons between the three ITA approaches, MPR will use the administrative and survey data to compute summary statistics, such as means and percentages, separately for each ITA approach. For example, MPR will compute the percentage of ITA customers served by each approach who were satisfied with the training-related services they received. This percentage will be compared across approaches to determine whether the approaches differ in their ability to satisfy training customers.

DOL will use the findings from the experiment to advise local workforce boards on possible modifications to their ITA programs. Since the goal of the experiment is to determine the relative effectiveness of different approaches to providing ITAs, the data collected from states and through the survey are critical to that assessment.

3. Use of Improved Technology to Reduce Burden

Computer Assisted Telephone Interviewing (CATI) will be the primary method of data collection for this survey. CATI was selected because telephone interviews are more cost-effective and impose less burden on respondents than do in-person interviews. CATI is also more cost-effective than paper-and-pencil interviewing for many reasons, including the fact that CATI programs accept only valid responses and can be programmed to check for logical consistency across answers. Interviewers are thus able to correct errors during the interview, eliminating the need to call back respondents to obtain missing data. Also, calls will be made through an auto-dialer linked to the CATI system, virtually eliminating dialing error. The automated call scheduler will simplify scheduling and rescheduling of calls to respondents at their convenience and can assign cases to specific interviewers, for example, those who are fluent in Spanish.

Sample members who are difficult to find will be located through the efforts of field staff. Field staff will typically not conduct interviews. Instead they will facilitate the completion of interviews by having sample members call MPR’s telephone center using their own telephones or cell phones provided by MPR. These calls will be made to a toll free number with the field interviewer present, and responses will be entered directly into the CATI system.

For a small number of cases, interviews will be conducted in person using hard-copy instruments. Some respondents will not have access to telephones and may resist using MPR-provided cell phones to complete the interview. In other cases, phone connections may be problematic, making it more expedient to complete the survey in person using paper and pencil.

Field locating and interviewing will be conducted by DIR.

4. Efforts to Identify Duplication

The study will use administrative records data where possible, but since these data are not sufficient to conduct the study, survey data will be needed to supplement them. The survey will provide detailed information on household composition and other demographic characteristics; data on perceptions and attitudes toward each ITA approach, including the level of customer choice; job search behavior after random assignment; characteristics of post-training jobs; and participation in government programs other than UI. No other survey data collection effort has been conducted or is planned that could substitute for the follow-up survey.

Some particular outcome measures will be available from administrative records. Specifically, two kinds of administrative data will be used.

  1. UI Benefits data: UI agency administrative records on UI eligibility and benefit receipt will be collected from the six states in the study and used in the analysis. Questions on UI receipt included in the survey will refer to the entire household rather than just the sample members.

  2. Wage records: Quarterly wage records will also be collected from the six states to obtain summary information on employment and earnings by quarter.3 Since not all wage and salary jobs are included in the wage record system and since the information in those records is very limited, the survey also includes questions about employment and earnings. Additional detail on employment not available from wage records, such as industry, occupation, hours worked, hourly wage, and fringe benefits, will be collected on the survey.

5. Methods to Minimize Burden on Small Businesses or Entities

No small businesses or other small entities will be interviewed for this survey.

6. Consequences of Not Collecting the Data

Data will be collected from study participants. The survey will provide the only source of data for the longer observation period on the following outcomes for ITA customers at the six grantees:

  • Perceptions and attitudes toward each ITA approach

  • Job search behavior after random assignment

  • Characteristics of post-training jobs

  • Participation in government programs other than UI

Therefore, if the second follow-up survey were not conducted, the evaluation would be unable to assess the impacts of different ITA approaches on these outcomes for the longer observation period. For example, preliminary findings of the evaluation found the following:

There is some evidence that the approach affected the rate of completion of training programs, but the difference may primarily be due to the timing of the training. More customers in Approach 3 than in Approach 1 had completed a training program and more had received a certificate or degree from a training program by the time of the survey. However, there were no differences between any of the approaches in the percentage of customers who had completed or were still enrolled in a training program at the time of the survey. This may be related to the findings that customers in Approach 3 entered training earlier than did customers in Approaches 1 or 2 and that customers in Approach 1 were more likely to still be enrolled in training at the time of the survey. Because approximately one in seven customers was still in training at the time of the 15-month survey, and because that rate varied by approach, a longer follow-up period would be necessary to estimate the final impacts on program completion.

As more Approach 1 customers were still in training at the 15-month follow-up survey date than other customers, employment and earnings for Approach 1 customers may grow more after the follow-up period than for other customers. Approach 1 customers’ earnings and employment rates may increase more quickly once they complete training and begin jobs; an extended follow-up window would be necessary to examine this hypothesis.

Approach 3 customers’ earnings are relatively low in the first quarter but catch up to Approach 2 customers’ earnings by the end of the follow-up period. In quarter 1, Approach 2 customers’ earnings exceed Approach 3 customers’ earnings by $205, but by quarter 5 earnings are almost identical for customers in Approaches 2 and 3. While earnings for customers in the three approaches are statistically indistinguishable 15 months after random assignment, there may still be longer-term earnings differences that we do not observe because of the relatively short follow-up period. A longer follow-up survey would provide more complete information on earnings impacts.

7. Special Data Collection Circumstances

None of the special circumstances are applicable to this data collection. In all respects, the data will be collected in a manner consistent with federal guidelines. The statistical survey will produce valid and reliable results that can be generalized to the universe of study, and it will include only statistical data classifications that have been reviewed and approved by OMB. It will include a pledge of confidentiality that is supported by authority established in statute or regulation and by disclosure and data security policies that are consistent with the pledge. It will not unnecessarily impede sharing of data with other agencies for compatible confidential use.

8. Federal Register Notice

a. Federal Register Notice and Comments

In accordance with the Paperwork Reduction Act of 1995, the public was given an opportunity to review and comment through the 60-day Federal Register Notice published January 10, 2003 (FR Vol. 68, No. 7, pages 1482-1483).

A Notice on this proposed extension of an Information Collection was published in the Federal Register on July 25, 2006 (Volume 71, Number 142, pages 42128-42129). The Department received three comments. A summary of the comments received and the Department’s responses follows:

 

 

Comment: The commenter stated that trainees who have been engaged, assessed, and tracked by counselors are the most successful in completing programs. Providing carte blanche to an individual who knows nothing of demand occupations or employment trends in a specific region, and allowing that individual to choose a program without any type of assessment, would be a waste of taxpayers' dollars.

Agency Response: The ITA experiment examines three approaches to delivering ITAs, where treatment 2 was designed to reflect the approach generally delivered in the One-Stops. The experiment examines the differences in participant outcomes observed under all three treatments. However, participants were randomly assigned to treatments only after the local area determined that they were in need of training. As a result, we recognize that there is generally some counseling prior to selection and random assignment, depending on the site and its local practices.

Comment: The commenter thought that this was a new project and wanted state participation information.

Agency Response: The ITA experiment has recently concluded, and the purpose of the Federal Register Notice is to allow for additional data collection (i.e., a second follow-up survey) of participants who had already participated in the experiment. We are simply reserving the option to conduct the survey a second time, if necessary, for longer-term follow-up.

Comment: The commenter wanted to confirm that this was not a new project, but instead a request to extend the use of the survey.

Agency Response: This is a request for approval of a second follow-up survey of participants who have already participated in the experiment.





b. Consultations Outside of the Agency

The following individuals were consulted in developing the design, the data collection plan, and the questionnaire. The data collection plan was also presented at a DOL-sponsored conference in 2001.

Name                    Affiliation                    Telephone Number

Dr. Paul Decker         Mathematica Policy Research    (609) 275-2290
Dr. Sheena McConnell    Mathematica Policy Research    (202) 484-4518
Dr. Rob Olsen           Mathematica Policy Research    (202) 484-4223
Ms. Pat Nemeth          Mathematica Policy Research    (609) 275-2294
Dr. Dan Kasprzyck       Mathematica Policy Research    (202) 264-3482
Dr. John Eltinge        Bureau of Labor Statistics     (202) 691-7404
Dr. Ralph Smith         Congressional Budget Office    (202) 225-3149

c. Tests of Procedures or Methods

Nine pretests of the current survey instrument were conducted with participants in Northern Cook County, IL, the pilot site. The pretests assessed the content and wording of individual questions, the organization and format of the questionnaire, respondent burden time, and potential sources of response error. The pretest results were used to modify the questionnaire.

9. Respondent Payments

No payments or gifts will be made to survey respondents as part of this information collection.

10. Confidentiality

Mathematica Policy Research, Inc. and its subcontractors will follow procedures for assuring and maintaining confidentiality consistent with provisions of the Privacy Act. Respondents will receive information about confidentiality protection in an advance letter describing the survey (Appendix D) and again at the outset of the interview as part of the interviewer's introductory comments. Respondents will be informed that all information they provide will be treated confidentially. Both telephone interviewers and field staff will be trained in confidentiality procedures and will be prepared to describe these procedures in full detail, if needed, or to answer any related questions raised by respondents. For example, if asked about confidentiality, the interviewer will explain that the answers will be combined with those of others and presented in summary form only and that the answers will not affect past or future eligibility for any programs.

All data items that identify respondents will be kept only by the contractor, Mathematica Policy Research, Inc., for use in assembling records data and in conducting the interview. Any data received by the U.S. Department of Labor, Employment and Training Administration will not contain personal identifiers, thus precluding individual identification.

In addition, the following safeguards are routinely employed by MPR to carry out confidentiality assurances:

  • All employees at MPR sign a confidentiality pledge that emphasizes the importance of confidentiality and sets forth the obligations of staff.

  • Access to sample selection data with personal identifying information is limited to those who have direct responsibility for providing the sample. These data are destroyed at the conclusion of the research.

  • Identifying information is maintained in a separate file from interview data. The files are linked only with a sample identification number.

  • Access to link-files containing sample identification numbers connecting the research data and the respondents' identification is limited to a few individuals who have a need to know this information.

  • Access to any hard-copy documents is strictly limited. Physical precautions include use of locked files and cabinets, shredders for discarded materials, and interview control procedures.

To ensure that there is no secondary data disclosure that inadvertently identifies a sample member, tabulations of study findings will be presented by ITA approach for the full sample in the six sites, for the full sample by site, and for subgroups drawn from all sites. Since we do not plan to report subgroups by site, the minimum number of sample members in tabulations at the site level will be about 213 individuals (the number in the survey sample expected to be assigned to one approach in the average site). This number is large enough to avoid secondary data disclosure.

11. Questions of a Sensitive Nature

The survey of ITA Experiment participants contains a minimal set of items that may be considered sensitive in nature. These questions are related to the receipt of individual and household income (F1-F4 in the questionnaire) and public assistance receipt (F5-F13 in the questionnaire). As described in item A10, all respondents will be assured of confidentiality at the outset of the interview. All survey responses will be held in strict confidence and reported in aggregate, summary format, eliminating the possibility of individual identification. MPR will comply with the requirements of the Privacy Act of 1974 in collecting all information. All questions in the current survey, including those deemed potentially sensitive, have been pretested and used extensively in prior surveys with no evidence of harm. Questions about income and public assistance receipt are necessary to measure the economic well-being of study participants and the social rate of return to different ITA approaches.

12. Hour Burden of the Collection of Information

The total hour burden for the information collected in the follow-up survey is 1,920 hours, as shown in the table below. This hour burden estimate is based on actual pretests of the survey, which averaged 30 minutes to complete.


Cite/Reference

Total

Respondents


Frequency

Average Time Per Response


Burden

ITA Follow-up survey

3,840

One time

30 min.

1,920 hours



The total burden cost of collecting this information is $30,720. This cost represents 30 minutes to complete the survey multiplied by the number of completers (3,840, or 80 percent of the 4,800 targeted for the survey) and by an estimated average hourly wage of $16 per hour.4

13. Estimated Total Annual Cost Burden to Respondents and Record Keepers

There will be no start-up or ongoing financial costs incurred by respondents.

14. Estimated Annualized Cost to the Federal Government

The cost to the Federal government of conducting the survey is $1,093,212.

15. Changes in Burden

This is a new, one-time data collection effort counting as 1,920 hours toward ETA’s Information Collection Budget (ICB); it does not represent a change in respondent burden.

16. Tabulations

The survey data together with the MIS, wage record, and UI benefits administrative data will be used to examine impacts on:

  1. Participation in training and related services, including receipt of training, receipt of counseling and other services, and receipt of support services (child care and transportation)

  2. Customer satisfaction, including satisfaction with training and satisfaction with other services

  3. Employment-related outcomes, including employment by quarter, earnings by quarter, and characteristics of jobs (wage rates and fringe benefits)

  4. Dependence on public assistance, including unemployment insurance, cash welfare benefits, and Food Stamps

Estimating Overall Impacts: Random assignment will allow us to assess the relative effectiveness of the ITA approaches by comparing average outcomes across the approaches. Because all of the approaches offer ITAs to customers, there is no control group that is denied services; the comparison of outcomes for two approaches will therefore assess the difference in the impacts of one ITA approach versus another, not the effects of an ITA versus no ITA. Note also that the impact analysis will assess the impacts of the various ITA “offers,” not the impacts of the training received, as some persons in each approach may choose not to enroll in training. The impacts of the offer may also reflect the impacts of related assistance, such as counseling, as well as the direct impacts of training.

Random assignment ensures that differences in average outcomes between the different ITA approaches are unbiased estimates of the net impacts of the different approaches. For example, we can estimate the impact of Approach 1 on earnings relative to the impact of Approach 2 by comparing the average earnings of customers assigned to these two approaches. Differences-of-means tests can be used to determine whether the impacts are statistically significant. However, we also plan to use regression methods to estimate ITA approach impacts, since such methods have two advantages:

  • Regression methods control for differences across customers assigned to each ITA approach and thus improve the precision of the impact estimates.

  • Regression methods provide a convenient approach to estimating impacts for subgroups, as further discussed below.

Regression models can be estimated using ordinary least squares or other standard statistical techniques.5

To illustrate our approach, we can define a model that allows us to estimate the net impact of ITA Approach 1, which offers the most prescriptive ITA approach, relative to ITA Approach 2, which broadly represents the status quo. We can use the same model to estimate the net impact of Approach 3, which offers the least prescriptive ITA approach, relative to Approach 2. The sample used will be all customers assigned to an ITA approach, and the estimates will be generated from a model that includes indicators for Approaches 1 and 3 and treats Approach 2 as the omitted category:

(1) Y = a1X + b1A1 + b3A3 + e1,

where:

Y is the outcome for each individual during the period of interest.

X is a set of variables used to control for background characteristics (such as age, gender, race/ethnicity) or pre-random-assignment outcomes (employment history, income) that are believed to be important determinants of Y.


A1 is an indicator that equals one for customers assigned to Approach 1 and zero for customers assigned to Approach 2 or 3.


A3 is an indicator that equals one for customers assigned to Approach 3 and zero for customers assigned to Approaches 1 or 2.


e1 is an individual-specific error term, which is assumed to be independent of X, A1, and A3.


The parameters to be estimated are a1, b1, and b3. In particular, the estimated coefficient b1 is interpreted as the average effect of Approach 1 (the most prescriptive model) relative to Approach 2 (the basic model), and b3 is the average effect of Approach 3 relative to Approach 2. We can also compare the relative effects of Approaches 1 and 3 by comparing coefficients b1 and b3.

The same approaches are offered in each site. To increase the precision of the estimates, the impact estimates will be based on samples that are pooled across all of the sites.
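As an illustration of how model (1) could be estimated, the sketch below simulates hypothetical data and fits the model by ordinary least squares with Python's statsmodels package. All effect sizes, sample sizes, and variable names are invented for the example; the evaluation's actual specification may include a richer set of controls.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 900                                  # hypothetical pooled sample
approach = np.repeat([1, 2, 3], n // 3)  # 300 customers per approach
A1 = (approach == 1).astype(float)       # Approach 2 is the omitted category
A3 = (approach == 3).astype(float)

# Hypothetical baseline controls X and simulated quarterly earnings Y
prior_earnings = rng.normal(5000, 3000, n)
age = rng.uniform(20, 60, n)
Y = (4000 + 0.3 * prior_earnings + 10 * age
     + 150 * A1 - 50 * A3 + rng.normal(0, 3000, n))

X = sm.add_constant(np.column_stack([prior_earnings, age, A1, A3]))
fit = sm.OLS(Y, X).fit()
print(f"b1 (Approach 1 vs. 2): {fit.params[3]:7.1f}, p = {fit.pvalues[3]:.3f}")
print(f"b3 (Approach 3 vs. 2): {fit.params[4]:7.1f}, p = {fit.pvalues[4]:.3f}")
# The Approach 1 vs. 3 contrast is b1 - b3; a Wald test of b1 = b3
# can be run with fit.t_test("x3 = x4") using statsmodels' default names.
```

Including the baseline controls does not change what b1 and b3 estimate under random assignment; it only reduces the residual variance and tightens the standard errors, which is the precision gain described above.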

Estimates for Subgroups: The most effective ITA approach may differ in different settings (such as urban versus rural areas) or among different population groups (such as low-income adults versus dislocated workers). It would be useful to learn for which groups the various approaches are most effective. However, because samples are smaller, estimates for sample subgroups are generally less precise than are estimates for the full sample. Because the experiment will include three approaches, it will be especially difficult to detect differences in impacts for sample subgroups if these differences are not large. However, we will examine whether differences are detected for some large subgroups, particularly those that involve about half of the sample. Subgroups of potential interest include:

  • Low-income adults versus dislocated workers

  • Men versus women

  • Customers in rural versus urban sites

  • Younger workers versus older workers

To estimate impacts for sample subgroups, we would augment the estimating equation by introducing an interaction term that is the product of the approach indicator and an indicator of membership in the subgroup of interest. For example, to estimate the impact of Approach 1 versus Approach 2 for men versus women, we would modify the basic model in equation (1) as follows:

(2) Y = a1X + b1A1 + c1(A1 × MALE) + e,

where MALE is a binary variable indicating whether the customer is male. The effect of Approach 1 (versus Approach 2) for men would be b1 + c1, while the effect for women would be b1. We can test whether each of these impacts is significantly different from zero. We can also use these estimates to test whether the impact for men is significantly different from the impact for women, by testing the statistical significance of the coefficient c1.
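A minimal sketch of the subgroup specification in equation (2), again with invented data and restricted to the Approach 1 versus Approach 2 comparison for simplicity:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600                             # hypothetical: Approaches 1 and 2 only
A1 = np.repeat([1.0, 0.0], n // 2)  # 1 = Approach 1, 0 = Approach 2
male = rng.integers(0, 2, n).astype(float)

# Simulated outcome with a larger Approach 1 effect for men
Y = 4000 + 100 * A1 + 80 * A1 * male + rng.normal(0, 3000, n)

X = sm.add_constant(np.column_stack([A1, A1 * male]))
fit = sm.OLS(Y, X).fit()
b1, c1 = fit.params[1], fit.params[2]
print(f"Effect for women (b1):    {b1:.0f}")
print(f"Effect for men (b1 + c1): {b1 + c1:.0f}")
print(f"Men vs. women difference: p = {fit.pvalues[2]:.3f}")
```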



17. Reasons for Not Displaying Expiration Date of OMB Approval

The expiration date will be displayed on the advance letter and on the hard copy version of the questionnaire.

18. Exceptions to the Certification Statement

There are no exceptions taken to item 19 of OMB Form 83-1.

B. COLLECTION OF INFORMATION INVOLVING STATISTICAL METHODS

1. Respondent Universe and Sampling

The respondent universe is individuals in the six sites eligible for ITAs. For each of the six grantees, the sample will include all people who were determined eligible for an ITA during the study intake period. These individuals are being randomly assigned to one of the three ITA approaches. The size of the sample will depend on the flow of customers deemed eligible for training. We are expecting that approximately 7,000 to 9,000 people will be randomly assigned during a study intake period of up to 18 months. The table below provides estimates of the minimum number of ITA customers who will be enrolled in the study at each grantee and overall:

Grantee                                                              Expected Sample Size

Consortium of Atlanta Regional Commission and Northeastern
  Georgia Regional Development Center, GA                                   1,800
The Workplace, Inc. in Bridgeport, CT                                         600
Charlotte-Mecklenburg Workforce Development Board, Inc., NC                 1,500
First Coast Development, Inc. in Jacksonville, FL                             975
The Workforce Board of Northern Cook County in Des Plaines, IL              1,625
The PWIN/MWC Workforce Consortium in Phoenix, AZ                            1,500

Total                                                                       8,000


We plan to request state administrative data for all study participants and to include approximately 4,800 study participants in the survey sample. If, as expected, the number of study participants exceeds 4,800, we will select a random sample of 4,800 people for the survey sample.6 If sampling is required, we will oversample study participants from grantees that enrolled relatively few study participants, to maximize the statistical power to detect grantee-level impacts for the grantee where statistical power is lowest. We will also stratify the sample by month of enrollment and draw the survey sample so that it is proportional to enrollment by month within each site, as sketched below. That will ensure that the survey sample is distributed over time within each site in the same way as the full sample.
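The sketch below illustrates the proportional-to-enrollment stratified draw just described. It is a simplified example with hypothetical strata and IDs; the planned oversampling of small grantees would adjust the per-site allocations before this proportional step.

```python
import random

def draw_survey_sample(participants_by_stratum, target=4800, seed=2002):
    """Draw a survey sample stratified by (site, enrollment month),
    allocating slots in proportion to each stratum's enrollment."""
    rng = random.Random(seed)
    total = sum(len(ids) for ids in participants_by_stratum.values())
    sample = []
    for ids in participants_by_stratum.values():
        k = round(target * len(ids) / total)   # proportional allocation
        sample.extend(rng.sample(ids, min(k, len(ids))))
    return sample

# Hypothetical strata: (site, month) -> participant IDs
strata = {("IL", "2001-12"): [f"p{i}" for i in range(120)],
          ("GA", "2002-01"): [f"q{i}" for i in range(200)]}
print(len(draw_survey_sample(strata, target=160)))  # -> 160
```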

Based on experiences with similar surveys, we expect that MPR will obtain approximately an 80 percent response rate in the survey.

2. Statistical Methodology, Estimation, and Degree of Accuracy

The primary objective of the ITA Experiment is to provide statistically valid and reliable estimates of the relative effects of the three ITA approaches on key outcomes: training participation, use of counseling, and earnings. Use of a classical experimental design, in which applicants are assigned randomly to the three approaches, will ensure that measured impacts represent valid estimates of the relative effects of the approaches. The measured impacts will be internally valid for the six sites. Since the six sites were chosen purposively, the results cannot be generalized to a wider population with a known degree of statistical precision. Similarly, the results cannot be generalized to a wider set of counselors, since all counselors in the sites were included in the study.

Impacts will be estimated by computing differences in mean outcomes between pairs of ITA approaches, adjusted for random differences in client characteristics at intake using multivariate regression. The regression adjustments will increase the precision of the impact estimates. More detail on estimation procedures is included in our discussion of tabulation plans under item A.16.

Given this design, the main question is whether the impact estimates will be precise enough to detect likely impacts. To answer that question, Table 3 shows minimum detectable impacts for comparisons between two ITA approaches for quarterly earnings and for dichotomous outcomes like participation in training or counseling. The calculations are shown for a pooled analysis of the entire sample using the survey sample of 4,800 (based on an 80 percent response rate) and using the range of likely administrative data samples of 7,000 to 9,000. We also show minimum detectable impacts for the average site and for subgroups over all sites that contain half the sample and one-third of the sample.

Experience in the ITA experiment to date indicates that sample sizes for the survey sample and the administrative sample are large enough to detect differential impacts among the three ITA approaches for key dichotomous variables. To date, MIS data show that differences in the use of counseling and approval of training between the voucher approach (Approach 3) and the other two approaches easily exceed the minimum detectable differences for the full sample and for major subgroups for both the survey and administrative samples, suggesting that likely differences in other important dichotomous outcomes, including participation in training and completion of training, will also be detected.


TABLE 3

MINIMUM DETECTABLE DIFFERENCES BETWEEN ITA MODELS

                                              Minimum Detectable Impacts
                                          ------------------------------------
                                          Dichotomous     Quarterly Earnings
Sample                 Available Sample   Outcomes        (dollars)

Survey Sample (4,800)
  Full sample          1,280/1,280        .055            332
  Half sample          640/640            .078            469
  One-third sample     427/427            .096            574
  Average site sample  213/213            .135            813

Administrative Records (7,000)
  Full sample          2,333/2,333        .037            246
  Half sample          1,167/1,167        .052            347
  One-third sample     778/778            .063            425
  Average site sample  389/389            .090            601

Administrative Records (9,000)
  Full sample          3,000/3,000        .032            217
  Half sample          1,500/1,500        .046            306
  One-third sample     1,000/1,000        .056            375
  Average site sample  500/500            .079            530

Note: The calculations assume (1) a 95 percent confidence level with an 80 percent level of power; (2) a two-tail test; (3) a reduction in the variance of 20 percent owing to the use of regression models; (4) an 80 percent response rate for the interview; and (5) a standard deviation of .5 for dichotomous variables and $3,000 for quarterly earnings, which are consistent with findings from previous studies of similar populations. The minimum detectable differences (MDD) are calculated using the following formula:

MDD = 2.8 × σ × √(2(1 − R²)/(r × n)),

where 2.8 is the factor for a two-tail test at these confidence and power levels, σ is the standard deviation of the variable, R² is the variance explained by the regression model, r is the response rate, and n is the size of each ITA model group.
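For reference, a small Python function implementing the formula in the note is sketched below. Because the response-rate and regression adjustments can enter somewhat differently row by row (for example, the administrative data are not subject to interview nonresponse), values computed this way approximate, but need not exactly reproduce, every entry in Table 3.

```python
from math import sqrt

def mdd(sigma, n, r=0.80, r_squared=0.20, factor=2.8):
    """Minimum detectable difference between two ITA model groups of size n
    each; factor 2.8 reflects a two-tailed 5 percent test with 80 percent
    power, per the assumptions in the table note."""
    return factor * sigma * sqrt(2.0 * (1.0 - r_squared) / (r * n))

# Example: quarterly earnings, 1,600 customers assigned per approach
print(f"${mdd(sigma=3000, n=1600):.0f} per quarter")
# Example: a dichotomous outcome such as training participation
print(f"{mdd(sigma=0.5, n=1600):.3f}")
```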



These differences in use of training suggest that there are likely to be differences in earnings among the ITA approaches, but is the sample size sufficient to detect likely differences? This question is hard to answer because no similar experiments provide insights into the likely earnings differences among ITA approaches, but evidence from the Job Search Assistance (JSA) demonstration suggests that likely differences may be detected. That demonstration, which worked with an Unemployment Insurance population similar to the predominately dislocated worker population being served in the ITA Experiment, estimated earnings differences during the first year after random assignment of about $185 a quarter (in 2003 dollars) (Decker et al. 2000). As shown in Table 3, quarterly earnings differences of $217 to $246 can be detected with the administrative data sample, depending on the eventual sample size. While these numbers are larger than those found for the JSA demonstration, the larger number of trainees and the differences in training rates in the ITA Experiment as compared to the JSA demonstration suggest that the differences in earnings may be larger in the ITA Experiment than in the JSA demonstration.

As shown in the table, detectable differences for the subgroups are larger due to smaller sample sizes. For subgroups that make up half of the sample, the detectable differences in quarterly earnings are $306 to $347. These differences should provide adequate power to identify subgroups for whom a given approach is particularly effective.

3. Methods to Maximize Response Rates and Data Reliability

a. Response Rates

Several strategies will be used to achieve a high response rate to the follow-up survey. First, before interviewing begins, an advance letter describing the purpose and sponsorship of the survey will be mailed to potential respondents (see Appendix D). This advance letter will assure potential respondents that the caller is conducting a legitimate research interview and not soliciting donations or selling anything. Letters will be sent approximately one week before the sample is released to the CATI call scheduler. The letter will request up-to-date contact information and provide a toll-free call-in number.

Second, staff from MPR’s experienced pool of interviewers will be recruited and extensively trained. These interviewers will be thoroughly trained on data collection procedures, including methods for promoting cooperation among sample members. Interviewers especially skilled at encouraging cooperation will be available to persuade reluctant respondents to participate and will be assigned to attempt conversions with respondents who initially refuse (except for hostile refusals). Bilingual interviewers will also be available for conducting interviews in Spanish.

Third, call scheduling will allow respondents to select the time most convenient for them to be interviewed. We plan to conduct this survey using CATI, which ensures control of sample releases, call scheduling, and questionnaire logic and completeness.

Fourth, locating activities will be conducted to find sample members who are not found at the address available from program intake records. Additional addresses and telephone numbers were obtained at intake for three individuals whom the sample member identified as likely to know his or her location (for example, relatives). These individuals will be contacted to try to locate the sample member. That approach has proved valuable in past studies, but if it is unsuccessful, extensive use will be made of various on-line databases to try to locate sample members who have moved.

Finally, field staff will be used to locate sample members without a known address or telephone number. We expect these techniques to yield an 80 percent response rate. We expect that 65 percent will be achieved with the telephone survey effort and the remaining 15 percent will result from the field effort conducted by DIR.

When the survey is completed, we will conduct an analysis of nonresponse to assess whether the survey sample is representative of the initial population of ITA applicants. In particular, we will examine whether any differences in response rates among individuals assigned to each ITA approach may affect the findings. This analysis will use background data collected in the MIS, including demographic data. Quarterly wage record data on post-random-assignment earnings, which are not subject to survey nonresponse, will be used to examine differences in earnings. Sample weights will be assigned to adjust for differences between responders and nonresponders in important background characteristics.
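A minimal sketch of the kind of cell-based weighting adjustment described above follows; the cell definitions and IDs are hypothetical, and the actual adjustment would define cells from MIS background characteristics.

```python
from collections import Counter

def nonresponse_weights(sample_cells, respondents):
    """Inverse-response-rate weights within weighting cells (e.g., ITA
    approach x site). `sample_cells` maps each sampled ID to its cell;
    `respondents` is the set of IDs that completed the interview."""
    assigned = Counter(sample_cells.values())
    completed = Counter(cell for sid, cell in sample_cells.items()
                        if sid in respondents)
    return {sid: assigned[cell] / completed[cell]
            for sid, cell in sample_cells.items() if sid in respondents}

# Hypothetical example: two cells with different response rates
cells = {"s1": "A1-IL", "s2": "A1-IL", "s3": "A2-IL", "s4": "A2-IL"}
print(nonresponse_weights(cells, respondents={"s1", "s3", "s4"}))
# -> {'s1': 2.0, 's3': 1.0, 's4': 1.0}
```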

b. Reliability of Data Collection

The draft questionnaire drew extensively on questionnaires developed for other U.S. Department of Labor studies, including the Trade Adjustment Assistance Survey (OMB number 1205-0306), the Job Search Assistance Demonstration Survey (OMB number 1205-0367), and the National Job Corps Study Thirty-Month Follow-Up Interview (OMB number 1205-0360). The questions were designed to ensure that they would be easily understood by respondents. Revisions were made to the draft questionnaire based on an internal review, a review by DOL, and a pretest.

The use of CATI to conduct the survey also helps ensure the reliability of the data. It controls question branching (reducing item nonresponse due to interviewer error), modifies wording (providing memory aids and probes and personalizing questions), and constructs complex sequences that are not possible to produce, or are less accurate, in hard-copy surveys. The probes, verifications, and consistency checks built into the system standardize procedures. These features help ensure the reliability of the data collection methods and of the data collected through those methods.

Lastly, MPR will monitor 10 percent of each interviewer's work using silent call-monitoring equipment and video monitors that display the interviewer's screen.

4. Tests of Procedures or Methods

Nine pretests of the current survey instrument were conducted with participants in Northern Cook County, IL, the pilot site. The pretests assessed the content and wording of individual questions, the organization and format of the questionnaire, respondent burden time, and potential sources of response error. The pretest results were used to modify the questionnaire.

5. Individuals Consulted on Statistical Methods

The following persons outside of the Employment and Training Administration contributed to, reviewed, and/or approved the design, instrumentation and sampling plan:

Name                                  Affiliation                    Telephone Number

Dr. Paul Decker (Project Director)    Mathematica Policy Research    (609) 275-2290
Dr. Sheena McConnell                  Mathematica Policy Research    (202) 484-4518
Dr. Rob Olsen                         Mathematica Policy Research    (202) 484-4223
Dr. Peter Schochet                    Mathematica Policy Research    (609) 936-2783
Dr. Dan Kasprzyck                     Mathematica Policy Research    (202) 264-3482
Dr. John Eltinge                      Bureau of Labor Statistics     (202) 691-7404

REFERENCES

Decker, Paul T., Robert B. Olsen, Lance Freeman, and Daniel H. Klepinger. Assisting Unemployment Insurance Claimants: The Long-Term Impacts of the Job Search Assistance Demonstration. Office of Workforce Security Occasional Paper 2000-02. Washington, DC: U.S. Department of Labor, Employment and Training Administration, 2000.

Needels, Karen, Walter Corson, and Walter Nicholson. Left Out of the Boom Economy: UI Claimants in the Late 1990s. ETA Occasional Paper 2002-03. Washington, DC: U.S. Department of Labor, Employment and Training Administration, 2001.






APPENDIX A


SURVEY INSTRUMENT





APPENDIX B


RESEARCH SECTION OF WORKFORCE INVESTMENT ACT







APPENDIX C


FEDERAL REGISTER NOTICE







APPENDIX D


ADVANCE LETTER

1The site in northern Florida (First Coast Development, Inc. in Jacksonville) is near enough to Georgia that some individuals may work in Georgia. For that reason we plan to obtain wage records from Georgia as well as Florida for this site. The other sites are sufficiently far from state borders that it is unlikely that individuals will have worked in neighboring states.

2Wage records are not reported on a routine basis for Federal jobs and are unavailable for self-employment and wage and salary jobs not covered by state UI programs.

3As noted above, wage record data will be collected from both Florida and Georgia for the Florida site, but since there is also a site in Georgia, the total number of states supplying wage records will be six.

4The average wage for UI recipients reported in a recent study of this population (Needels et al. 2001) is $16 per hour.

5For example, when the outcome is a discrete variable (such as whether or not an individual is employed at a given point in time), we will use probit or logit techniques, which are more appropriate for discrete outcomes.

6A two-stage sampling plan in which the survey sample is selected based on data from the state administrative records is infeasible. The state administrative records will not be available in time to revise the survey sample.

