Evaluation of Strategies Used in the TechHire and Strengthening Working Families Initiative Grant Programs
OMB Control No. 1290-0NEW
OMB SUPPORTING STATEMENT PRA PART A
The U.S. Department of Labor’s Chief Evaluation Office (CEO) is undertaking the Evaluation of Strategies Used in the TechHire and Strengthening Working Families Initiative (SWFI) Grant Programs. The evaluation includes both implementation and impact components. The purpose of the evaluation is to identify whether the grants help low-wage workers obtain employment in and advance in H-1B industries and occupations and, if so, which strategies are most helpful.
This supporting statement is the first in a series of OMB submissions that correspond to an array of data collection activities for the evaluation. CEO is seeking clearance in this submission for: the baseline information form (BIF), the 6-month follow-up survey, the first round of site visits, which includes interviews with grantee staff and partners, and a participant tracking form.
Subsequent OMB submissions will seek clearance for additional evaluation data collection activities, including: a survey of grantees, semi-structured telephone interviews with grantees, a partner information template, a survey of partners, semi-structured telephone interviews with partners, an 18-month participant follow-up survey, and a second round of site visit interviews and focus groups.
A.1 Circumstances Necessitating the Information Collection
A user fee paid by employers to bring foreign workers into the United States under the H-1B nonimmigrant visa program provides funding for the TechHire and SWFI grants, as authorized by Section 414(c) of the American Competitiveness and Workforce Improvement Act of 1998 (ACWIA), as amended (codified at 29 USC 3224a). In September 2016, the Employment and Training Administration (ETA) competitively awarded 39 TechHire grants and 14 SWFI grants. These programs attempt to help U.S. residents access middle- and high-skill, high-growth jobs in H-1B industries. Broadly, the goals of TechHire and SWFI are to identify innovative training strategies and best practices for populations that face barriers to participating in skills training. Both programs emphasize demand-driven training strategies, including employer involvement in training, use of labor market data, work-based learning, and sectoral partnerships, among other priorities. A key goal of both programs is to bring the training system into better alignment with employer needs. This evaluation seeks to build knowledge about the implementation and effectiveness of the approaches used under these grant programs.
CEO undertakes a learning agenda process each year to identify Departmental priorities for program evaluations. This evaluation was prioritized as part of that process in FY 2016. Division H, Title I, Section 107 of Public Law 114-113, the “Consolidated Appropriations Act, 2016,” authorizes the Secretary of Labor to reserve not more than 0.75 percent from special budget accounts for transfer to and use by the Department’s Chief Evaluation Office (CEO) for departmental program evaluation. Further, 29 USC 3224a(1) authorizes the Secretary of Labor to conduct ongoing evaluation of programs and activities to improve their management and effectiveness.
Overview of Evaluation
The evaluation research questions can be topically summarized as follows:
Grantee Program Descriptions:
What are the types and combinations of programs, approaches or services provided under the TechHire and SWFI grant programs?
What are the characteristics of the target populations served?
How are services for the target population implemented?
What are the issues and challenges associated with implementing and operating the programs, approaches, and/or services studied?
Implementation Procedures and Issues:
How were the programs implemented?
What factors influenced implementation?
What challenges did programs face in implementation and how were those challenges overcome?
What implementation practices appear promising for replication?
Partnerships and Systems:
How were systems and partnerships built and maintained?
What factors influenced the development and maintenance of the systems and partnerships over the lifecycle of the grant?
What challenges did programs face in partnership and systems building and how were those challenges overcome?
How did partnership development and maintenance strategies evolve over the lifecycle of the grant?
Outputs and Outcomes and Effective Strategies for Overcoming Barriers:
How and to what extent did the customized supportive services and education/training tracks expand participant access to targeted employment, improve program completion rates, connect participants to employment opportunities, and promote innovative and sustainable program designs?
What strategies and approaches were implemented and/or appear promising for addressing systematic barriers individuals may face in accessing or completing training and education programs and gaining employment in H-1B industries?
Removal of Barriers and Coordination at the Systems Level:
How and to what extent did the programs both remove childcare barriers and address the individual job training needs of participants?
What were the changes in the coordination of program-level supports (training and support services) as well as the leveraging, connecting, and integrating at the systems level?
What was the reach of this program among parents who receive other federal program supports, and how did the program interact with those supports?
To address each of the five research areas, the evaluation includes both implementation and impact components. The implementation study will focus on all 53 TechHire and SWFI grantees and serve several purposes: providing a thorough description of all of the TechHire and SWFI programs; documenting implementation barriers and facilitators; describing partnerships and systems change; and providing descriptive data on program outputs and outcomes. The impact study will include both a randomized controlled trial (RCT) study and a quasi-experimental design (QED) study. The RCT study will include 6 grantees, whereas the QED study will include all 53 grantees. In the RCT study, individuals will be randomly assigned either to a treatment group, whose members may enroll in TechHire or SWFI and receive training and related services, or to a control group, whose members will have access to other services available in the community. Random assignment, if properly implemented, reduces the likelihood that the treatment and control groups differ at baseline, thereby strengthening the conclusion that differences in employment and earnings outcomes can be attributed to the program rather than to preexisting differences between the groups. The target sample size is roughly 300 in each treatment group and 300 in each control group in each of the six sites, for a total of 3,600 sample members overall. See Table A.1.1.
Table A.1.1 Sample sizes for the RCT Study
Site            | Treatment group members | Control group members | Total sample
TechHire site 1 | 300                     | 300                   | 600
TechHire site 2 | 300                     | 300                   | 600
TechHire site 3 | 300                     | 300                   | 600
TechHire site 4 | 300                     | 300                   | 600
SWFI site 1     | 300                     | 300                   | 600
SWFI site 2     | 300                     | 300                   | 600
Total           | 1,800                   | 1,800                 | 3,600
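As context for these targets, a standard minimum detectable effect (MDE) calculation shows what differences samples of this size can reliably detect. The sketch below is illustrative only and does not reproduce the study's own power analysis; the two-sided 5 percent significance level, 80 percent power, and 50 percent outcome base rate are assumptions chosen for the example.

```python
# Illustrative minimum detectable effect (MDE) calculation for a two-arm RCT.
# Assumptions (not from the study's own power analysis): two-sided alpha = 0.05,
# power = 0.80, and a binary outcome (e.g., employment) with a ~50% base rate,
# which maximizes the variance p*(1-p).
from scipy.stats import norm

def mde_two_arm(n_treat, n_control, sd, alpha=0.05, power=0.80):
    """Minimum detectable difference in means for a two-arm design."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_power = norm.ppf(power)           # z-value for the desired power
    se = sd * (1 / n_treat + 1 / n_control) ** 0.5  # SE of the difference
    return (z_alpha + z_power) * se

sd = (0.5 * 0.5) ** 0.5  # standard deviation of a 50% binary outcome = 0.5

# Per-site MDE with 300 treatment and 300 control members
print(f"Per-site MDE: {mde_two_arm(300, 300, sd):.3f}")    # ~0.114 (11.4 pp)

# Pooled MDE with 1,800 members per arm across the six sites
print(f"Pooled MDE:   {mde_two_arm(1800, 1800, sd):.3f}")  # ~0.047 (4.7 pp)
```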
The purpose of the QED is to understand how program impacts vary with implementation and participant characteristics. The QED will use the control groups from the 6 RCT sites as the comparison pool. The QED bridges the RCT and implementation studies and will illuminate which strategies used in TechHire and SWFI are most promising. For example, the QED will examine whether sites that include work-based learning have larger-than-average nonexperimental impacts. It will also analyze whether sites with a specific target population tend to have stronger or weaker than average nonexperimental impacts. Implementation data will also be used to shed light on variation in the nonexperimental impact estimates.
Overview of Data Collection
To address the research questions listed above, the evaluation will include the following data collection activities:
Baseline Information Form (BIF) (clearance requested in this package)
6-month follow-up survey (clearance requested in this package)
Round 1 site visit interviews with grantee staff (clearance requested in this package)
Round 1 site visit interviews with grantee partners (clearance requested in this package)
Participant tracking form (clearance requested in this package)
Grantee Survey (clearance will be requested in a future package)
Semi-structured telephone interviews with grantees (clearance will be requested in a future package)
Partner information template (clearance will be requested in a future package)
Partner Survey (clearance will be requested in a future package)
Semi-structured telephone interviews with partners (clearance will be requested in a future package)
Round 2 site visit interviews (clearance will be requested in a future package)
18-month follow-up survey (clearance will be requested in a future package)
With the submission of this justification, CEO requests clearance for the first five data collection components listed above (i.e., the BIF, the 6-month follow-up survey, the Round 1 site visit interviews with grantee staff and partners, and the participant tracking form). The Department of Labor (DOL) anticipates submitting future OMB packages to request permission to conduct the remaining components. The evaluation team is submitting the full package for the study in parts because the study schedule requires random assignment to take place and the implementation study to begin before the other instruments are developed and tested.
A.2 Purpose and Use of the Information
This section discusses how information obtained through the BIF, 6-month follow-up survey, site visit interviews, and participant tracking form will be used to assess how TechHire and SWFI are implemented, to assess the impact of TechHire and SWFI on outcomes of interest, and to assess variation in impacts attributable to program and participant characteristics. The data collected will be used in the implementation and impact components of the study.
A.2.1 Baseline Information Form (BIF)
Grantee staff will administer a baseline information form as part of the process to randomly assign eligible individuals to the treatment and control study groups in the six RCT grantees. Staff members will provide participants a study informed consent form prior to participants’ completion of the BIF. The information from the BIF will be entered into an online random assignment application. The brief form will provide important data for the RCT and the associated implementation study. It will collect information on demographic characteristics, household composition, educational attainment and training activities, and employment history. The BIF will also collect detailed contact information, including the names and contact information of three people who are likely to know the participants’ whereabouts at all times. The BIF data will be used for the following purposes:
Conduct random assignment. Identifying information collected in the BIF will be used to verify that the randomly assigned groups within each of the six sites have virtually identical baseline characteristics and that individuals were not randomly assigned more than once.
Trace participants for follow-up surveys. Contact information is essential for locating participants for follow-up surveys. Social Security Numbers (SSNs) will be used to obtain National Directory of New Hires (NDNH) data on earnings to measure program impacts. The SSN will also be useful for tracing participants for follow-up surveys.
Increase precision. Information collected on the BIF will also be used as covariates to increase the precision of impact estimates and to identify subgroups for analysis (see the illustrative sketch following this list).
Conduct nonresponse bias analysis. Baseline data will provide valuable information for conducting nonresponse analysis for the follow-up surveys. Survey respondents and non-respondents will be compared on demographic characteristics, household composition, educational attainment and training activities, and employment history collected on the BIF.
Provide measures for the QED. The BIF will also collect information on a select set of characteristics in a way that is comparable to the DOL Participant Individual Record Layout (PIRL) characteristics data recorded for all 53 TechHire and SWFI participants. This will support the matching of the pooled RCT control group sample to all program participants in the QED without requiring grantees to enter PIRL data for control group individuals.
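To illustrate the precision gain noted above, the following sketch compares an unadjusted treatment-control difference with a regression-adjusted estimate. All data and variable names (e.g., prior_earnings) are simulated, hypothetical stand-ins for BIF measures; they are not the study's actual variables or results.

```python
# Illustrative sketch: baseline covariates shrink the standard error of an
# experimental impact estimate. Everything below is simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 3600                                      # total RCT sample from Table A.1.1
treat = rng.integers(0, 2, n)                 # 1 = treatment, 0 = control
prior_earnings = rng.normal(20000, 8000, n)   # hypothetical BIF covariate
true_impact = 1500
earnings = (10000 + 0.8 * prior_earnings      # outcome depends on the covariate
            + true_impact * treat
            + rng.normal(0, 5000, n))

# Unadjusted difference in means
m1 = sm.OLS(earnings, sm.add_constant(treat)).fit()

# Regression-adjusted estimate using the baseline covariate
X = sm.add_constant(np.column_stack([treat, prior_earnings]))
m2 = sm.OLS(earnings, X).fit()

print(f"Unadjusted impact: {m1.params[1]:8.0f} (SE {m1.bse[1]:.0f})")
print(f"Adjusted impact:   {m2.params[1]:8.0f} (SE {m2.bse[1]:.0f})")
# The adjusted SE is smaller because prior_earnings absorbs outcome variance.
```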
A.2.2 6-Month Follow-Up Survey
The purpose of the 6-month follow-up survey is to collect information about training completion, educational progress, and employment and earnings for study participants in the six RCT grantee sites. The survey will document the types of program services being received in the short term and provide data, in the RCT sites, that can be used to determine whether TechHire and SWFI are having short-term impacts on participation in and completion of training, receipt of supportive services (including child care), and educational progress. It will also identify any early impacts on employment and earnings. In addition, this survey will capture the intensity of the services received and the nature and extent of contrast in service receipt between the treatment and control groups.
A.2.3 Site Visits (Interviews with Grantee Staff and Partners)
Site visits will be conducted in the six RCT sites. This first round of site visits will provide a more in-depth look at the programs undergoing RCTs. Each visit will be 2-3 days in length and will include half-hour interviews with staff and observations of program operations. Combining the site visit data with all other data collected about the sites’ programs will permit an analysis of why impacts may or may not occur for particular RCT sites. The visits will provide more nuanced information about implementation challenges encountered and lessons learned about implementation and operations.
During the site visits, half-hour semi-structured in-person interviews will be scheduled with a select group of grantee partners. A subset of questions will be asked during each of the two implementation research visits to the six TechHire/SWFI RCT sites: some questions will be asked at both visits, but many only during the first visit or the second visit. In addition, some questions will be asked only in the SWFI sites, and certain questions will be applicable to employer partners but not to other partners. These interviews will cover topics such as program structure; recruitment; intake and assessment; training services; child care, counseling, financial, or other supportive services; program successes and challenges; and program sustainability. Combined with other site visit data, this will help describe why impacts may or may not occur for particular RCT sites. The purpose of these interviews is to understand the implementation of the TechHire/SWFI initiative, the challenges being encountered, and the lessons learned about implementation and operations.
A.2.4 Participant Tracking Form
Participants will be mailed a tracking form between the 6-month and 18-month follow-up surveys. The purpose of the tracking form is to encourage participants to provide updated contact information to facilitate locating them for the 18-month follow-up survey. The tracking form will be sent via the U.S. Postal Service and include a postcard that participants can return with their updated contact information. The tracking form will also provide an email address and telephone number that participants can use to provide updated information to the evaluation team.
A.3 Use of Information Technology
Grantee staff will use an online random assignment application to enter study participants’ identifying information as well as self-reported demographic, education, and employment history information. Key identifiers (name and SSN) will be entered into the database twice. The application recognizes discrepancies in data entry of these fields and requires users to correct them before submitting the record. This application will check for accuracy, provide efficient upload of participant BIF information, and randomly assign participants to the treatment or control group.
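The sketch below illustrates the double-entry check and duplicate screen described above. The field names, the in-memory registry, and the 50/50 assignment ratio are illustrative assumptions, not a specification of the actual application.

```python
# Minimal sketch of double-entry validation and random assignment.
# Field names, the 50/50 ratio, and the in-memory registry are illustrative
# assumptions; they do not describe the study's actual application.
import random

_enrolled_ssns = set()  # registry used to block duplicate random assignment

def validate_double_entry(entry1: dict, entry2: dict) -> list:
    """Return the key identifier fields that do not match across the two entries."""
    return [f for f in ("name", "ssn") if entry1[f] != entry2[f]]

def randomly_assign(entry1: dict, entry2: dict) -> str:
    mismatches = validate_double_entry(entry1, entry2)
    if mismatches:
        # Mirror the described behavior: force correction before submission.
        raise ValueError(f"Re-enter mismatched field(s): {mismatches}")
    if entry1["ssn"] in _enrolled_ssns:
        raise ValueError("Participant already randomly assigned.")
    _enrolled_ssns.add(entry1["ssn"])
    return random.choice(["treatment", "control"])  # assumed 50/50 ratio

first = {"name": "Jane Doe", "ssn": "123-45-6789"}
second = {"name": "Jane Doe", "ssn": "123-45-6789"}
print(randomly_assign(first, second))  # -> "treatment" or "control"
```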
The evaluation team will conduct the 6-month follow-up survey using a sequential multi-mode approach that begins with the most cost-effective mode. Participants will first be invited to complete a web survey. Web surveys are low cost and less burdensome, as they offer easy access and submission while allowing participants to complete the survey at a convenient time and at their own pace. A web survey has the additional advantages of reducing the potential for errors by checking for logical consistency across answers, accepting only valid responses, and enforcing automated skip patterns. Participants will be provided the URL and a unique PIN to access the survey. To increase response rates, automated weekly email reminders and text messages will be sent to all non-respondents. The evaluation team does not expect to achieve the desired response rate via the web alone. After three reminders have been sent, interviewers will contact all non-respondents and invite them to complete the survey by telephone. If participants cannot be reached by telephone, interviewers will search for them using batch tracing and individualized tracing. As a last resort, field staff will be dispatched to the last known address and attempt to locate participants via foot tracing; participants located this way will be provided with a cell phone and asked to call Westat’s Telephone Research Center (TRC) to complete the survey. It is anticipated that 25 percent of completed surveys will be completed via the web and 75 percent via telephone.
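The sequential multi-mode design can be summarized as an ordered escalation plan, sketched below. The step sequence follows the description above; the exact cadence and labels are illustrative assumptions.

```python
# Sketch of the sequential multi-mode follow-up protocol as an ordered
# escalation plan. Steps mirror the description above; timing labels beyond
# the weekly reminders are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Step:
    mode: str
    action: str

PROTOCOL = [
    Step("web",   "Mail invitation with survey URL and unique PIN"),
    Step("web",   "Automated email/text reminder to non-respondents (week 1)"),
    Step("web",   "Automated email/text reminder to non-respondents (week 2)"),
    Step("web",   "Automated email/text reminder to non-respondents (week 3)"),
    Step("phone", "Interviewer calls remaining non-respondents"),
    Step("phone", "Batch and individualized tracing for unreachable cases"),
    Step("field", "Foot tracing at last known address; lend cell phone to call TRC"),
]

def next_step(completed_steps: int, responded: bool):
    """Return the next contact attempt, or None once the case is resolved."""
    if responded or completed_steps >= len(PROTOCOL):
        return None
    return PROTOCOL[completed_steps]

print(next_step(0, responded=False).action)  # first action for a new case
```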
A.4 Identification of Duplication of Information Collection Efforts
DOL is not aware of any previous or planned effort to collect similar information concerning TechHire and SWFI program impacts or implementation. The data collection is needed to gather the information necessary to address the research questions of the evaluation. Specifically, the information being recorded in the BIF is not available elsewhere. No other data source collects the needed characteristics of the sample in terms of education and employment history, work-related barriers, and training to ensure the integrity of the random assignment process. In addition, no other data source collects contact information needed to locate individuals for follow-up surveys.
In an effort to reduce burden on grantees and participants and avoid duplication of current data collection efforts, the evaluation will use two existing data sources. First, the National Directory of New Hires (NDNH), maintained by the Office of Child Support Enforcement (OCSE), contains data on employment and earnings outcomes. These data are accurate and have better coverage of some jobs, such as federal jobs, compared to state Unemployment Insurance (UI) records. One disadvantage is that NDNH does not contain information on hours worked or hourly wages.
Second, the DOL PIRL provides individual-level data collected from grantees on TechHire/SWFI treatment participants, including demographic and socioeconomic characteristics, training, service receipt, and outcomes. The DOL PIRL captures SSNs, which will be used to link data on employment and earnings for the QED. Use of the DOL PIRL provides access to information on participant outcomes for the 47 grantees not included in the RCT.
A.5 Impacts on Small Businesses or Other Small Entities
The evaluation will impose no burden on small businesses. However, because some grantees are community-based organizations, the study team will work with all grantees to streamline the intake and randomization process to minimize its impact on grantee staff. The randomization process is based on an online random assignment application designed to efficiently share BIF data with the study team. The study team will provide system user manuals, technical assistance, and training to all program staff to help streamline this process. Collection of follow-up information from participants does not impact small businesses or small entities. The site visits will be scheduled with grantees in advance at a convenient time, and the study team will ensure that each visit is efficient and productive. DOL is collecting the minimum amount of data necessary.
A.6 Consequence to Federal Program or Policy if Collection is not Conducted
The evaluation will contribute to the body of literature about strategies to help low-wage workers obtain and advance in employment. Moreover, since DOL is funding other H-1B skills training programs, it is important to have rigorous information about the implementation and impact of the programs. If the information collection is not conducted, DOL will not be able to determine whether the grantee programs are effective and which strategies are effective.
Without collecting the BIF, random assignment could not be implemented. The lack of baseline information would make it impossible to determine whether random assignment was correctly implemented and for whom the program is likely to be most effective. Lack of baseline information would also lead to less precise impact estimates, making it difficult to detect differences between the treatment and control groups, and would leave adjustments for nonresponse to the follow-up surveys without rich data. Finally, if the baseline data were not collected, response rates to the follow-up surveys would be lower, increasing the likelihood of nonresponse bias.
Without collecting the 6-month follow-up survey, it would be impossible to determine whether the program increases completion of education and training and alleviates work-related barriers, such as transportation and childcare. Not knowing whether the program had an impact on these intermediate outcomes would make it impossible to interpret the medium-term impact findings on employment and earnings.
Finally, without the site visit interviews, the evaluation team would not have information about how the programs are implemented, making it difficult to interpret the impact findings.
A.7 Special Data Collection Circumstances
There are no special circumstances relating to the guidelines of 5 CFR 1320.5. This request fully complies with 5 CFR 1320.5.
A.8 Federal Register Notice
DOL published a notice on November 17, 2016 in the Federal Register, Volume 81, Number 222, pages 81172-81174 and provided a 60-day period for public comments. A copy of this notice is included in this package. DOL did not receive any public comments.
The following people were consulted in developing the study design.
Technical Working Group
Kevin M. Hollenbeck, Ph.D., Vice President, Senior Economist, W.E. Upjohn Institute
Jeffrey Smith, Professor of Economics, Professor of Public Policy, University of Michigan
Gina Adams, Senior Fellow, Center on Labor, Human Services, and Population at The Urban Institute
David S. Berman, MPA, MPH, Director of Programs and Evaluation for the NYC Center for Economic Opportunity, in the Office of the Mayor
Mindy Feldbaum, Principal at the Collaboratory
A.9 Payments/Gifts to Respondents
For the follow-up survey to be most successful, the evaluator determined that monetary gifts should be provided to study participants in appreciation of the time they spend on data collection activities. These tokens of appreciation are a powerful tool for maintaining low attrition rates in longitudinal studies. The use of monetary gifts for the follow-up survey can help maximize response rates. Three factors helped to determine the gift amount for the follow-up survey:
Respondent burden;
Costs associated with participating in the interview at that time; and
Other studies of comparable populations and burden.
Previous research has shown that sample members with certain socioeconomic characteristics are significantly more likely to respond to surveys when there is a monetary gift. In particular, sample members with low incomes and/or low educational attainment have proven responsive to incentives (Duffer et al. 1994; Educational Testing Service 1991).
Sample members assigned to the control group will receive $50.00 as a token of appreciation for completing the comprehensive intake and screening process, which could require several return visits to the grantee’s offices. Those randomly assigned to the control group will have gone through the same intake process as treatment group members and would therefore have qualified to receive services in the absence of the study. This token of appreciation, provided to control group members at the time of random assignment, will help offset the hours they spent upfront, ameliorate their disappointment, and increase their likelihood of staying engaged in the evaluation. It will encourage control group members to stay in touch with the evaluation team and be more responsive to the 6-month and 18-month follow-up survey questionnaires.
Previous studies have successfully used such an approach. For example, the Small Loan Study, conducted by MDRC and funded by the Robin Hood Foundation, provided payments of $65 to individuals randomly assigned to the control group (compared to $15 to individuals randomly assigned to the program group). Subsequent to the implementation of the new study payment approach, study recruitment and program staff satisfaction with the evaluation improved substantially, and cohorts with higher control group study intake incentives had higher survey response rates. Similarly, Westat’s Mental Health Treatment Study (MHTS), a large-scale random assignment study sponsored by the Social Security Administration (SSA), provided an incentive for survey completion to the control group only, in an effort to keep them engaged in the study. The MHTS achieved higher survey response rates for control than for program group members in some cases. While there is no literature to establish an appropriate amount for the control group incentive, we believe that $50 is appropriate given that in some RCT sites, participants will be required to come to two or three sessions before being deemed eligible for the program and randomly assigned.
Participants who complete the 6-month follow-up survey will receive $30 if they complete the survey within the first four weeks and $20 if they complete it after that time. This approach is supported by a number of previous studies. As has been documented elsewhere,1 it is increasingly difficult to achieve high response rates in surveys. In some instances, incentives have been found to be cost neutral because the price of the incentive is offset by the reduction in field time and contact attempts necessary to garner participation.2 Incentives are a reliable way to increase the overall quality of the survey by maximizing the response rate and increasing the efficiency of survey operations. Several studies have found that incentives are particularly effective for minority and low-income groups.3 The choice of an early response incentive is guided by the desire to encourage participants to complete the survey by the most cost-effective method possible, thereby increasing the efficiency of data collection. The literature suggests that early bird incentives can be effective at increasing early response and may even be more effective than pre-paid incentives.4 One study found that providing an early bird incentive of $100 increased the response rate in the cutoff period from 20 percent to 29 percent compared to a $50 incentive.5 Similarly, DOL’s YouthBuild Evaluation used an early response incentive experiment and found that those who were offered a $40 incentive had 38 percent higher odds of completing their survey within the first four weeks compared to those who were offered a $25 incentive. An early response incentive has the potential to reduce overall data collection costs by shortening the data collection period and driving more responses into the more cost-effective web mode.
Finally, a $2 pre-paid incentive will be provided for respondents to return the contact information update postcard. Collection of between-wave contact information updates for respondents in longitudinal studies has been shown to reduce attrition and reduce costs by decreasing the number of call attempts and the need for more costly tracing efforts and refusal conversion.6 Studies have shown that providing incentives to update or confirm contact information between waves increases the response rate to the contact information mailing. The incentive will be included as a pre-payment because extensive literature shows that pre-payments, even those as little as $2, can have a significant impact on increasing response rates compared to promised incentives.7 This same research has found that there are smaller gains in response rate for pre-paid incentives above $2.8
A.10 Assurance of Privacy
Westat and MDRC are very cognizant of federal, state, and DOL data security requirements. All Westat and MDRC study staff will comply with relevant policies related to secure data collection, data storage and access, and data dissemination and analysis. All staff working with personally identifiable information (PII) will sign data security agreements. Westat’s and MDRC’s security policies meet the legal requirements of the Privacy Act of 1974; the Family Educational Rights and Privacy Act of 1974 (the “Buckley Amendment”); the Freedom of Information Act; and related regulations to assure and maintain the privacy of program participants. The evaluation team will take the following precautions to ensure the privacy and anonymity of all data collected:
All project staff, including telephone interviewers, research analysts, and systems analysts, will be instructed in the privacy requirements of the data and will be required to sign statements affirming their obligation to maintain privacy;
Only evaluation team members who are authorized to work on the project will have access to client contact information, completed survey instruments, and data files;
Data files that are delivered will contain no personal identifiers for respondents;
All data will be transferred via a secure file transfer protocol (FTP); and
Analysis and publication of project findings for the participant survey will be in terms of aggregated statistics only.
All participants will be informed that the information collected will be reported in aggregate form only and that no reported information will identify any individuals. In addition, the study has requested a Certificate of Confidentiality from the U.S. government, meaning that the evaluation team cannot be compelled to identify individual participants, even under court order or subpoena. This information will be conveyed to participants in the Informed Consent Form.
Upon obtaining signed consent forms from the participant, a staff member will log into MDRC’s secure web-based random assignment application and enter the study participant’s identifying information and selected (self-reported) measures of the participant’s demographic, education, and employment history. If the online random assignment system is not functioning for any reason, staff members will either collect baseline information on a paper form or use MDRC’s phone backup system. MDRC’s security protocols cover all aspects of privacy for hard copy and electronic data. Any paper forms will be shipped to MDRC using Federal Express or an equivalent system that allows for package tracking; if any item is delayed or lost, it will be investigated immediately. MDRC stores all documents containing sensitive information in locked file cabinets or locked storage rooms when not in use. Unless otherwise required by DOL, these documents will be destroyed when no longer needed in the performance of the project.
Access to the online surveys will require a unique PIN provided by the contractor to the respondent. Survey data collection will use secure sockets layer encryption technology to ensure that information is secure and protected.
Recordings will be made of the interviews, subject to respondent approval. Interviewers will ensure a private meeting space. Written materials and analyses from the interviews to be used as part of study reports will be prepared in such a way as to protect the identity of individuals. Only the site visit study team staff present at the interviews, the principal investigator, project director, and selected staff helping transcribe the recordings will have access to the notes. Notes will be securely stored in protected electronic files or locked cabinets. Only the staff members present at the interviews or transcribing the recordings will have access to the recordings. All site visit staff, project leadership, and transcribing staff will sign privacy agreements before the interviews are conducted or before working with the data.
When not in use, all completed hardcopy documents will be stored in locked file cabinets or locked storage rooms. Unless otherwise required by DOL, these documents will be destroyed when no longer needed for the project. Evaluation team members working with the collected data will have previously undergone background checks that may include filling out an SF-85 or SF-85P form, authorizing credit checks, or being fingerprinted.
At the conclusion of the study, the evaluation team will provide DOL with a public-use file (PUF) containing individual-level data from the BIF and 6-month follow-up survey that is stripped of all personally identifying information. The PUF will not contain NDNH data. The PUF will be subject to a disclosure risk analysis.
A.11 Justification of Questions of a Sensitive Nature
The questions on the BIF are a standard part of enrollment in most programs. One exception is personally identifying information, including name, address, email, phone number, and SSN. Participants will be asked to update this information on the 6-month follow-up survey. The SSN is important for matching participants with DOL PIRL data and NDNH data for analysis of program impacts and for ensuring that participants are not randomly assigned more than once. Participant names and contact information are necessary in order to locate participants for the follow-up surveys. The BIF and follow-up surveys also collect the names and contact information of up to three people who know the respondent well, to be used for locating participants.
The BIF also collects information on criminal justice involvement, which may be considered sensitive by some participants. Several grantees are serving individuals with criminal records, and an important policy question is whether job training works for this population.
A.12 Estimate of Annualized Burden Hours and Costs
Collection of baseline data and random assignment will begin as soon as OMB approval is received for this package. Table A.12 presents the estimated annual respondent hour and cost burden for the BIF, 6-month follow-up survey, site visit interviews, and tracking form. Burden estimates are based on the contractor’s experience conducting similar data collections and on hourly wage rates from the Bureau of Labor Statistics (BLS).
The evaluation team estimates that the BIF will take participants 30 minutes on average to complete. This includes the time to read the consent form. The evaluation team projects that 1,200 participants will be randomly assigned per year. The annual burden hours for the BIF are therefore (1,200 x 30/60) = 600 hours.
The evaluation team estimates an 80 percent response rate to the 6-month follow-up survey. The 80 percent response rate equates to (.80 x 1,200) = 960 participants. Completion of the 6-month follow-up survey will be approximately 20 minutes. The annual burden hours are 960 x 20/60 = 320 hours. The burden does not include burden associated with the 18-month follow up survey, which will be included in a future OMB submission as discussed above.
The evaluation team will interview approximately 10 grantee staff and 8 partner staff in each of the six grantees included in the RCT over a 3-year period: (10 x 6) + (8 x 6) = 108 interviews, or 36 respondents per year. The interviews will take approximately one hour to complete. Since participation in the evaluation is required of grantees, we expect a 100 percent response rate. Therefore, the total annual burden for grantee and partner staff interviews is 36 x 1 hour = 36 hours.
The tracking form will take approximately 10 minutes to complete. We expect a 100 percent response rate from those taking the 6-month follow-up survey. The annual burden hours for respondents are 960 x 10/60 = 160 hours.
The cost represents the sum across the data collections when the average hourly wage rate for each respondent category is multiplied by the corresponding number of hours, as shown in Table A.12. The average hourly wage rates were obtained using the latest Occupational Employment Statistics data on wages, adjusted for inflation.9 The total cost to respondents for these data collections is $9,575.
Table A.12 Estimated Annualized Respondent Hour and Cost Burdens
Instruments                              | Number of Respondents | Number of Responses per Respondent | Total Number of Responses | Avg. Burden per Response (in Hrs.) | Total Hour Burden | Average Wage Rate | Total Cost Burden
Baseline Information Form                | 1,200 | 1 | 1,200 | 30/60 | 600   | $7.25 a  | $4,350
6-Month Follow-Up Survey                 | 960   | 1 | 960   | 20/60 | 320   | $7.25 a  | $2,320
Grantee Staff & Partner Staff Interviews | 36    | 1 | 36    | 1     | 36    | $48.46 b | $1,745
Tracking Form                            | 960   | 1 | 960   | 10/60 | 160   | $7.25 a  | $1,160
Total                                    | 3,156 |   | 3,156 |       | 1,116 |          | $9,575
a The hourly wage rate for participants was assumed to be the Federal minimum wage ($7.25).
b The hourly wage rate for grantee and partner staff was taken from the Bureau of Labor Statistics, “Occupational Employment Statistics—May 2016 National Occupational Employment and Wage Estimates,” Management Occupations (SOC code 11-0000): https://www.bls.gov/oes/current/oes_stru.htm#00-0000.
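The figures in Table A.12 follow from simple arithmetic. The sketch below reproduces the hour and cost totals using only the counts and wage rates stated above.

```python
# Reproduces the burden-hour and cost arithmetic behind Table A.12.
# All counts, durations, and wage rates are taken directly from the text above.
rows = [
    # (instrument, respondents, minutes per response, hourly wage)
    ("Baseline Information Form",    1200, 30, 7.25),
    ("6-Month Follow-Up Survey",      960, 20, 7.25),
    ("Grantee & Partner Interviews",   36, 60, 48.46),
    ("Tracking Form",                 960, 10, 7.25),
]

total_hours = total_cost = 0
for name, n, minutes, wage in rows:
    hours = n * minutes / 60          # annual burden hours for this instrument
    cost = hours * wage               # annual cost burden at the stated wage
    total_hours += hours
    total_cost += cost
    print(f"{name:32s} {hours:7.0f} hrs  ${cost:9,.2f}")

print(f"{'Total':32s} {total_hours:7.0f} hrs  ${total_cost:9,.2f}")
# -> 1,116 total hours and ~$9,575 total cost, matching Table A.12.
```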
A.13 Estimates of Annualized Respondent Capital and Maintenance Costs
There are no capital/start-up or ongoing operation/maintenance costs associated with this information collection.
A.14 Estimates of Annualized Cost to the Government
The total cost of the information collection in this request is $1,116,678. The annualized cost is $1,116,678 / 3 = $372,226.
1. The estimated cost to the federal government for the contractor to carry out this study, based on the contractor’s detailed budget of labor and other costs, is $1,060,000 for survey development, data collection, and fielding of the BIF, 6-month follow-up survey, site visits, and tracking form.
2. In addition, DOL expects the annual level of effort for Federal government technical staff to oversee the contract will require 200 hours for one Washington D.C.-based GS-14, Step 4 employee earning $59.04 per hour.10 To account for fringe benefits and other overhead costs the agency applies a multiplication factor of 1.6. The annual cost is $18,893 ($59.04 x 1.6 x 200 = $18,893). The data collection period covered by this justification is three years, so the estimated total cost is $56,678 ($18,893 x 3 = $56,678). The total cost is $1,060,000 + $56,678 = $1,116,678.
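The sketch below simply reproduces the arithmetic above, using only the figures stated in the text.

```python
# Reproduces the cost-to-government arithmetic in A.14 from figures in the text.
contractor_cost = 1_060_000   # contractor budget for this clearance
hourly_rate = 59.04           # GS-14, Step 4, Washington DC locality rate
overhead = 1.6                # fringe-benefit/overhead multiplier
hours_per_year = 200          # annual federal staff oversight hours
years = 3                     # data collection period covered by this request

annual_staff_cost = hourly_rate * overhead * hours_per_year   # ~ $18,893
total_staff_cost = annual_staff_cost * years                  # ~ $56,678
total_cost = contractor_cost + total_staff_cost               # ~ $1,116,678
annualized = total_cost / years                               # ~ $372,226

print(f"Annual federal staff cost: ${annual_staff_cost:,.0f}")
print(f"Total cost:                ${total_cost:,.0f}")
print(f"Annualized cost:           ${annualized:,.0f}")
```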
A.15 Changes in Hour Burden
This is a new data collection.
A.16 Plans for Tabulation and Publication
The Evaluation of Strategies Used in the TechHire and SWFI Grant Programs data collection activities in this request will support the following major deliverables:
Short Paper 1. The first of two short papers will focus on interim implementation lessons, grantee strategies, and program participation. This short paper will include data gathered from the site visits to RCT grantees.
Short Paper 2. The second short paper will document short-term impacts based on the 6-month survey. The main short-term outcomes of interest will include training completion or continued enrollment in training and educational progress. It will also consider impacts on employment and childcare arrangements.
Issue Brief 1. An issue brief describing interim implementation and impact findings based on the two short papers will be submitted to DOL.
Final Report. A final report documenting the impact and implementation findings will be submitted to DOL. The report will document the effects of participation on employment and earnings using the NDNH data, and on employment, wages, and hours worked using the 18-month survey. The report will include analysis of impacts for key subgroups, pooling across sites.
Issue Brief 2. The final issue brief, to be delivered in June 2021, will focus on final report highlights.
In addition to these reports, the evaluation will also include three briefings on the following topics: the study design plans, interim findings, and final findings.
Table A.16.1 presents an overview of the project schedule.
Table A.16.1 Overview of project data collection schedule
Subtask and deliverables                      | Estimated date (12-month intake) | Associated Publications
Random assignment                             | 1/2018-12/2018                   | N/A
6-month follow-up survey                      | 7/2018-6/2019                    | Short Paper 2
Participant Tracking Form                     | 1/2019-12/2019                   | N/A
18-month follow-up survey                     | 7/2019-6/2020                    | Final Report
Grantee survey                                | 9/2018-12/2018                   | Final Report
Partner survey                                | 9/2018-12/2018                   | Final Report
Grantee phone interviews                      | 2/2019-4/2019                    | Final Report
Partner phone interviews                      | 2/2019-4/2019                    | Final Report
Round 1 site visit grantee interviews         | 4/2018-9/2018                    | Short Paper 1, Final Report
Round 1 site visit grantee partner interviews | 4/2018-9/2018                    | Short Paper 1, Final Report
Round 2 site visit grantee interviews         | 10/2019-3/2020                   | Final Report
Round 2 site visit partner interviews         | 10/2019-3/2020                   | Final Report
A.17 Approval to Not Display the Expiration Date
The collection of interview and survey data will show the OMB expiration date on any written instrumentation. The following Public Burden Statement will appear:
Responding to this questionnaire is voluntary. Public reporting burden for this collection of information is estimated to average XX minutes per respondent. Send comments concerning this burden estimate or any other aspect of this collection of information to the U.S. Department of Labor, Chief Evaluation Office, Room 2218, Constitution Ave., Washington, DC 20210. According to the Paperwork Reduction Act of 1995, an agency may not conduct or sponsor, and a person is not required to respond to a collection of information unless it displays a valid OMB control number. The OMB control number for this information collection is 1290-XXXX.
A.18 Exceptions to the Certification Statement
There are no exceptions to the Certification for Paperwork Reduction Act (5 CFR 1320.9) for this study.
1 For example, see Brick, J. M., & Williams, D. (2013). Explaining rising nonresponse rates in cross-sectional surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 36-59; and Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69(1), 87-98.
2 Research Triangle Institute, Evidence-based Practice Center, North Carolina Central University (Durham, North Carolina), & West, S. (2002). Systems to rate the strength of scientific evidence (pp. 51-63). AHRQ (Agency for Healthcare Research and Quality).
3 Singer, E. (2002). The use of incentives to reduce nonresponse in household surveys. Survey Nonresponse, 51, 163-177.
4 LeClere, F., Plumme, S., Vanicek, J., Amaya, A., & Carris, K. (2012). Household early bird incentives: Leveraging family influence to improve household response rates. American Statistical Association Joint Statistical Meetings, Section on Survey Research.
5 Coopersmith, J., Vogel, L. K., Bruursema, T., & Feeney, K. (2016). Effects of incentive amount and type on web survey response rates. Survey Practice, 9(1).
6 McGonagle, K., Schoeni, R., & Couper, M. (2013). The effects of a between-wave incentive experiment on contact update and production outcomes in a panel study. Journal of Official Statistics.
7 For example, Cantor, D., O’Hare, B., & O’Connor, K. (2007). The use of monetary incentives to reduce non-response in random digit dial telephone surveys. Pp. 471-498 in J. M. Lepkowski, C. Tucker, J. M. Brick, E. de Leeuw, L. Japec, P. J. Lavrakas, M. W. Link, & R. L. Sangster (Eds.), Advances in Telephone Survey Methodology. New York: John Wiley and Sons; and Singer, E., Van Hoewyk, J., Gebler, N., Raghunathan, T., & McGonagle, K. (1999). The effect of incentives on response rates in interviewer-mediated surveys. Journal of Official Statistics, 15, 217-230.
8 Trussell, N., & Lavrakas, P. (2004). The influence of incremental increases in token cash incentives on mail survey response: Is there an optimal amount? Public Opinion Quarterly, 68(3), 349-367.
9 Bureau of Labor Statistics, “Occupational Employment Statistics—May 2016 National Occupational Employment and Wage Estimates” https://www.bls.gov/oes/current/oes_nat.htm
10 See Office of Personnel Management 2017 Hourly Salary Table: https://www.opm.gov/policy-data-oversight/pay-leave/salaries-wages/salary-tables/pdf/2017/DCB_h.pdf