
SUPPORTING STATEMENT PART B: EVALUATING REGISTERED APPRENTICESHIP INITIATIVE STUDY

OMB Control Number 1290-0NEW

OMB Expiration Date: TBD

OMB Supporting statement

PART B: COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

In this document, the Department of Labor (DOL) requests clearance from the Office of Management and Budget (OMB) under the Paperwork Reduction Act (PRA) for a new collection associated with the Evaluating Registered Apprenticeship Initiative (ERAI) study. DOL's Chief Evaluation Office commissioned the ERAI study to design and conduct analyses that add to the evidence base on apprenticeship strategies and models through an evaluation of the Apprenticeship Building America (ABA) grants. ABA awarded grants in four categories: state apprenticeship system building and modernization (category 1), youth apprenticeships (category 2), pre-apprenticeships (category 3), and registered apprenticeship hubs (category 4).

We discuss here 14 different survey and interview instruments that are part of this study:


  1. ABA Youth Apprenticeship and Pre-apprenticeship Grantee Survey

  2. ABA State Apprenticeship System Grantee Survey

  3. ABA Registered Apprenticeship Hub Grantee Survey

  4. ABA Pre-apprenticeship Participant Survey

  5. ABA Apprenticeship Survey

  6. ABA Participant Focus Group Protocol

  7. ABA Youth Apprenticeship and Pre-apprenticeship Grantee Staff Interview Protocol

  8. ABA Youth Apprenticeship and Pre-apprenticeship Partner/Employer Interview Protocol

  9. ABA State Apprenticeship System Grantee Staff/Partner Interview Protocol

  10. ABA State Apprenticeship System Employer Interview Protocol

  11. ABA Registered Apprenticeship Hub Grantee Staff Interview Protocol

  12. ABA Registered Apprenticeship Hub Partner Interview Protocol

  13. ABA Registered Apprenticeship Hub Grantee Customer Interview Protocol

  14. ABA Impact Evaluation Baseline Survey - participants



B.1. Respondent Universe and Sampling

In this section, we describe the respondent universe and sampling for each instrument. We discuss the selection of participants in part “a” of this section and response rates in part “b” for each data collection instrument in turn.

  a. Selection of participants


Grantee survey (instruments 1, 2, and 3). The study team will administer three separate web-based surveys to the ABA grantees: Youth Apprenticeship and Pre-Apprenticeship grantees (19), State Apprenticeship System grantees (5), and Registered Apprenticeship Hub grantees (15). The surveys are designed to provide the breadth of knowledge needed to systematically understand how grantees have structured and implemented their apprenticeship initiatives and/or to gain base information that will be built upon in the interviews. There is no sampling; every grantee will be surveyed.

Participant Apprentice and Pre-apprentice surveys (instruments 4 and 5). The study team will administer two separate surveys: one to pre-apprentices and one to apprentices. The Youth Apprenticeship and Pre-Apprenticeship grantees (grant categories 2 and 3, respectively) have both apprentices and pre-apprentices, and many of the apprentices will have been grant-supported pre-apprentices before becoming apprentices. We determined that administering two separate surveys would limit the number of questions asked and the respondent burden while still enabling the study to collect all the information needed. We derived the universe for both surveys from the targets set by grantees for apprentice and pre-apprentice participation, adjusted for the elapsed duration of the grants at the time the survey sample is selected. Targets were derived from the grantees' grant applications. The total targets for all grantees in categories 2 and 3 are 7,550 pre-apprentices enrolled and 6,050 registered apprentices enrolled (some of these individuals are also in the pre-apprentice targets). The grants are funded for five years starting June 2022, so we estimate that two-fifths of the target enrollment will have been reached.

We will request pre-apprentice and apprentice participant lists from category 2 and 3 grantees near the time of the survey's launch. We will select a random sample from the list of those who began a pre-apprenticeship and either are still engaged, have completed, or left without completing. We estimate that 3,020 pre-apprentices will have enrolled by the time of the survey. We will randomly select 3,000 pre-apprentices to survey.

We will also select a random sample of all who started an apprenticeship since the beginning of the grant and are not on the pre-apprentice list, plus those who started an apprenticeship and were pre-apprentices but were not selected to receive the pre-apprentice survey. We estimate that 3,430 apprentices will have enrolled by the time of the survey, supplemented by pre-apprentices who were not selected for that survey. We will randomly select 3,000 apprentices to survey.

Participant focus group protocol (instrument 6). The study team will conduct approximately 9 focus groups with apprentices and pre-apprentices during the visits: at 2 of the Youth Apprenticeship grantees, 2 of the Pre-Apprenticeship grantees, and all 5 of the State Apprenticeship System grantees. We will purposively select the Youth Apprenticeship and Pre-Apprenticeship grantees that hold the focus groups from the 12 Youth Apprenticeship and Pre-Apprenticeship grantees selected for implementation visits (see below). One factor that we will consider in this selection is grantee willingness to hold a focus group. The project team will work with grantees and partners to recruit enough apprentices or pre-apprentices to generate 6 to 9 attendees for each focus group. These are not meant to be representative samples of participants but will give depth to our understanding of the apprenticeship experience. The universe is all pre-apprentices and apprentices, at the time of the grant visits, in the 12 grantees selected for visits. The total estimated target enrolled by the time of the grant visits is 6,450. We estimate the universe as 12/19 of the total target (12 grant visits/19 grants).

Grantee staff, partner, employer, and customer interview protocols (instruments 7-13). The study team will conduct interviews with staff and partners of each of the selected grantees using the appropriate interview guide for the type of interviewee. Working closely with DOL, the study team will purposively select the grants for the remaining implementation visits. The processes for selecting grants, and the interviewees within grants, for the different categories of grants are described below. In general, the frame of potential grantee staff, employers, partners, and customers for each grant selected includes all staff, employers, and partners participating in the grant program for each grant category and, in addition, all technical assistance (TA) customers who have been served by the Registered Apprenticeship Hub category grant program. Estimates of the universe for each data collection activity come from our review of grant applications. The interviewees per type of respondent per grant will be selected purposively to provide information on different grant roles and perspectives, and their number is in line with available study resources.

The study team will use purposive sampling to select 12 grantees for implementation visits from the 19 Youth Apprenticeship and Pre-Apprenticeship grantees. The grants will be selected purposively because resources do not allow all of the grantees to be included in the implementation study. To ensure coverage of grantee features that are important to DOL for learning from this study, we will select grants that vary in the apprenticeship model used (a balance between youth apprenticeship and pre-apprenticeship grants); industry and occupational focus of the grant; target age of participants served, ensuring coverage of high-school-aged pre-apprentices and youth apprentices; target populations to be served (especially those targeting underserved groups); and geographic area served by the grant. A memo will be submitted to DOL identifying the grants selected for visits, including the reasoning, criteria, and process used for selection. Each implementation visit will include 4 interviews with grantee staff and 3 interviews with partners and employers. We will use purposive sampling to select these interviewees, with the help of grantee staff, identifying partners and employers of the grantee's apprenticeship and pre-apprenticeship programs that are either the largest, most important, or most innovative models. Our estimates of the universe sizes for grantee staff, partners, and employers provided in Table B.1 are based on our review of the grant applications.

The study team will conduct an implementation study visit to each of the 5 State Apprenticeship System grantees. The study team will identify respondents for each state, including 4 grantee staff interviews, 2 partner interviews, and 2 employer interviews. We will use purposive sampling to select interviewees with the help of grantee staff, identifying partners and employers to be interviewed. We will prioritize interviewing partners and employers who have substantial but varied ABA grant roles. Our estimates of universe sizes for grantee staff, partners, and employers provided in Table B.1 are based on our review of the grant applications.

The study team will conduct implementation visits to a sample of 5 grantees out of the 15 Registered Apprenticeship Hub grantees. We will use purposive sampling to select the grants to visit, because resources do not allow all of the grantees to be included in the implementation study. To ensure coverage of grantee features that are important to DOL for learning from this study, we will select a diverse range of grantees, in consultation with DOL, based on criteria including, but not limited to, Hub structure, focal occupations and industries, geographic area covered, and the capacity building and expansion strategies described in the grantee survey. A memo will be submitted to DOL identifying the grants selected for visits, including the reasoning, criteria, and process used for selection. From each selected Registered Apprenticeship Hub grant, we anticipate interviewing approximately 3 to 4 key grantee staff who are involved in the overarching management, strategic direction, and activities, as well as the day-to-day delivery of TA services. We will also identify 3 to 4 partners involved in each Registered Apprenticeship Hub's partnership network who can provide insight into the network and context about the role of the grantee and partners in carrying out collaborative grant activities. We will select these staff and partners using purposive sampling with the help of grantee staff. We will interview 2 to 3 TA customers to better understand their perspectives on TA services. We will use purposive sampling to select these respondents, with the help of grantees and partners, to identify customers who have had a range of experiences and outcomes. Our estimates of the universe sizes for grantee staff, partners/employers, and customers provided in Table B.1 are based on our review of the grant applications.

Impact Evaluation Participant consent and Baseline Survey (instrument 14)

A component of this evaluation is to develop an impact design to rigorously evaluate the effectiveness of pre-apprenticeship programs run by ABA grantees. The initial stage of the impact evaluation includes a participant baseline survey and consent form. Study participants will be potential pre-apprenticeship candidates seeking program services from the grantee organizations and their subgrantees and partners. The respondent universe for the baseline survey and consent form is all pre-apprentices from all ABA Pre-apprenticeship grantees and all but one of the ABA Youth Apprenticeship grantees (one of these grantees is not running a pre-apprenticeship program). A randomized controlled trial (RCT) design to estimate program impacts with control groups or enhanced treatment groups is planned. The study will include a purposive sample of up to 10 grantees from this universe of 18 potential grantees and 4,000 program participants split evenly between the research groups. The objective of site selection is to identify up to 10 grantees that are suitable candidates for participating in a random assignment evaluation to address the study research questions. The study will purposively select the grantees based on factors related to what can be learned from them and the feasibility of implementing random assignment. The study will consider five factors in determining a site's suitability for participating in the random assignment study:


  1. Sufficient sample size for estimating impacts. To ensure a study sample size that yields sufficient statistical power to detect impacts, each participating grantee must be able to recruit and enroll a sufficient number of participants. Clarifying discussions with grantees will record the planned enrollment and the expected intake period. The study will also assess whether grantees are over-subscribed and have the ability to recruit additional participants to fill a control group.

  2. Implementation status and readiness for evaluation. A second factor is that the grantee's program services are of sufficiently high quality and maturity.

  3. Service differential between the contrasted groups. A third factor is the differential between the pre-apprenticeship-related services provided to the research groups. Control group members denied pre-apprenticeship services will have access to other services available in the community (and perhaps even other services from the same grantee). Thus, it is critical that a sufficient differential exists between the tested program services and those available elsewhere in the community.

  4. Similarity of services and point of random assignment across study grantees. A fourth factor is the ability of participating grantees to implement a relatively consistent point of random assignment (for example, at the community college admissions office) and deliver a relatively similar set of intervention services, thereby ensuring that the impact analysis (which may need to pool across grant programs because of sample size considerations) tests a consistent model with focused research questions. Variation across grantees in services or points of random assignment can pose problems for the analysis because a pooled impact analysis effectively would treat them as the same program even if they actually vary substantially in the nature of their services or in how or when a worker is defined as a study participant.

  5. Appropriateness of implementing random assignment. A final factor is the feasibility of implementing random assignment. In some cases, programs could find themselves in conflict with partners who simply refuse to participate in such a study. This could occur, for example, if the grantee program focuses on special populations where there is not oversubscription to services or has an intensive eligibility and selection process for participation. During grantee selection, the study will focus on the grantees in which random assignment is more feasible and does not threaten the program's continued operations and recruitment sources.


The study will rate each grantee on these five criteria using two data sources. First, we will conduct a systematic examination of extant materials on all grantees, including the grantee applications and progress reports. Second, we will conduct clarifying phone calls with all potential grantees, focusing on the services provided by the programs and the process by which participants are recruited and enrolled. The study will rank the 18 grantees using the five criteria. Formal recruiting of these grantees will then occur by phone, and the study will start the process of tailoring random assignment procedures to fit the grantees' contexts and developing Memoranda of Understanding between the grantees, DOL, and the evaluation team. We expect this process to yield up to 10 suitable grantees for the study.


In the selected grantees, all pre-apprentice participants who meet the program eligibility requirements and consent to be part of the study will be subject to random assignment. According to the ABA grant applications, grantees report targets of between 50 and 900 pre-apprentice participants and a total of 6,900 pre-apprentice participants. Not all grantee participants will be part of the study population to address the specific research questions. Based on actual counts from previous DOL-funded apprenticeship grant programs, the sample size targets of the grantees may be ambitious. Thus, the study conservatively assumes an average of 400 eligible program applicants per grantee who will be subject to random assignment, yielding a respondent universe of 4,000 participants split evenly between the research groups.


The universe and sample size estimates for all instruments summarizing the above narrative are provided in Table B.1.


Table B.1. Summary of universe and sample counts

Evaluation component | Universe description | Estimated size of universe | Expected sample size¹ | Sampling method
ABA Youth Apprenticeship and Pre-Apprenticeship Grantee Survey | All ABA Youth Apprenticeship and Pre-Apprenticeship grantees | 19 | 19 | Universe
ABA State Apprenticeship System Grantee Survey | All ABA State Apprenticeship System grantees | 5 | 5 | Universe
ABA Registered Apprenticeship Hub Grantee Survey | All Registered Apprenticeship Hub grantees | 15 | 15 | Universe
ABA Pre-apprenticeship Participant Survey | Pre-apprentice enrollment for years prior to conducting survey | 3,030 | 1,000 | Random sample
ABA Apprenticeship Survey | Apprenticeship enrollment without prior pre-apprenticeship for years prior to conducting survey | 3,030 | 1,000 | Random sample
ABA Participant Focus Group Protocol | Pre-apprentices and apprentices in grantees selected for grant visits | 4,075 | 81 | Purposive
ABA Youth Apprenticeship and Pre-Apprenticeship Grantee Staff Interview Protocol | All grantee staff for 12 selected ABA Youth Apprenticeship and Pre-Apprenticeship grants for visits | 60 | 48 | Purposive
ABA Youth Apprenticeship and Pre-Apprenticeship Partner/Employer Interview Protocol | All partners and employer partners for 12 selected ABA Youth Apprenticeship and Pre-Apprenticeship grants for visits | 60 | 36² | Purposive
ABA State Apprenticeship System Grantee Staff/Partner Interview Protocol | All grantee staff and partners for 5 ABA State Apprenticeship System grants | 45 | 30³ | Purposive
ABA State Apprenticeship System Employer Interview Protocol | All grantee employer partners for 5 ABA State Apprenticeship System grants | 25 | 10 | Purposive
ABA Registered Apprenticeship Hub Grantee Staff Interview Protocol | All grantee staff for 5 selected ABA Registered Apprenticeship Hub grants for visits | 25 | 20 | Purposive
ABA Registered Apprenticeship Hub Partner Interview Protocol | All partners for 5 selected ABA Registered Apprenticeship Hub grants for visits | 30 | 20 | Purposive
ABA Registered Apprenticeship Hub Customer Interview Protocol | All customers for 5 selected ABA Registered Apprenticeship Hub grants for visits | 30 | 15 | Purposive
ABA Impact Evaluation Baseline Survey – participants | All eligible applicants for pre-apprenticeships within selected grantees | 4,000 | 4,000 | Universe of all eligible applicants

¹ The number of respondents in Table A.2 in the companion Supporting Statement Part A is annualized over three years of collection. The expected sample size column in this table provides the total sample size over the three years. The two numbers may not match due to rounding in Table A.2.

² Assumes interviews with 4 program staff and 3 partners (including employers) per grantee selected.

³ Assumes interviews with 4 grantee staff and 2 partners (other than employers) per grant.




  b. Response rates

This section discusses the response rates expected for each data collection activity, by instrument.

Grantee survey (instruments 1, 2, and 3). We expect 100% response rates to these surveys. As a condition of grant award, ABA grantees are required to participate in the evaluation.1 We have achieved 100% grantee survey response rates in prior DOL grant demonstration evaluations.2

Participant Apprentice and Pre-Apprentice surveys (instruments 4 and 5). For the pre-apprentice and apprentice surveys, we assume a 33 percent response rate. This is similar to prior apprenticeship participant surveys.3

Participant focus group protocol (instrument 6). For the focus groups, we expect recruitment of 12 participants per focus group will yield 6 to 9 pre-apprentices or apprentices per focus group. This is an expected response rate of 50 to 75 percent. This estimate is based on similar results for focus groups of apprentices conducted for the Implementation Study of the Scaling Apprenticeship and Closing the Skills Gap grants evaluation that we are currently conducting for DOL.4

Grantee staff, partner, employer and customer interview protocols (instruments 7-13). These interviewees are purposively selected and we do not expect them to be representative of the universe of grantee staff, partners, employers, or customers. We anticipate we will complete interviews with the type and number of individuals laid out in Section B.1. for the selected grants. The expected response rate is 100 percent for grantee staff and partners as participation in evaluation activities is a required condition of the grant award. The expected response rate for employer partners is 80 percent. This assumption is based on prior experience in similar studies that were able to conduct similar interviews for selected grantees.5 We estimate this same response rate, 80 percent, for customer interviews.



Impact Evaluation Participant consent and Baseline Survey (instrument 14). Applicants eligible for study participation will only be enrolled in the study and randomly assigned if they complete the baseline survey and provide their identifying information as part of the intake process. The project team therefore anticipates that the response rate to this baseline survey will be 100 percent of study participants.



B.2. Procedures for the collection of information

Data for the study will be collected through online surveys, semi-structured interviews, focus groups and phone interviews, and random assignment at intake for a potential impact study.

Each instrument will be a one-time data collection; no respondent will be asked to respond to a given instrument more than once. The different data collection activities are spread out over the course of the evaluation. The grantee surveys (instruments 1-3) will be collected in Spring 2024, the participant pre-apprenticeship and apprenticeship surveys (instruments 4 and 5) will be conducted in Summer/Fall 2024, the grant visit focus groups and interviews (instruments 6-13) will take place in Spring 2025, and the baseline survey collection will begin in late Spring 2024.

Procedure for Grantee surveys (instruments 1-3) and Participant Pre-apprenticeship and Apprenticeship Surveys (instruments 4 and 5)

The grantee and participant surveys will be programmed and administered using Qualtrics. This survey software offers a user interface that is modern, secure, and easy to navigate for respondents. The software will also facilitate generation of tabulations of responses as surveys are completed by respondents and processed. The surveys will be hosted online via a secure web link. To reduce respondent burden, they will employ the following: (1) secure log-ins and passwords so respondents can save and complete the survey in multiple sessions; (2) drop-down response categories so respondents can quickly select from a list; (3) dynamic questions and automated skip patterns so respondents only see those questions that apply to them (including those based on answers provided previously in the survey); and (4) logical rules for responses so respondents' answers are restricted to those intended by the question.

For the grantee surveys, a pre-survey email will be sent by DOL to all grantees announcing the survey, generally describing the importance of this collection and the content of the survey.

For the participant surveys, the grantee or subgrantee that the participants are enrolled with will send a pre-survey email to selected apprentices and pre-apprentices that describes the study, the survey contents, and the importance of their participation. It will make clear that any information shared will be kept private, will not be shared with the grantee or DOL, and will only be reported aggregated with other responses. The study team will send a similar email containing the link to the survey. The evaluation team will monitor survey completion and send reminder emails accordingly. We will send at least two reminder emails: the first two weeks before the survey close date and the second one week before. Within the survey there will be a description of privacy protections and a place to indicate consent to continue.

After the survey response is completed, the study team will share the email address of the participant in a secure manner with the Urban Institute operations manager so that each participant can be emailed a $25 gift card.

The grantee survey instruments are provided in Attachments 1-3, and the participant surveys are Attachments 4-5.

Nonresponse bias analysis

We anticipate the grantee surveys will be collected for the universe of grantees.

For the participant surveys, before analysis, the evaluator will use DOL's Workforce Integrated Performance System (WIPS) data to test for differences between survey respondents and nonrespondents in their demographic characteristics (sex, age, race/ethnicity), apprenticing occupation, region, months since their apprenticeship began, apprenticeship completion status, entry wage, and the grantee whose program they are affiliated with. Grantees are required to enter data for all enrollees into WIPS. To correct for nonresponse bias, the evaluator will estimate and apply nonresponse weights: we will first predict probabilities of response as a function of the characteristics observed for all apprentices, then calculate the inverse of these estimated probabilities and weight each observation by this amount using standard weighting routines in statistical software. The weights will be calibrated to reflect the composition of all apprentices.
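To make the weighting step concrete, the sketch below illustrates the inverse-probability approach described above in Python with statsmodels; the data frame, variable names, and covariates are hypothetical stand-ins for the WIPS-based sample frame, not the study's actual specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000
# Hypothetical stand-in for the WIPS sample frame; the real frame would include
# demographics, occupation, region, months enrolled, completion status, and entry wage
frame = pd.DataFrame({
    "age": rng.integers(18, 55, n),
    "female": rng.integers(0, 2, n),
    "months_enrolled": rng.integers(1, 24, n),
    "responded": rng.integers(0, 2, n),  # 1 = completed the participant survey
})

# Step 1: model the probability of response as a function of observed characteristics
X = sm.add_constant(frame[["age", "female", "months_enrolled"]].astype(float))
response_model = sm.Logit(frame["responded"], X).fit(disp=False)

# Step 2: weight each respondent by the inverse of their predicted response probability
frame["p_respond"] = response_model.predict(X)
respondents = frame[frame["responded"] == 1].copy()
respondents["nr_weight"] = 1.0 / respondents["p_respond"]

# Step 3 (not shown): calibrate the weights so weighted totals match the full sample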

To address any item nonresponse, we will first use logical imputation, or imputation based on existing knowledge, wherever feasible. Where that is not possible, we will fill in missing survey data elements using multiple imputation routines available in standard statistical software, such as Stata's mi command. Such imputation uses statistical relationships among items, estimated from sample members for whom the items are not missing, to estimate values for sample members who are missing some items but not others.
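As an illustration of the multiple-imputation step, the sketch below uses a chained-equations routine in Python's statsmodels (MICEData) on a toy numeric analysis file; the variable names and the choice of five completed data sets are assumptions for illustration, and the study could equally use Stata's mi command as noted above.

import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
# Toy analysis file with item nonresponse in two hypothetical survey variables
df = pd.DataFrame({
    "hours_training": rng.normal(30, 5, 500),
    "hourly_wage": rng.normal(20, 4, 500),
    "satisfaction": rng.normal(4, 1, 500),
})
df.loc[rng.choice(500, 60, replace=False), "hourly_wage"] = np.nan
df.loc[rng.choice(500, 40, replace=False), "satisfaction"] = np.nan

imp = MICEData(df)  # chained equations; predictive mean matching by default
completed = [imp.next_sample().copy() for _ in range(5)]  # five completed data sets
# Each completed data set is analyzed separately and results are combined (Rubin's rules)
print(completed[0].isna().sum().sum())  # 0 missing values remain after imputation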

The combination of nonresponse weighting and multiple imputation will aim to enhance the accuracy of outcomes derived from the Participant Survey. Because the Participant Survey will be used to measure outcomes, not impacts, there will be no calculation of minimum detectable effects.



Procedure for participant focus groups (instrument 6)

The implementation study team for the State Apprenticeship System study and the Youth Apprenticeship and Pre-Apprenticeship study will aim to conduct a focus group with apprentices or pre-apprentices in each of the five State Apprenticeship System grantees and four of the Youth Apprenticeship and Pre-Apprenticeship grantees to capture participants' experiences learning about and participating in grant-sponsored apprenticeship or pre-apprenticeship activities. Because furthering equity in apprenticeships is a goal of the grants, the team will work with the grantees to prioritize speaking with members of underrepresented or underserved communities. The Youth Apprenticeship and Pre-Apprenticeship study team will determine, as a part of implementation visit planning, which grants and programs will be able to participate in a focus group. The study team will focus on participants in programs associated with interviewed employers that have large enough cohorts to support a focus group of 6 to 8 attendees.

Once grantees and programs have been identified, the two-person study team assigned to each grantee will provide text for an email introduction to the focus group and an invitation for grant participants to participate in the focus group. Participants who are interested in the focus group will be asked to email the site visit team, who will provide a link for the virtual focus group. Participants will be asked to consent before the beginning of the focus group. We will need to overrecruit at least 20 apprentices to ensure 6-8 participants attend each focus group session.

After the focus group, the study team will share the email address of each participant in a secure manner with the Urban Institute operations manager so that each participant can be emailed a $50 gift card.

We will determine one month prior to data collection whether the interviews will be conducted virtually or in-person, in close collaboration with DOL and the ABA grantees.

The discussion guide for the focus groups is provided in Attachment 6.

Focus group participant characteristics

We will compare aggregate basic demographic characteristics (self-reported sex, age, race) and the occupation of the pre-apprenticeship or apprenticeship reported by focus group participants to the same data categories for all pre-apprentices and apprentices in the selected ABA Youth Apprenticeship and Pre-Apprenticeship grantees and in all ABA Youth Apprenticeship and Pre-Apprenticeship grantees, using the WIPS data. Because the focus group results are not intended to be representative, this information will not be used for weighting results, but to give readers information on the focus group sample relative to the universe on these characteristics.

Procedure for grantee staff, partner, employer, and customer interviews (instruments 7-13)

The study will conduct semi-structured interviews with key grantee staff and partners, including employers, in all three implementation studies for the grantees selected, as well as technical assistance customers in the Registered Apprenticeship Hub grants study as part of site visits.

Before the site visits, a DOL representative will send an email notifying all selected grantees (see section B.1) that they have been selected for site visits as part of the evaluation. Once grantees have been notified, the two-person teams assigned to each grantee will send a follow-up (introductory) email and then call the grantee contact person(s) to identify which grantee and partner administrators/staff will participate in the interviews and to begin the process of scheduling the visit. The site visit teams will work with both grantee and partner organizations (including at least one employer) directly on scheduling.

We will determine one month prior to data collection whether the interviews will be conducted virtually or in-person, in close collaboration with DOL and the ABA grantees.

The interview discussion guides are provided in Attachments 7-13. These discussion guides include all the questions that could be asked of grantee and partner staff. They are designed to be thoughtfully tailored to each interview respondent to align with the grantee's structure and approach to ABA grant implementation. Not all questions in these protocols will be asked of every interviewee. Because these data collection activities are not intended to be representative of the universe, and because of the semi-structured nature of these interviews, we will not fill in missing data.

Implementation Study Analyses

The data collection activities from instruments 1-13 will generate a large volume of data that the study team will analyze to answer the research questions of interest for the implementation studies, outlined in Part A, Table A.1. We anticipate two analytical tasks: (1) a descriptive analysis of the study surveys and (2) a thematic analysis of information collected during the site visits.

The descriptive analysis will primarily use the grantee surveys (instruments 1 – 3) to provide a comprehensive picture of the components, models, partnerships, and strategies implemented by the grantees. It will use data collected for each study to create separate analysis files. The study team will first develop descriptive univariate tabulations of the grantee survey data. They will then produce selected cross-tabulations, especially to look at variation across industries, target populations, and program models. The descriptive analysis will include tables, charts, and graphs to illustrate key findings, and the team will provide survey data tables of the grants as appendices to the final reports.

For the Youth Apprenticeship and Pre-apprenticeship Study, we will also use the data collected from the participant surveys (instruments 4 and 5) in the descriptive analyses. These survey questions are mostly multiple choice and closed-ended, but there are some open-ended questions to give respondents a chance to provide additional context to their answers. In addition, the questions provide respondents with the option to choose "other" and add a response so the team can capture the full range of activities implemented by grantees. The team will clean and finalize the raw data to prepare for the analysis: they will clean and code variables to prepare the analysis file and will also prepare documentation and a codebook. Finally, they will tabulate responses to each survey question (i.e., absolute and relative frequency) to examine basic statistics such as the mean, median, minimum, maximum, and frequencies, depending on the question type. They will then produce selected cross-tabulations, especially to look at variation in participant experiences and perceptions across industries, target populations, and program models. The descriptive analysis will include tables, charts, and graphs to illustrate key findings, and the team will provide survey data tables for the grants as appendices to the final reports.

The implementation study team will use the interview and focus group data collected (instruments 6 – 13) to conduct a thematic analysis of the qualitative data collected during the site visits. The purpose of these analyses is to distill lessons from the grantees’ implementation approaches, models, partnerships, and strategies to expand apprenticeship as part of the grants.

To ensure all site visitors organize and collect information in a systematic way, the implementation study team will develop a uniform site visit summary template for each study in Word that will be used for all respondents interviewed for that study. Summarizing the information by topic will be the first step in the analysis process. The summary template will summarize all the key topics of interest for the implementation study. Within each topic, it will summarize the information collected from all staff and partners that participate in the site visit while making note of any discrepancies or inconsistencies across respondents. The team will align the summary template and its topics with the detailed research questions discussed earlier.

The analysis approach will use an "applied thematic analysis" to identify and summarize emerging themes within and across grantees and code the themes accordingly. Implementation study team members will hold coding and analysis meetings to discuss emerging themes, align codes, ensure validity across team members, and enhance the quality of the analysis. The summary information compiled will facilitate cross-grantee analysis in key areas of interest that can inform the implementation report and project briefs.

For the Youth Apprenticeship and Pre-Apprenticeship study and the State Apprenticeship System study, the implementation study team will use a similar approach to analyze the data collected during the focus groups with apprentices and pre-apprentices (instrument 6). The team will develop a uniform focus group summary template in Excel to use for each grantee focus group. The summary template will summarize all the key topics of interest for the focus groups, including how apprentices were enrolled in the apprenticeship program, their reasons for participating, ways in which the apprenticeship program helped them increase their responsibilities and wages, and overall thoughts and reflections about their experiences. Within each topic, the study team will summarize the information collected from all focus group participants while noting any discrepancies or inconsistencies across respondents. The information compiled from each focus group will be detailed enough to create comprehensive call-out boxes in the report that describe the experiences of focus group participants without identifying respondents. Findings from the focus groups will also be interwoven into other sections of the report and project briefs as applicable, which may include important trends that arise during cross-grantee analysis of the focus group data.



Procedure for baseline data collection from participants in Impact Study (instrument 14)

The evaluation team anticipates starting participant intake, randomization, and baseline data collection in late Spring 2024. Grantee staff will use RAPTER® to conduct participant intake. RAPTER® is a secure, web-based system that program staff will use to administer consent to participants, collect their identifying and contact information, and conduct random assignment of study participants. Participants will complete the 15-minute baseline survey online through the RAPTER® interface, or program staff will enter baseline survey information on their behalf. The evaluation team will program RAPTER® to conduct random assignment within strata to ensure key population subgroups (such as special populations) are balanced across the research conditions and to improve the precision of the impact estimates.
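The following Python sketch illustrates the general logic of stratified random assignment with pre-specified assignment sequences; it is not the RAPTER® system itself, and the strata, block size, and seeds shown are purely illustrative assumptions.

import random

def assignment_string(seed, block_count=25):
    """Pre-specified sequence of assignments for one stratum at one intake location,
    built from permuted blocks of four so the groups stay balanced as intake proceeds."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(block_count):
        block = ["treatment", "treatment", "control", "control"]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# One sequence per stratum (illustrative strata: intake location by age group); each
# consenting participant who completes the baseline survey gets the next unused assignment
strings = {
    ("site_A", "youth"): assignment_string(101),
    ("site_A", "adult"): assignment_string(102),
}
print(strings[("site_A", "youth")][:8])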


a. Estimation procedures


With an experimental design, unbiased impact estimates can be obtained by comparing differences between the mean outcomes of the contrasted research groups. By using regression procedures that control for highly predictive covariates, however, the study will improve the precision of estimates and adjust for small baseline differences between groups that may arise by chance or from survey nonresponse or missing administrative records data. The study will estimate impacts not only for the full sample, but also for important subgroups defined by participant and program characteristics from the grant application and baseline survey. The analysis will be conducted using the RCT-YES software program (www.rct-yes.com) that uses state-of-the-art design-based impact estimators derived from the building blocks of experiments with minimal assumptions, and can estimate impacts for continuous, binary, and discrete outcomes.


Assessing baseline equivalence. Using data from the program application and baseline surveys, the study will conduct t-tests on each baseline measure in isolation to examine differences between the research groups due to random sampling. We will also conduct a joint F-test to assess the joint significance of the baseline differences. The analysis will control for baseline characteristics, correlated with the outcomes, to improve the precision of the estimates.
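A minimal sketch of these balance checks in Python is shown below; the data and variable names are hypothetical, and the joint test is implemented as the overall F-statistic from a regression of treatment status on the baseline measures.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(2)
frame = pd.DataFrame({
    "treatment": rng.integers(0, 2, 1000),
    "age": rng.normal(24, 5, 1000),
    "prior_earnings": rng.normal(4000, 1500, 1000),
})

# Per-measure t-tests of treatment-control differences at baseline
for col in ["age", "prior_earnings"]:
    t_stat, p_val = stats.ttest_ind(frame.loc[frame.treatment == 1, col],
                                    frame.loc[frame.treatment == 0, col])
    print(col, round(p_val, 3))

# Joint test: regress treatment status on all baseline measures and examine the
# regression's overall F-statistic and p-value
X = sm.add_constant(frame[["age", "prior_earnings"]])
joint = sm.OLS(frame["treatment"], X).fit()
print(round(joint.fvalue, 2), round(joint.f_pvalue, 3))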


Estimating impacts for the full sample. Assuming 10 grantees, the benchmark model will be a regression in which an impact is calculated for each site, adjusted for participants' baseline demographic characteristics from the program application and baseline information forms:

$y_{i} = \sum_{k} \alpha_{k} S_{ik} + \sum_{k} d_{k} (T_{i} \times S_{ik}) + X_{i}'\beta + \varepsilon_{i} \qquad (1)$

where $y_{i}$ is the outcome of worker $i$; $S_{ik}$ equals 1 for a worker in site $k$ and 0 otherwise; $T_{i}$ equals 1 for treatment group members offered program services and 0 for controls; $X_{i}$ are baseline characteristics; $\varepsilon_{i}$ is the error term; and the $\alpha_{k}$, $d_{k}$, and $\beta$ are parameters to be estimated. We will select the baseline covariates that are correlated with the outcomes using Least Absolute Shrinkage and Selection Operator (lasso) procedures (Tibshirani, 1996; Hastie et al., 2009) that avoid model overfitting.
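To illustrate the covariate-selection step, the sketch below applies scikit-learn's cross-validated lasso to simulated data; the number of candidate covariates, the outcome, and the tuning choices are assumptions for illustration, not the study's specification.

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n, p = 2000, 30
X = rng.normal(size=(n, p))  # candidate baseline covariates (simulated)
y = 500 * X[:, 0] + 250 * X[:, 1] + rng.normal(scale=3000, size=n)  # e.g., quarterly earnings

# Cross-validated lasso shrinks uninformative coefficients to zero
lasso = LassoCV(cv=10, random_state=0).fit(StandardScaler().fit_transform(X), y)
kept = np.flatnonzero(lasso.coef_)  # indices of covariates retained for Equation (1)
print(kept)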


The average impact of the tested intervention across grant programs is the average of the grantee-level impacts, $\bar{d}$. The study will assess differences in impacts across grantees using a joint F-test of the grantee-level impacts ($d_{k}$) and by comparing them to each other. The study will also explore the extent to which the results are sensitive to different weighting schemes where, for example, each sample member is weighted equally or each grantee is weighted according to the size of its program-eligible population. All weighting schemes are valid approaches but will provide slightly different estimates if grantees are of different sizes and have heterogeneous impacts. The study will account for missing data on baseline covariates using multiple imputation procedures with chained equations and predictive mean matching.


The study will interpret the impact estimates by conducting both classical significance testing and a Bayesian approach, where the study will report the probability that the intervention had positive effects given our findings (a Bayesian posterior probability). The Bayesian approach reduces the chance of misinterpreting p-values and statistical significance findings while providing credible, understandable assessments of program effectiveness.
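As a simple illustration of the Bayesian summary, the sketch below computes the posterior probability of a positive effect under a normal prior and normal likelihood; the prior parameters and the example estimate are hypothetical and would be specified in the analysis plan.

from math import sqrt
from scipy.stats import norm

def prob_positive_effect(estimate, std_error, prior_mean=0.0, prior_sd=1000.0):
    """Posterior probability that the true effect is positive (normal prior x normal likelihood)."""
    post_var = 1.0 / (1.0 / prior_sd ** 2 + 1.0 / std_error ** 2)
    post_mean = post_var * (prior_mean / prior_sd ** 2 + estimate / std_error ** 2)
    return 1.0 - norm.cdf(0.0, loc=post_mean, scale=sqrt(post_var))

# Hypothetical quarterly-earnings impact estimate of $300 with a standard error of $140
print(round(prob_positive_effect(300.0, 140.0), 3))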


Estimating impacts for subgroups. These same analytic methods for the full sample can be used to obtain impact estimates for two types of subgroups to address the question of whether access to grantee services is more effective for some subgroups than others. First, the study will estimate impacts for subgroups defined by worker characteristics (for example, age, prior employment experiences, special populations) defined from the program application and baseline surveys. Second, the study will estimate impacts for subgroups defined by key program features obtained from the implementation analysis (separate ICR).


Impacts for subgroups will be estimated using a straightforward modification to Equation (1), in which the model includes terms formed by interacting subgroup indicators with the treatment status indicator, and F-tests will be used to assess whether differences in impacts across subgroup levels are statistically significant.


Assessing and correcting for grantee nonparticipation. As part of the recruitment process, the study will collect data on key grantee characteristics and compare the characteristics of the selected grantees that agree to participate to those that do not. This information will be used to help interpret the analysis findings. However, because the study will not randomly select grantees, but rather, purposively select them based on their suitability for the study, there is not a well-defined universe of grantees to which the study grantees will generalize. Thus, our benchmark approach will not adjust the impact estimates for grantee nonparticipation, because external validity is not a well-defined concept for this evaluation. However, the study will re-weight the data for sensitivity analyses.


Adjusting for no-shows and crossovers. In any experiment in the real world, some members of the treatment group may not receive intervention services (no-shows), and some controls may be exposed to the interventions (crossovers). To correct for these sample members, the study will use an instrumental variable approach, replacing the treatment assignment indicator $T_{i}$ in the models above with an indicator $R_{i}$ that equals 1 for those who received intervention services and 0 for those who did not, and using $T_{i}$ as an instrument for $R_{i}$.
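The sketch below illustrates the logic of this adjustment with hypothetical numbers: under standard instrumental variable assumptions, the impact of actually receiving services equals the intent-to-treat impact divided by the difference in service-receipt rates between the research groups, which is what a two-stage least squares regression of the outcome on receipt, instrumented by assignment, produces.

# All values below are hypothetical and for illustration only
itt_impact = 220.0           # intent-to-treat impact on quarterly earnings, dollars
receipt_treatment = 0.85     # share of the treatment group that received services (15% no-shows)
receipt_control = 0.05       # share of the control group that crossed over

impact_on_recipients = itt_impact / (receipt_treatment - receipt_control)
print(round(impact_on_recipients))  # 275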


b. Statistical Power


To adequately address the evaluation's research questions, the design must have sufficient statistical power to detect impacts that are policy relevant and of practical significance. The sample sizes needed for the study were determined by focusing on minimum detectable impacts (MDIs) for the primary outcome of quarterly earnings, but the study also presents MDIs for completion of an apprenticeship program, a key proximal (mediating) outcome. Enrolling 4,000 participants (split evenly between the treatment and control groups) in an RCT would enable us to detect MDIs of $275 on quarterly earnings and 4.4 percentage points on apprenticeship program completion (Table B.2). These MDIs are smaller than the gains from participation in apprenticeship programs found in other studies. For example, Reed et al. (2012) found that participating in Registered Apprenticeship was associated with a gain of $6,595 in annual earnings ($1,649 in quarterly earnings) compared to the earnings of nonparticipants. This $1,649 earnings gain is also larger than our calculated $779 MDI for a 13 percent subgroup analysis based on 500 participants. For a design comparing an enhanced-service treatment group to a business-as-usual treatment group, it is expected that the study will have sufficient power to detect likely program effects if the enhanced services are intensive (for example, providing intensive case management and supportive services).


Table B.2. Minimum detectable impacts on key outcomes for an RCT

Sample size (treatment and control combined) | Quarterly earnings (impact, dollars) | Apprenticeship program completion (impact, percentage points)
100 | 1,755 | 28.3
500 | 779 | 12.6
1,000 | 550 | 8.9
4,000 | 275 | 4.4

Notes: Calculations above were made using Microsoft Excel tables. Assumptions made include: individuals are randomly assigned; equal assignment probabilities to treatment and control; 50% control group mean for completion; $3,102 standard deviation of earnings; covariates explain 20% of the variation in outcomes; attrition of 20% in survey data; alpha level 0.05, two-sided test; 80% power. The MDIs are calculated using the following formula: $MDI = (t_{1-\alpha/2} + t_{1-\beta}) \, \sigma \sqrt{1 - R^{2}} \sqrt{1/n_{T} + 1/n_{C}}$, where $\alpha$ is the significance level (0.05), $1-\beta$ is the power (80%), $t_{q}$ is the inverse of the Student's t distribution function evaluated at $q$ with degrees of freedom equal to the sample size (after accounting for attrition) minus 2, $\sigma$ is the standard deviation of the outcome, $R^{2}$ is the proportion of variation in the outcome explained by covariates (20%), and $n_{T}$ and $n_{C}$ refer to the sample size in the treatment and control group, respectively, after accounting for attrition.
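The short Python sketch below reproduces the Table B.2 calculations from the formula and assumptions in the notes; it is provided only to make the arithmetic transparent (the study's own calculations were made in Excel).

from math import sqrt
from scipy.stats import t

def mdi(n_total, sigma, r2=0.20, attrition=0.20, alpha=0.05, power=0.80):
    """Minimum detectable impact for a two-group RCT under the Table B.2 assumptions."""
    n = n_total * (1 - attrition)  # analysis sample after attrition
    n_t = n_c = n / 2              # equal assignment probabilities
    df = n - 2
    factor = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    return factor * sigma * sqrt(1 - r2) * sqrt(1 / n_t + 1 / n_c)

print(round(mdi(4000, 3102)))          # about 275 dollars of quarterly earnings
print(round(mdi(4000, 0.5) * 100, 1))  # about 4.4 percentage points (completion, sd = 0.5)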


c. Statistical methodology for sample selection


All participants who meet the program eligibility requirements and consent to be part of the study will be subject to random assignment. Stratified random assignment will be conducted online using RAPTER® with pre-specified random assignment strings, developed separately for each sample intake location. Strata will be formed using information from the baseline survey and program application forms to ensure the research groups are balanced along key dimensions such as age, special populations, and those targeted for specific occupations.

B.3. Methods to maximize response rates and minimize nonresponse

For the grantee and participant respondents to the surveys and for new study enrollees responding to the online baseline survey, the study team will make use of survey methods and best practices to encourage high response rates while minimizing burden and non-response. These methods include:


Web administration. We will administer these surveys online. We have previously administered web surveys successfully to grantees and to participants.6 Online administration allows respondents to complete the survey on their own schedule and pace, as well as over multiple sessions. The web survey system used by the data collection team also supports mobile browsers, such as those on tablets or cellular phones.


Multiple modes of administration. To comply with Section 508 of the Rehabilitation Act, participants who may have difficulty completing a web survey will be offered the option of completing the surveys by telephone.


Technology to reduce burden. To reduce burden, the surveys will employ drop-down response categories so respondents can quickly select from a list, dynamic questions and automated skip patterns so respondents only see those questions that apply to them (including those based on answers provided previously in the survey), and logical rules for responses so respondents’ answers are restricted to those intended by the question. These features should minimize data entry burden by participants and facilitate high quality responses.


Tested questionnaire. The study team pilot tested the grantee surveys with 3 grantees (one from each study category) to ensure that the instrument is clearly written and understandable to participants, offers participants a complete and understandable listing of response categories for each closed-ended question, and to test initial time estimates for completion. The surveys are adapted from surveys that have been approved by OMB and used in prior studies, including grantee surveys for the Scaling Apprenticeship and Closing the Skills Gap grant evaluation, the Youth Apprenticeship Readiness Grant evaluation, and the American Apprenticeship Initiative (AAI) evaluation. The participant surveys are adapted from an existing apprenticeship survey that has been successfully fielded for the AAI evaluation. The Information Collection Review for the AAI Apprentice Survey is available at: https://www.reginfo.gov/public/do/PRAViewICR?ref_nbr=201903-1290-003. The OMB Control Number is 1290-0028.

For the interviews, we will send pre-interview emails (or conduct phone conversations) that outline the study objectives, how results will inform the field, and the importance of respondents’ contributions. We expect that all grantee staff will agree to participate. We will then work with the grantees to identify partner staff and customers to interview. We will provide these respondents with similar information.


Data from completed baseline surveys will be reviewed throughout the fielding period for accuracy and consistency. Participants who do not complete the baseline survey will be excluded from the analysis.



B.4. Tests of procedures or methods to be undertaken


All data collection procedures and instruments included in this request to be used in the evaluation have been reviewed by content and methodological experts to ensure clarity and optimal ordering of the questions.


The instruments to be used are based closely on prior surveys that have been extensively tested to evaluate the clarity of the questions, to identify possible modifications to question wording or order that could improve the quality of the data, and to estimate respondent burden (B.I.3). Likewise, the procedures used to collect the data are based closely on procedures used successfully for similar surveys and interviews, which ensures that they can be used effectively for this study's data collection.


B.5. Individuals consulted on statistical aspects of design and on collecting and/or analyzing data

Staff responsible for overseeing the collection and analysis of data are listed in Table B.3 and individuals consulting on the efforts are listed in Table B.4.


Table B.3 Individuals overseeing the collection and analysis of data for the Evaluating Registered Apprenticeship Initiative Study

Organization | Name | Role
The Urban Institute | Demetra Nightingale | Co-Principal Investigator
The Urban Institute | Daniel Kuehn | Pre-Apprenticeship and Youth Apprenticeship Grants Study Director and Impact Evaluability Co-director
The Urban Institute | Lauren Eyster | Registered Apprenticeship Hub Grants Study Director
Mathematica | Linda Rosenberg | Co-Principal Investigator and State Grants Study Director
Mathematica | Jonah Deutsch | Impact Evaluability Co-director




Table B.4 Individuals consulting on the collection and analysis of data for the Evaluating Registered Apprenticeship Initiative Study

Organization | Name | Role or affiliation
The Urban Institute | Demetra Nightingale | Co-Principal Investigator
The Urban Institute | Daniel Kuehn | Pre-Apprenticeship and Youth Apprenticeship Grants Study Director and Impact Evaluability Co-director
The Urban Institute | Lauren Eyster | Registered Apprenticeship Hub Grants Study Director
Mathematica | Linda Rosenberg | Co-Principal Investigator and State Grants Study Director
Mathematica | Jonah Deutsch | Impact Evaluability Co-director
Social Policy Research Associates | Leela Heeber |
Social Policy Research Associates | Kristin Wolfe |
Technical Work Group | Alex Camardelle | Vice President of Policy and Research, Atlanta Wealth Building Initiative
Technical Work Group | Carolyn J. Heinrich | Professor of Public Policy and Education, Vanderbilt University
Technical Work Group | Kevin Hollenbeck | Consultant, Former VP, W.E. Upjohn Institute for Employment Research
Technical Work Group | Maura Kelly | Professor, Sociology Department, Portland State University
Technical Work Group | Christopher Maclarion | Director of Apprenticeship and Training, Maryland Department of Labor
Technical Work Group | Lul Tesfai | Director of Program Development, Irvine Foundation



1 The original grant announcement states that ABA grantees “are required to participate in an evaluation, if undertaken by DOL. … We may require applicants to collect data elements to aid the evaluation. As a part of the evaluation, as a condition of award, grantees must agree to: (1) make records available to the evaluation contractor on participants, employers, and funding; (2) provide access to program operating personnel, participants, and operational and financial records, and any other relevant documents to calculate program costs and benefits; and (3) in the case of an impact analysis, facilitate the assignment by lottery of participants to program services, including the possible increased recruitment of potential participants; and (4) follow evaluation procedures as specified by the evaluation contractor under the direction of DOL.” Notice of Availability of Funds and Funding Opportunity Announcement for: Apprenticeship Building America (ABA) Grant Program, March 2022, page 62. https://www.dol.gov/sites/dolgov/files/ETA/grants/pdfs/ABA_FOA-ETA-22-06.pdf.



2 For example, see the following report, which included a 100% response rate to the TAACCCT Round 3 grantee survey: Eyster, Lauren, Kelly S. Mikelson, Carol Hafford, John Trutko, Christin Durham, Carolyn T. O'Brien, Ananda Martin-Caughey, Amanda Briggs, Alex Trutko, and Kim Nguyen. Implementation of the Round 3 Trade Adjustment Assistance Community College and Career Training Grants, Urban Institute, 2020. https://www.dol.gov/sites/dolgov/files/OASP/evaluation/pdf/ETA_Round3TAACCCTImplementation_Report_Sep2020.pdf



3 See response rates for the American Apprenticeship Initiative participant survey, Walton, Douglas, Karen N. Gardiner, and Burt Barnow. 2022. Expanding Apprenticeship to New Sectors and Populations: The Experiences and Outcomes of Apprentices in the American Apprenticeship Initiative. Prepared for the U.S. Department of Labor, Employment and Training Administration. Rockville, MD: Abt Associates.

4 A summary of this research is provided at https://www.dol.gov/agencies/oasp/evaluation/completedstudies/Apprenticeship-Evidence-Building-Portfolio.

5 Copson, Elizabeth, Tresa Kappil, Karen Gardiner, Andrew Clarkwest, Hannah Engle, Alex Trutko, John Trutko, Asaph Glosser, Riley Webster, Daniel Kuehn, Robert Lerman, Jessica Shakesprere. 2021. Implementing Registered Apprenticeship Programs: Experiences of 10 American Apprenticeship Initiative Grantees. Report prepared for the U.S. Department of Labor, Employment and Training Administration. Rockville, MD: Abt Associates.

6 For prior experience conducting grantee surveys online, see Eyster, Lauren, Kelly S. Mikelson, Carol Hafford, John Trutko, Christin Durham, Carolyn T. O'Brien, Ananda Martin-Caughey, Amanda Briggs, Alex Trutko, and Kim Nguyen. Implementation of the Round 3 Trade Adjustment Assistance Community College and Career Training Grants, Urban Institute, 2020, https://www.dol.gov/sites/dolgov/files/OASP/evaluation/pdf/ETA_Round3TAACCCTImplementation_Report_Sep2020.pdf. For similar apprentice participant surveys online, see Walton, Douglas, Karen N. Gardiner, and Burt Barnow. 2022. Expanding Apprenticeship to New Sectors and Populations: The Experiences and Outcomes of Apprentices in the American Apprenticeship Initiative. Prepared for the U.S. Department of Labor, Employment and Training Administration. Rockville, MD: Abt Associates.




