Disability Employment Initiative Evaluation

OMB Control Number: 1230-0006

Paperwork Reduction Act Submission
Supporting Statement Part B for the Office of Disability Employment Policy (ODEP)

2010-2013 Disability Employment Initiative Evaluation

January 20, 2012

Table of Contents

Part B: Collection of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

B.1.1 Site Selection

B.1.2 Respondent Universe and Study Sample Size

B.2 Procedures for the Collection of Information

B.2.1 Power Analysis

B.3 Methods to Maximize Response Rates and Deal with Non-Response

B.3.1 Site Visits

B.3.2 DEI Data System

B.4 Tests of Procedures or Methods to be Undertaken

B.5 Individuals Consulted on Statistical Aspects and/or Analyzing Data

References – Part B


List of Tables

Table B.1: LWIAs Included in the Selection Pool, the Stratum to Which Each Was Assigned, and Random Assignment Outcome

Table B.2: Number of LWIAs Participating in the DEI and Estimated Number of Individuals Participating in the DEI Data Collection

Table B.3: MDE and Margin of Error Estimates for Key Outcomes


Appendices

Appendix 5: System Change Framework

Appendix 6: Solicitations for Grant Application – Rounds One and Two

PART B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS


Because of the DEI’s emphasis on systems change, the unit of analysis for the DEI evaluation is the LWIA, and random assignment must be conducted at the LWIA (or system) level. In traditional experimental studies of employment programs, One-Stop customers are randomly assigned to a treatment group or a control group so that the impact of the program can be estimated by comparing the outcomes of the two groups. The control group provides the counterfactual: because random assignment ensures there are no systematic differences between the two groups of customers that might influence program outcomes, any difference in outcomes can be attributed to the intervention.


The U.S. DOL has a history of conducting traditional experimental evaluations with random assignment taking place at the customer level. For example, the WIA Adult and Dislocated Worker Programs Gold Standard Evaluation, currently being conducted by Mathematica Policy Research and MDRC, is designed to measure three One-Stop service packages (all WIA services, core and intensive services but not training, and core services only) to determine if higher levels of service intensity have an appreciable impact on customer outcomes. Previous experimental studies, such as Schochet’s (2000) evaluation of the National Job Corps program, Miller’s (2005) study of the Center for Employment and Training Replication Sites and Dunham and Wiegand’s (2008) evaluation of the Youth Offender Demonstration Project, were designed to measure the impact of a single intervention where systems level issues were not a major component of the intervention.


The Disability Navigator Program (DNP) is similar to the DEI in that it was designed, in part, to improve the employment outcomes of One-Stop customers with disabilities through the development of system-wide partnerships and resources, as well as direct support to the target population. Although the DNP sought to improve workforce development systems in ways that would increase the utilization of One-Stop services, it did not focus exclusively on systems-level issues; its primary concern was increasing One-Stop utilization by customers with disabilities and improving their employment outcomes. In contrast, the DEI is concerned primarily with systems-level issues and how they influence One-Stop customer outcomes.


In the case of the DEI, random assignment at the LWIA level is being proposed because the program is designed to make improvements in LWIA workforce development systems (WFDSs). A WFDS is composed of a Workforce Investment Board (WIB), One-Stop Career Centers, and their private sector and public agency partners. An inefficient WFDS approaches its work linearly: each entity operates with little information about, or interest in, the activities, goals, and objectives of the other entities in the system. A functional WFDS is characterized by recognized interdependencies among its component parts, shared goals and objectives, and clearly defined priorities; resources are consumed efficiently, and redundancy across the entire system is minimal due to effective systems planning (French & Bell, 1995; Midgley, 2003) (see Appendix 5: DEI Evaluation System Change Framework).


In the DEI evaluation, changes in the workforce development system within each DEI LWIA are hypothesized to have a causal effect on the employment outcomes of customers with disabilities. The DEI evaluation includes an extensive system-level qualitative data collection and analysis component (see Part A) that will produce ordinal-level systems change variables to serve as independent variables in the quantitative analysis of program impact. Given the DEI’s emphasis on systems change, the unit of analysis is the LWIA, and random assignment will be conducted at the LWIA level. In the following section, we discuss in greater detail the proposed sampling methods and statistical modeling techniques planned for the DEI evaluation.




B.1 RESPONDENT UNIVERSE AND SAMPLING METHODS


The DEI is being implemented in selected states using grant funding awarded on a competitive basis. The Department of Labor (DOL) selected states to receive DEI grants based on the funding available and the merits of the DEI activities proposed in the states’ applications. DOL solicited applications from states in two rounds, the first occurring in the summer of 2010, and the second in the summer of 2011. The selection criteria used by DOL were the same in both rounds: the state’s strategic approach; partnership commitments and resources; demonstrated experience providing services targeting people with disabilities; project management; and potential for achieving stated outcomes and sustainability. See the Solicitation for Grant Applications provided in Appendix 6 for the specific evaluation criteria used to rank and select the grantee states. As a result of the application process, nine states were awarded grants in Round 1 and seven states were awarded grants in Round 2.


A primary goal of the DEI evaluation is to identify the impact of the DEI activities on employment-related and other outcomes for people with disabilities. To accomplish this, a clustered randomized selection procedure was used to assign sites within the grantee states to serve as pilot sites, which will implement the DEI services, or as comparison sites, which will continue their normal operations without the additional services. All individuals receiving services at a One-Stop career center located in a pilot site and who self-identify as having a disability will be eligible to receive the DEI services and will become members of the treatment group. Individuals receiving services at a One-Stop career center located in a comparison site and who self-identify as having a disability will receive traditional services and will become members of the comparison group.


B.1.1 Site Selection


As noted above, DOL selected the states that would participate in the DEI based on the funding available and the merits of the grant applications submitted by states. In their applications, states were asked to identify the specific Local Workforce Investment Areas (LWIAs) that would be willing and able to implement the DEI strategies and also participate in the data collection activities associated with the evaluation. Because separate Local Workforce Investment Boards (LWIBs) administer the funding and programs delivered in the One-Stop career centers located in their respective LWIAs, and because many of the DEI strategies are implemented at the LWIA system level, rather than at the individual level, the LWIA is the unit of assignment for the evaluation. LWIAs were selected using a stratified random selection procedure. The process for conducting the random assignment involved the following steps:


  • A list of the LWIAs proposed in each grantee state’s application was obtained. State representatives were informed of the random selection process and were asked to confirm that the LWIAs proposed in their applications understood that they would be randomly assigned to implement the DEI or to act as a comparison site, were willing to participate in the random assignment process and data collection, and were capable of implementing the DEI if selected.


  • Each state grantee was informed that one-half of the LWIAs identified for participation in the DEI would be selected as pilot sites. To a first approximation, a 50/50 split maximizes statistical power. In addition, a 50/50 split facilitated the stratification process undertaken to increase the comparability of pilot and comparison sites with respect to key characteristics (urban/rural status and, in some instances, region of the state), allowing the inclusion of strata with as few as two LWIAs. All strata are contained within a single state because many state-specific factors might affect outcomes. Because the outcomes of interest might also vary substantially by LWIA within a state, LWIAs were stratified within states to improve comparability between pilot and comparison sites and to improve the precision of the impact estimates. In all states participating in the random assignment process, explicit strata were created to group LWIAs judged to have similar characteristics. This stratification was performed based on the characteristics that state representatives believed to be most relevant in determining the similarity of LWIAs within the state. These characteristics included urban/rural status, but sometimes also included geographic location. The stratum descriptor associated with each site in Table B.1 indicates the urban/rural and geographic criteria (if any) that were used to group the sites into strata. The number of people with disabilities at each site was not an explicit stratification criterion, but the urban/rural designation of sites inherently takes this into account, as urban areas are more densely populated and people with disabilities are more likely to reside in urban areas.1 (A minimal sketch of the within-stratum selection step appears after this list.)
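To make the within-stratum selection step concrete, the fragment below sketches how half of the LWIAs in each stratum can be drawn at random, with an extra site assigned to pilot status when a stratum contains an odd number of sites (as occurred in one Virginia stratum). The stratum labels and LWIA names are illustrative placeholders, not the actual study strata, and the evaluation team used its own procedures to perform the draw.

```python
import math
import random

# Illustrative strata: each maps a stratum label to the LWIAs it contains.
# These labels and groupings are placeholders, not the actual study strata.
strata = {
    "State A / urban": ["LWIA 1", "LWIA 2"],
    "State A / rural": ["LWIA 3", "LWIA 4", "LWIA 5"],
}

rng = random.Random(20120120)  # fixed seed so the draw is reproducible

assignments = {}
for stratum, lwias in strata.items():
    # Select half of the LWIAs in each stratum as pilots; with an odd
    # number of sites, one extra site is assigned to pilot status.
    n_pilots = math.ceil(len(lwias) / 2)
    pilots = set(rng.sample(lwias, n_pilots))
    for lwia in lwias:
        assignments[lwia] = "Pilot" if lwia in pilots else "Comparison"

for lwia, status in sorted(assignments.items()):
    print(f"{lwia}: {status}")
```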


Round 1 Sites. Two Round 1 DEI grantee states, Delaware and Alaska, have state-wide workforce investment boards (WIBs) and are not divided into LWIAs. Accordingly, these states did not participate in the random assignment process; the DEI will be implemented state-wide with certainty in these locations. Other specific issues made random assignment problematic for two LWIAs: Chautauqua County in New York, and the Tri-County LWIA in Maine. These LWIAs were assigned to pilot status with certainty. The two sites and two state-wide WIBs that were selected with certainty as pilot sites will be excluded from the impact analysis, but will be included in the data collection effort, and descriptive statistics will be produced for these sites.

In most cases, exactly half of LWIAs in each stratum were randomly selected as pilot sites. One stratum in Virginia contained an odd number of sites, and three out of the five LWIAs were chosen randomly out of that stratum. The results of the random selection process are shown in Table B.1.


Round 2 Sites. One Round 2 DEI grantee state, South Dakota, has a state-wide WIB and is not divided into LWIAs. Accordingly, the state did not participate in the random assignment process; the DEI will be implemented in an area purposefully selected by the state. South Dakota will be excluded from the impact analysis, but will be included in the data collection effort, and descriptive statistics will be produced for this site. A list of each participating LWIA, along with a text description of the stratum to which it was assigned, is in Table B.1.

In most cases, exactly half of LWIAs in each stratum were randomly selected as pilot sites. The following strata contain three LWIAs, two of which were selected as pilots: Ohio’s “Less Urban” stratum, Tennessee’s “Urban/Rural Mix” stratum, Washington’s single statewide stratum, and Wisconsin’s “Small Urban Center” stratum. The results of the random selection process are shown in Table B.1.



TABLE B.1 LWIAs Included in the Selection Pool, the Stratum to Which Each Was Assigned, and Random Assignment Outcome

State/LWIA | Stratum | Random Assignment Outcome

Round 1 States

Alaska* | * | *

Arkansas
Central | Arkansas single stratum | Comparison
Eastern | Arkansas single stratum | Pilot
North Central | Arkansas single stratum | Comparison
Northwest | Arkansas single stratum | Pilot
Southeast | Arkansas single stratum | Pilot
Southwest | Arkansas single stratum | Comparison
West Central | Arkansas single stratum | Pilot
Western | Arkansas single stratum | Comparison

Delaware* | * | *

Illinois
3 (Rockford area) | Rural | Comparison
15 (Peoria area) | Rural | Pilot
8 (Northern Cook County) | Urban | Pilot
9 (Chicago) | Urban | Comparison

Kansas
I (west) | Rural | Comparison
V (southeast) | Rural | Pilot
II (northeast / Topeka) | Urban | Comparison
IV (south / Wichita) | Urban | Pilot

Maine
2 (Tri-County) | Assigned with certainty to pilot | Pilot
1 (Northern Maine) | Maine single stratum | Pilot
4 (Coastal Counties) | Maine single stratum | Comparison

New Jersey
Bergen County | Northern suburbs | Pilot
Greater Raritan | Northern suburbs | Comparison
Cumberland/Salem | Rural | Pilot
Morris/Sussex/Warren | Rural | Comparison
Burlington County | Southern suburbs | Pilot
Gloucester County | Southern suburbs | Comparison
Camden County | Urban | Comparison
Passaic County | Urban | Pilot

New York
Chautauqua County | Assigned with certainty to pilot | Pilot
Capital Region (Albany/Rensselaer/Schenectady) | Central macro area / Albany and suburbs | Pilot
Columbia-Greene Counties | Central macro area / Albany and suburbs | Comparison
Chenango-Delaware-Otsego Counties | Central macro area / rural | Pilot
Fulton-Montgomery-Schoharie | Central macro area / rural | Comparison
Broome-Tioga Counties | Central macro area / small city surrounded by rural | Pilot
Chemung-Schuyler-Steuben Counties | Central macro area / small city surrounded by rural | Comparison
Herkimer-Madison-Oneida | Central macro area / small city surrounded by rural | Comparison
Tompkins County | Central macro area / small city surrounded by rural | Pilot
Erie County | Northwest macro area / Buffalo area | Comparison
Niagara County | Northwest macro area / Buffalo area | Pilot
Monroe County | Northwest macro area / large city | Comparison
Onondaga County | Northwest macro area / large city | Pilot
Jefferson-Lewis | Northwest macro area / northern small towns and rural | Comparison
North Country | Northwest macro area / northern small towns and rural | Pilot
Oswego County | Northwest macro area / northern small towns and rural | Pilot
St. Lawrence County | Northwest macro area / northern small towns and rural | Comparison
Finger Lakes (Ontario-Seneca-Wayne-Yates) | Northwest macro area / western small towns and rural | Comparison
Genesee-Livingston-Orleans-Wyoming Counties | Northwest macro area / western small towns and rural | Pilot
Orange County | Southern macro area / less urban | Pilot
Rockland County | Southern macro area / less urban | Comparison
Sullivan County | Southern macro area / less urban | Comparison
Ulster County | Southern macro area / less urban | Pilot
Westchester Balance/Putnam | Southern macro area / more urban | Comparison
Yonkers | Southern macro area / more urban | Pilot

Virginia
III Western | Urban/Rural Hybrid | Comparison
VI Piedmont | Urban/Rural Hybrid | Pilot
XIII Bay Consortium | Urban/Rural Hybrid | Pilot
XIV Greater Peninsula | Urban/Rural Hybrid | Pilot
XVII West Piedmont | Urban/Rural Hybrid | Comparison
I Southwest | Rural | Comparison
VIII South Central | Rural | Pilot
XI Northern | Urban | Comparison
XII Alexandria/Arlington | Urban | Pilot

Round 2 States

California
Golden Sierra | Rural - Northern | Pilot
Los Angeles City | Urban Center | Pilot
Madera County | Rural - Central | Pilot
Merced County | Rural - Central | Comparison
North Central Counties Consortium | Rural - Northern | Comparison
Sacramento Employment and Training | Urban Center | Comparison
San Bernardino County | Balance of State | Comparison
San Francisco | Balance of State | Pilot
Southeast Los Angeles County | Los Angeles Suburbs | Comparison
Verdugo | Los Angeles Suburbs | Pilot

Hawaii
Hawaii | Hawaii single stratum | Pilot
Maui | Hawaii single stratum | Pilot
Oahu | Hawaii single stratum | Comparison
Kauai | Hawaii single stratum | Comparison

Ohio
1 (Adams, Brown, Scioto, Pike Counties) | Less Urban | Pilot
2 (Medina, Summit Counties) | Less Urban | Comparison
3 (Cuyahoga County) | More Urban | Pilot
9 (Lucas County) | Less Urban | Pilot
11 (Franklin County) | More Urban | Comparison

South Dakota* | * | *

Tennessee
LWIA 1 | Rural | Pilot
LWIA 3 | Smaller City | Pilot
LWIA 5 | Smaller City | Comparison
LWIA 7 | Urban/Rural Mix | Comparison
LWIA 8 | Urban/Rural Mix | Pilot
LWIA 9 | Large City | Comparison
LWIA 10 | Urban/Rural Mix | Pilot
LWIA 12 | Rural | Comparison
LWIA 13 | Large City | Pilot

Washington
2. Pacific Mountain | Washington single stratum | Comparison
4. Snohomish County | Washington single stratum | Pilot
5. Seattle-King County | Washington single stratum | Pilot

Wisconsin
1 Southeast | Urban | Comparison
2 Milwaukee County | Urban | Pilot
3 Waukesha-Ozaukee-Washington | Suburban | Pilot
4 Fox Valley | Small Urban Center | Pilot
5 Bay Area | Small Urban Center | Comparison
6 North Central | Small Urban Center | Pilot
7 Northwest | Predominantly Rural | Comparison
8 West Central | Predominantly Rural | Pilot
9 Western | Predominantly Rural | Comparison
10 South Central | Suburban | Comparison
11 Southwest | Predominantly Rural | Pilot

* The entire states of Alaska, Delaware, and South Dakota will be included as DEI pilot sites in the evaluation, but will be excluded from the impact analysis due to the lack of comparison groups in these states.



B.1.2 Respondent Universe and Study Sample Size


The population of interest includes the One-Stop career service centers in the DEI pilot and comparison LWIAs and those of their customers who self-identify as having a disability. Most of the DEI grantees will target adult customers with disabilities, but some will target youth customers with disabilities.


All individuals receiving One-Stop services in the pilot and comparison LWIAs will be included in the study sample, but most of the analysis will focus on the subset of One-Stop clients who self-identify as having a disability.2 A probability sample is not used for two reasons. First, administrative data are currently collected from all individuals using One-Stop services for the WIASRD and Wagner-Peyser data, so the additional data collection for people with disabilities is a natural extension of the existing effort. The additional data collection creates a relatively small burden per person, and a process that randomly samples people with disabilities would be cumbersome without substantially reducing overall customer burden. Second, many sites are likely to have only a small number of customers with disabilities, so collecting data from only a sample would result in low precision. Higher precision is needed for the impact analysis to detect the smallest effects that are economically meaningful. The precision of the impact estimates is determined primarily by the site selection process.


It is estimated that 69,119 adults in the 46 sites targeting adult clients, and 7,837 youth in the 17 sites targeting youth clients, will be included in the Round 1 DEI data collection effort over the two-year period.3 An estimated 67,956 adults in 42 sites targeting adult clients, and 169 youth in one site targeting youth clients, will be included in the Round 2 DEI data collection effort over its two-year period. Because a small number of sites in each round were not randomly assigned to pilot or comparison status, data from these sites will be excluded from the impact analysis; estimates of the impact analysis power are therefore based on a slightly smaller number of individuals and sites. Table B.2 shows the estimated number of LWIAs and individuals that will participate in the DEI, including both pilot and comparison sites, disaggregated by whether the state will target adults or youth. In each of these sites, all individuals who self-identify as having a disability will participate in the data collection effort.


Table B.2 Number of LWIAs Participating in the DEI and Estimated Number of Individuals Participating in the DEI Data Collection

 | Number of LWIAs | Average Number of Individuals with Disabilities per LWIA | Total Number of Individuals with Disabilities
All sites participating in the DEI | 106 | 1,369 | 145,081
Round 1 Adult sites | 46 | 1,503 | 69,119
Round 1 Youth sites | 17 | 461 | 7,837
Round 2 Adult sites | 42 | 1,618 | 67,956
Round 2 Youth sites | 1 | 169 | 169

Note: For the purpose of this table, Alaska, Delaware, and South Dakota are each considered to be one LWIA.



The study universe is One-Stop service users who self-identify as having disabilities and who receive services at One-Stop centers that have indicated a willingness and ability to implement DEI strategies, in states sufficiently motivated to apply for DEI grant funding and whose applications were deemed superior by DOL among all applications received. The site selection process and respondent universe have important implications for estimating the impacts of the DEI and for generalizing the results of the study, which we discuss below.

States and LWIAs Included in the Study. Because both the states and the LWIAs included in the study were selected based on their willingness to seek DEI grants and DOL’s assessment of the quality of the state’s proposal, the impact findings will not necessarily be representative of the experiences of all states and LWIAs that might implement such strategies. The estimated impacts might be biased as estimates of impacts for a national program in which grant funds are made available to all LWIAs, but the direction of the bias is uncertain. By design, the estimates should be unbiased for the set of LWIAs that agree to participate in the grants and whose states are successful in obtaining them (internal validity). If grants were made indiscriminately to LWIAs external to the study, impacts might be smaller because those LWIAs might not be as motivated or as able to use the grants as intended. It is also possible, however, that impacts for the external LWIAs would be larger: in contrast to the study LWIAs, which are presumably already motivated to implement the DEI and ready and able to use the funds well, the external LWIAs might become motivated by the newly available funding and might learn how to use the funds well from the technical assistance that would also be available. Ultimately, the impacts of a national program will depend on how DOL distributes grant funds, regulates their use, provides technical assistance, and monitors LWIA performance.

While the impact estimates and other study findings may not necessarily be generalizable to all One-Stop centers, this does not diminish the need to conduct a rigorous evaluation. DOL expects to learn a great deal about the experiences of One-Stop centers in implementing a variety of service strategies intended to improve the employment outcomes of customers with disabilities, a group that historically has been neglected, in part because it is viewed as hard to serve in the One-Stop service environment. Selecting states and LWIAs that are motivated to improve services to this population, rather than selecting LWIAs at random, increases the likelihood that the DEI strategies will be implemented with integrity. The result will be a rigorous, internally valid test of the impacts of these strategies and a comprehensive assessment of One-Stop experiences in undertaking them that can inform how best to implement such services nationally.

Individuals Who Self-Identify a Disability. Focusing on individuals who self-identify their disabilities might affect the impact estimation if the DEI strategies implemented at the pilot sites affect the likelihood that individuals self-identify disabilities or change the composition of those who self-identify. For example, one of the DEI strategies is to have One-Stops increase their efforts to become, and provide services as, an Employment Network under SSA’s Ticket to Work program. This activity, and the community outreach associated with it, might result in more Social Security disability beneficiaries seeking One-Stop services at DEI pilot sites. Because those meeting the SSA disability criteria have, by definition, very severe and long-lasting disabilities, and most face significant barriers to employment, it is possible that having a larger share of these individuals represented in the DEI pilot sites will result in lower observed employment outcomes at the pilot sites. Differences in means between the DEI pilot and comparison sites will still be unbiased as estimates of impacts on outcome means for those reporting disabilities, but the estimated impacts will include effects that reflect the impact of the DEI on the number of users with disabilities and their characteristics, not just the impacts of the DEI on those users with disabilities who would have used One-Stop services in the absence of the DEI. To address this and other potential issues associated with relying on customer identification of disabilities to identify the DEI target population, DOL plans to match the DEI data with SSA administrative records to identify One-Stop customers who have recently participated in the Social Security disability programs. Recent disability program participation will act as an independent measure of disability that can be used to assess the impact of the DEI on One-Stop service utilization and employment outcomes among people with disabilities. Further, the evaluation will be able to assess the extent to which the impact of the DEI on One-Stop use by Social Security disability beneficiaries, and on the characteristics of beneficiary users, contributes to the impact on mean outcomes for beneficiary users.


B.2 PROCEDURES FOR THE COLLECTION OF INFORMATION


Outcomes of interest will be measured at the individual level and include measures of service utilization and employment outcomes. The key utilization measure is whether an individual registering for services in a One-Stop center self-identifies as having a disability, or, for some analyses, is identified as a current or recent participant in a Social Security disability program (as determined by matching the DEI data with SSA administrative data). Employment outcomes will be analyzed specifically for customers with disabilities and include employment entry, employment retention, and earnings. For youth sites, the outcomes of interest also include degree/certification completion, high school/GED graduation, and completion of internship/job shadowing experience. Analysis of utilization measures will use administrative data on all One-Stop users from randomly assigned LWIAs, while analysis of outcome measures will use data only on customers with disabilities from those LWIAs. Estimates will be developed collectively for all states.


With a random assignment design, there should be no systematic observable or unobservable differences between LWIAs in the treatment and comparison groups except for the availability of the DEI services, which is determined randomly. Thus, simple differences in the mean values of utilization measures between LWIAs assigned to the two research groups will yield unbiased impact estimates of program effects on mean outcomes for users with disabilities, and the associated t-tests, adjusted appropriately for design effects due to clustering and to weighting (which accounts for unequal probabilities of assignment to treatment across strata), can be used to assess statistical significance. Effect sizes will also be calculated.


The analysis will measure the differences in the mean values of employment and other outcomes for customers with disabilities who use the treatment and comparison LWIAs. Because the availability of the DEI services in pilot LWIAs may influence the customers that choose to use One-Stop services and those who self-identify as having a disability, the difference in mean outcomes reflects both any impact of the DEI services on outcomes for those who would have used the treatment One-Stop centers in the absence of the DEI services as well as the impact of the DEI services on One Stop utilization by customers with disabilities. Measuring an unconditional impact of the DEI on employment outcomes of customers with disabilities who would use One-Stop services in the absence of DEI would have required random assignment at the customer level. This option was not feasible because of the systemic nature of some components of the DEI (e.g., maintaining One-Stop accessibility and the requirement that they become an Employment Network). Random assignment at the customer level also would have imposed a much greater burden on the One Stop centers.

The previous paragraph provides an intuitive description of how the analysis will be performed, but it abstracts from important technical details. While the difference-in-means estimates described above would be unbiased estimates of the impacts, their statistical precision can be improved by controlling for individual baseline characteristics. Further, estimation of unbiased standard errors requires the analysis to account appropriately for the stratified, clustered design of the study; that is, sites (and all users with disabilities within each site) are randomly assigned to the DEI or comparison group as a cluster, within strata defined within each state as described in Table B.1.


A two-level hierarchical linear model (HLM) will be used to address both of these issues (Bryk & Raudenbush, 2002). HLM provides unbiased, minimum-variance (i.e., efficient) estimates of impacts for experiments with cluster-randomized designs like the one to be used for the DEI. The HLM estimates represent differences in means that have been adjusted for observable differences in baseline characteristics, with standard errors that have been adjusted for the clustered design of the demonstration. HLM procedures have been implemented in major statistical packages (such as SAS and Stata) and have been widely used to estimate models under circumstances similar to those of the DEI.

Several other analytic methods were considered, but HLM was determined to be the most appropriate. Although other regression estimators (e.g., ordinary least squares) would produce unbiased estimates of the impact of the DEI, HLM produces unbiased, efficient (minimum asymptotic variance) estimates with standard errors and associated confidence intervals that account for the cluster-randomized study design. A difference-in-differences estimator was also considered. This method, which would still be implemented in an HLM framework, would compare changes over time in outcomes of individuals in pilot sites to changes over time in outcomes of individuals in comparison sites. Although this estimator would likely improve the precision of the estimates, perhaps substantially, the data required from the period before the implementation of the DEI were judged to be of poor quality. Furthermore, one of the chief arguments for using difference-in-differences, reducing bias in quasi-experimental settings (e.g., Card and Krueger, 1994), is not relevant to the DEI evaluation given the experimental design.



Separate HLM models will be estimated for sites with an adult focus and sites with a youth focus, as well as for the combined sample of sites. Each model is specified as follows:

Level 1 (customers): Y_is = α_s + βX_is + ε_is (1a)

Level 2 (sites): α_s = γ0 + δT_s + θZ_s + u_s (1b)

where level 1 corresponds to customers and level 2 to sites, and

Y_is is an outcome variable for customer i in site s;

X_is is a vector of customer-level characteristics;

T_s is a variable that equals 1 if site s is a pilot site and 0 if it is a comparison site;

Z_s is a set of stratum fixed effects;

α_s represent site-specific random intercepts;

δ represents the treatment effect;

β is a vector of parameters that represent the relationship between customer-level characteristics and the outcome;

ε_is are customer-level error terms; and

u_s are site-specific random error terms.

By inserting the level-2 equation into the level-1 equation, the following single-equation version of this two-level HLM framework is obtained:

Y_is = γ0 + δT_s + θZ_s + βX_is + u_s + ε_is (2)
The HLMs will include customer-level control variables that pertain to the period prior to random assignment and may be correlated with key outcome measures. These include individual characteristics such as educational attainment, marital status, employment status and earnings at time of registration, employment history variables, race, ethnicity, gender, disability type, severity of impairment, or Ticket assignment. Indicator variables for the random assignment strata will also be included at the site level.


Equation (2) will be estimated across all sites. Site-level weights will be used that account for unequal probabilities of assignment of sites to the treatment and comparison groups across strata. In this formulation, δ represents the regression-adjusted impact estimate of the DEI relative to the comparison group. The standard error of δ accounts for the clustering of customers within sites because of the inclusion of the site-level error term (u_s) in equation (2). The associated t-statistic will be used to test for statistical significance using a two-tailed test.

The specific maximum-likelihood methods used to estimate the parameters of the HLM will depend on the form of the dependent variable. Some outcomes will be continuous (such as earnings), some will be binary (such as employment), and some will be categorical (such as number of quarters with any earnings). Accordingly, linear regression procedures will be used for continuous outcomes, logistic regression procedures for binary outcomes, and multinomial regression procedures for categorical outcomes. The SAS PROC MIXED and PROC NLMIXED procedures will be used to estimate the HLM models for continuous and binary/categorical outcomes, respectively, producing an estimate of the treatment effect and a variance that accounts for the clustered design. The procedures will produce associated standard errors and t-statistics that enable statistical tests of the treatment effect. Chi-squared tests will be used to test contrasts.
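For readers who want a concrete picture of this estimation step, the fragment below is a minimal sketch of a continuous-outcome version of equation (2), using Python’s statsmodels rather than the SAS procedures named above. The column names (earnings, pilot, stratum, site, and the baseline covariates) are hypothetical placeholders, not the actual DEI data elements.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per customer with a disability.
# Column names are placeholders for illustration only.
df = pd.read_csv("dei_analysis_file.csv")

# Mixed model for a continuous outcome (earnings), mirroring equation (2):
#   fixed effects: pilot indicator, stratum dummies, baseline covariates
#   random effect: site-specific intercept u_s, which induces the
#   within-site clustering that the standard errors must reflect.
model = smf.mixedlm(
    "earnings ~ pilot + C(stratum) + age + female + prior_earnings",
    data=df,
    groups=df["site"],  # customers are clustered within sites (LWIAs)
)
result = model.fit()

# The coefficient on `pilot` is the regression-adjusted impact estimate
# (delta in equation (2)); its standard error reflects the site-level
# random intercepts.
print(result.summary())
print("Impact estimate (delta):", result.params["pilot"])
```

A full implementation would additionally apply the site-level weights described above and would use a mixed logistic specification for binary outcomes.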



The analysis will be performed separately for sites with a youth focus and those with an adult focus. In addition, a combined analysis will be performed in which the impact of the DEI is estimated for all randomly assigned sites, regardless of youth or adult focus.


B.2.1 Power Analysis


Outcomes are likely to be correlated across individuals within each LWIA even in the absence of the DEI. Because random assignment to pilot or comparison status occurs at the LWIA level, this clustering effect is expected to reduce the precision of the impact estimates, and the precision is therefore limited by the number of LWIAs participating in the random assignment process. Although the DEI may be expanded in future years to include more sites, the precision of the impact estimates cannot be improved substantially due to this limitation.


To estimate the minimum detectable effect (MDE), that is, the minimum impact that we expect can be detected given the study sample size and method of site selection, several additional assumptions were needed. Based on U.S. Department of Labor (n.d.), it is assumed that in each site, 2.7 percent of entrants self-identify as having a disability at baseline. In the absence of the DEI, 30 percent of customers with disabilities are assumed to enter employment, and within each site, the intraclass correlation of each outcome is assumed to be five percent.


The estimated MDEs for selected outcomes are shown separately for several combinations of states. Table B.3 shows the MDEs for Round 1 states with a youth focus (Arkansas and New Jersey) and those with an adult focus, as well as for an analysis that measures a combined youth/adult effect. Table B.3 also shows MDEs for an analysis of Round 2 states, which includes only states with an adult focus, and for a combined analysis using both Round 1 and Round 2 states. An alternative descriptor of the precision of the analysis is the expected margin of error, or the half-width of the 95 percent confidence interval of the impact estimate. The margin of error for each test is also shown in Table B.3.

Table B.3 MDE and Margin of Error Estimates for Key Outcomes

 | Round 1 | Round 2 | Round 1 and 2

Number of Sites (LWIAs) Randomly Assigned
Adult | 43 | 42 | 85
Youth | 16 | 0 | 16
Combined | 59 | 42 | 101

MDE (Percentage Points) for Key Outcomes
Employment Rate
Adult | 9.2 | 9.3 | 6.4
Youth | 16.1 | N/A | 16.1
Combined | 8.2 | 9.3 | 6.1
Percent of Customers Reporting a Disability
Adult | 3.2 | 3.3 | 2.2
Youth | 5.6 | N/A | 5.6
Combined | 2.9 | 3.3 | 2.1

Margin of Error (Percentage Points) for Key Outcomes
Employment Rate
Adult | 6.5 | 6.6 | 4.5
Youth | 11.5 | N/A | 11.5
Combined | 5.8 | 6.6 | 4.3
Percent of Customers Reporting a Disability
Adult | 2.3 | 2.3 | 1.6
Youth | 4.0 | N/A | 4.0
Combined | 2.0 | 2.3 | 1.5

Note: The standard error of the estimate (SE), MDE, and margin of error (MOE) are calculated as follows. First, the SE is calculated using the expression SE = 2·sqrt( p(1 − p)[1 + (N/n − 1)ρ] / N ), where N is the total number of individuals in all sites, n is the number of sites, ρ is the intraclass correlation, and p is the fraction of individuals with a positive outcome in the absence of the DEI. The MDEs are calculated by multiplying the SE by a factor based on assumptions about the statistical test and its degrees of freedom; the latter varies by the number of sites. We assume a two-tailed test, 80 percent power, and a 5 percent significance level, with the degrees of freedom equal to the number of sites minus the number of strata, minus one. This yields factors for the combined Round 1 and Round 2 analysis of 2.86 for adults, 3.08 for youth, and 2.85 for the combined adult/youth analysis; factors are slightly larger for analyses of a single round. The MOEs are calculated by multiplying the SE by a different factor that depends on the significance level (but not power) and the degrees of freedom. The factors for the MOE are 2.01 for adults, 2.20 for youth, and 2.00 for the combined analysis.
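As a worked illustration of the note above, the following sketch carries out the calculation for the combined Round 1 and Round 2 adult analysis. The multipliers (2.86 and 2.01) and design parameters follow the assumptions stated in the note; the assumed average cluster size of roughly 1,500 customers with disabilities per site is an approximation based on Table B.2, and the helper function is illustrative rather than the evaluation team’s actual code.

```python
import math

def clustered_se(p: float, n_sites: int, avg_cluster: float, icc: float) -> float:
    """SE of a difference in proportions under a cluster-randomized design
    with a 50/50 split of sites: 2 * sqrt(p(1-p) * deff / N), where the
    design effect is deff = 1 + (m - 1) * icc for average cluster size m."""
    n_total = n_sites * avg_cluster
    deff = 1.0 + (avg_cluster - 1.0) * icc
    return 2.0 * math.sqrt(p * (1.0 - p) * deff / n_total)

# Combined Round 1 and 2 adult analysis: 85 randomly assigned sites,
# roughly 1,500 customers with disabilities per site, a 30 percent
# baseline employment rate, and an intraclass correlation of 0.05.
se = clustered_se(p=0.30, n_sites=85, avg_cluster=1500, icc=0.05)

mde = 2.86 * se  # multiplier for a two-tailed test, 80% power, alpha = 0.05
moe = 2.01 * se  # multiplier for the 95% confidence half-width

print(f"SE  = {100 * se:.1f} percentage points")
print(f"MDE = {100 * mde:.1f} percentage points")
print(f"MOE = {100 * moe:.1f} percentage points")
```

With these inputs the sketch reproduces the 6.4 percentage-point MDE and 4.5 percentage-point MOE reported for the combined adult employment-rate analysis in Table B.3.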

The Round 1 MDEs are economically substantial: a 9.2 percentage-point impact on the employment rate of adults with disabilities represents a 31 percent increase over a 30 percent employment rate in comparison sites. For the youth sites, it will be possible to detect only very large effects. The analysis for the percentage of One-Stop users who self-identify as having a disability shows that the MDE for adult sites will be over three percentage points, and just under twice as large for youth sites. Although small relative to the MDE for the employment rate, these values are very large relative to the assumed percentage of One-Stop clients who report having a disability (2.7 percent). In other words, the MDE for adult sites represents more than a doubling of the percentage of users who report having a disability.


The Round 2 MDEs are similar for analyses of states with an adult focus. Sites in the only Round 2 state with a youth focus were not randomly assigned, so impacts cannot be measured for youth sites using Round 2 data alone. Combining Round 1 and Round 2 data into a single analysis further reduces the MDEs for the adult and combined adult/youth analyses. For an analysis of all participating states, regardless of round and focus, effects on the employment rate as low as 6.1 percentage points and effects on the rate of reporting a disability as low as 2.1 percentage points may be detected.


The number of sites is determined by the parameters of the DEI grants and the ability of each site to participate in the program, so a calculation with other site sample sizes is not presented. The average number of customers with disabilities in each LWIA as used in the power calculations is estimated from data available in the DEI grant applications, as described in section B.1. The average number of total customers in each LWIA is estimated by dividing the average number of customers with disabilities by 0.027, the proportion of customers with disabilities nationwide as reported in U.S. Department of Labor (n.d.).


B.3 Methods to Maximize Response Rates and Deal with Non-Response


B.3.1 Site Visits


Conducting site visits and in-person interviews with DEI stakeholders (e.g. state officials, DEI State Leads, Workforce Investment Board staff, DRCs, One-Stop staff, service providers, public and private agency partners, employers and customers) is an important component of the DEI evaluation because:4


  • Talking to respondents in person allows the interviewer to establish rapport;

  • Evaluators can get at the whole story through the totality of the interpersonal experience, such as observation of body language and other visual cues;

  • Evaluation staff can observe other activities occurring on-site, adding to the “fullness” of the data;

  • Evaluation staff can get a tangible sense of the issues in a locality, which allows site visitors to have richer conversations with respondents because they have a better knowledge of their environment.


Thorough preparation for the site visits will minimize the risk of non-response from sites and participants. The Solicitations for Grant Applications for Rounds 1 and 2 (Appendix 6) require participation in the evaluation. However, we recognize that securing participation among all stakeholders will require communication to reduce reluctance to participate among partners, employers, and customers. Through early communication that emphasizes the importance of all stakeholders’ participation in the study, and the importance of gathering information and different perspectives from each individual respondent, the DEI evaluation team will reduce reluctance to participate fully in the DEI evaluation. The evaluation team will also identify and develop a trust relationship with a local liaison who will advocate for the study, assist in putting together a site visit schedule that takes into account ease and convenience for the respondents, help convince additional respondents to cooperate, and be persistent in following up with participants. Additionally, the evaluation team has designed minimally intrusive data collection methods and tools to help reduce the burden on participants.


Each site will receive advance communication from the evaluation team about the study, as well as the expectations for participants. The study team Evaluation Liaison assigned to each DEI grantee will work with the DEI state leads and WIB staff to arrange the interviews with key stakeholders. The DEI grant requires all participating LWIAs to support the evaluation. Once the interviews are scheduled, each participant will receive written confirmation of the scheduled interview and topics to be covered. Once the interviews for a site have been scheduled, the site visit lead will review the schedule to ensure that an appropriate amount of flexibility is built in to account for potential last-minute schedule conflicts with interview participants. Should an interview respondent fail to keep a pre-scheduled interview appointment, the site visit team will work with the local liaison and the interview participant to reschedule the interview for an alternative time when the team is on site. If this proves too difficult, the site visit team will work with the site liaison and interview participant to schedule a phone interview or identify a substitute respondent.



B.3.2 DEI Data System


The DEI Data System will be integrated into the existing registration process at all participating One-Stops. Response rates are therefore expected to be relatively high, as customers who receive services are required to register at a One-Stop. Existing One-Stop administrative data systems have a non-response rate of 8-10 percent. Because the DEI Data System will be linked to the existing administrative data systems via customers’ Social Security Numbers, non-response rates are highly likely to be similarly low (8-10 percent) for the DEI grantees that elected to integrate the DEI Data System data elements into their existing data collection systems.


The DEI evaluation team conducted conference calls with each DEI grantee to determine how best to incorporate the collection of the additional DEI data elements into their existing data collection process. Grantees were given three options: add the data elements to their existing WIASRD and Wagner-Peyser data collection systems, use the Internet-based DEI Data System, or submit the additional data elements using hardcopy forms. States that elected to use the Internet web-link or hardcopy form options may have lower response rates because these modes of data collection are not linked directly to existing data systems. To minimize non-response for these grantees, the DEI Evaluation Team will provide ongoing technical assistance to all participating One-Stop Career Centers, using webinar technology and site visits. These technical assistance activities will also include a DEI Evaluation Manual, a toll-free helpline, and quarterly monitoring of incoming data for quality and completeness. In addition, the evaluation team will review the WIASRD and Wagner-Peyser data systems on a quarterly basis to ensure that each customer with a disability has a DEI Data System record; a minimal sketch of such a completeness check appears below.
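The quarterly completeness review can be pictured as a simple anti-join between the administrative records and the DEI Data System records. The sketch below is illustrative only; the file names and the customer_id field (a hashed SSN or similar linking key) are hypothetical placeholders, not the actual DEI data layout.

```python
import pandas as pd

# Hypothetical extracts: administrative records for One-Stop customers,
# and the DEI Data System records for the same quarter.
admin = pd.read_csv("wiasrd_wagner_peyser_extract.csv")  # one row per customer
dei = pd.read_csv("dei_data_system_extract.csv")

# Restrict the administrative file to customers with a disability.
admin_disab = admin[admin["self_identified_disability"] == 1]

# Flag customers with a disability who lack a DEI Data System record.
missing = admin_disab[~admin_disab["customer_id"].isin(dei["customer_id"])]

rate = len(missing) / max(len(admin_disab), 1)
print(f"{len(missing)} of {len(admin_disab)} customers "
      f"({100 * rate:.1f}%) lack a DEI Data System record")

# Sites with many missing records can be targeted for technical assistance.
print(missing.groupby("site")["customer_id"].count().sort_values(ascending=False))
```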


B.4 Tests of Procedures or Methods to be Undertaken


The evaluation team has tested the site visit instruments with individuals knowledgeable about the workforce system and employment issues for people with disabilities to ensure that the question wording is clear, that the questions elicit the appropriate information, and that the overall process does not place an unreasonable burden on participants. These tests included interviews with stakeholders and a discussion of each component of the site visit instrument. Revisions were made to ensure that the questions collect information relevant to the study’s research questions. The tests were designed to identify and eliminate problems, allowing the evaluation team to make corrective changes or adjustments before actually collecting the data. The DEI Evaluation Team reviewed the completed interviews to determine whether respondents interpreted the questions and probes as intended, analyzed the data, and modified the instruments based on the information gathered during the pilot test. Tests were completed in LWIAs that are not part of the DEI Evaluation.


The qualitative data collected through the pilot test interviews indicated that the instruments collect the information they were intended to collect and that the resulting data were relevant to the DEI and to each stakeholder group. However, the pilot interviews also showed how the instruments could be improved. Questions were reworded and reordered to improve the clarity and flow of the interview process, and several probes were added to the instruments. The DEI Evaluation Team will continue to test the site visit instruments during the OMB review and comment period to refine wording and confirm the estimates of burden.


In addition, the DEI Data System design has been reviewed extensively by DOL and the DEI grantees. A pilot test was conducted with nine respondents in January 2011 to determine respondent burden. The DEI Data System prototype will be completed in May 2011, at which time the DEI Evaluation Team will conduct another pilot test to assess system usability. Tests will focus on time to complete, comprehension of system instructions, ease of use, and human-computer interaction. The DEI Evaluation Team’s Data System staff will provide technical assistance to all participating LWIAs throughout the DEI Evaluation period to ensure that users of the system, regardless of the mode of data collection, complete data collection in a timely manner and that all data element fields are filled in.



B.5 INDIVIDUALS CONSULTED ON STATISTICAL ASPECTS AND/OR ANALYZING DATA


  1. Gina Livermore, Ph.D., Mathematica Policy Research, Inc. (DEI Evaluation Team Member)

[email protected], 202-264-3462


  2. Peter Schochet, Ph.D., Mathematica Policy Research, Inc. (DEI Evaluation Team Member)

[email protected], 609-936-2783


  3. David Stapleton, Ph.D., Mathematica Policy Research, Inc. (DEI Evaluation Team Member)

[email protected], 202-484-4224


  4. Nathan Wozny, Ph.D., Mathematica Policy Research, Inc. (DEI Evaluation Team Member)

[email protected], 609-936-2795



References – Part B


Bryk, A., & Raudenbush, S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage Publications.

Card, D., & Krueger, A. B. (1994). Minimum wages and employment: A case study of the fast-food industry in New Jersey and Pennsylvania. American Economic Review, 84(4), 772-793.

Dunham, K., & Wiegand, A. (2008). The effort to implement the Youth Offender Demonstration Project (YODP) impact evaluation: Lessons and implications for further research. Oakland, CA: Social Policy Research Associates.

French, W. L., & Bell, C. H. (1995). Organization development: Behavioral science interventions for organization improvement (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.

Livermore, G., & Coleman, S. (2010). Use of One-Stops by Social Security disability beneficiaries in four states implementing Disability Program Navigator initiatives. Washington, DC: Mathematica Policy Research.

Midgley, G. (Ed.) (2003). Systems thinking. London: Sage.

Miller, C., Bos, J., Porter, K., Tseng, F., & Abe, Y. (2005). The challenge of repeating success in a changing world: Final report on the Center for Employment Training replication sites. New York, NY: MDRC.

Schochet, P. Z., Burghardt, J., & Glazerman, S. (2000). National Job Corps Study: The short-term impacts of Job Corps on participants’ employment and related outcomes. Princeton, NJ: Mathematica Policy Research.

U.S. Department of Labor. (n.d.). Wagner-Peyser Act employment services, state by state PY 2009 performance. Retrieved from http://www.doleta.gov/performance/results/wagner-peyser_act.cfm


APPENDIX 5

System Change Framework




APPENDIX 6


DEI Solicitation for Grant Applications


ATTACHED








1 Note that because we are randomizing LWIAs, rather than individuals, the number of individuals at a particular site has a relatively minor effect on the study power compared with the number of sites.

2 DOL plans to have Social Security Administration (SSA) data from SSA’s disability programs matched to the DEI data for purposes of identifying One-Stop customers with disabilities based on disability program participation, rather than self-identification. Thus, a part of the analysis will be based on this subset of the client universe.

3 A site refers to an LWIA or to the state-wide LWIBs in Alaska and Delaware. To determine the number of customers with disabilities from whom data will be collected via the DEI Data System, the numbers of FY 2009 WIASRD and Wagner-Peyser service users were obtained from the DEI grant applications. These counts were reduced by 11 percent (based on information reported in Livermore & Coleman, 2010) to obtain an approximate unduplicated count of customers with disabilities. The adjusted annual counts were then doubled to obtain the total number of customers with disabilities participating in data collection over the two-year period.

4 Site visits will include visits to each study site, which is defined as each state’s participating LWIAs. Within each LWIA, visits will be made to LWIBs, One-Stops, public and private agency partners and employers.


