Follow-Up Survey Information for Green Jobs and Health Care Impact Evaluation, American Recovery and Reinvestment Act Grants

OMB: 1205-0506


PART A: SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSION: FOLLOW-UP SURVEYS, GREEN JOBS AND HEALTH CARE AND HIGH GROWTH TRAINING GRANT INITIATIVES

The Employment and Training Administration (ETA) in the U.S. Department of Labor (DOL) is undertaking the Green Jobs and Health Care (GJ-HC) Impact Evaluation of the Pathways Out of Poverty and Health Care and High Growth Training grant initiatives. The goal of this evaluation is to determine the extent to which enrollees achieve increases in employment, earnings, and career advancement as a result of their participation in the training provided by Pathways and Health Care grantees and to identify promising practices and strategies for replication. ETA has contracted with Abt Associates and its subcontractor, Mathematica Policy Research, to conduct this evaluation.

In July 2011, OMB approved the baseline data collection for this evaluation (OMB 1205-0486), and in March 2012, OMB approved a subsequent request for the process study data collection, which includes site visits and focus group administration (OMB 1205-0487). The present information collection request is limited to the follow-up interviews planned for all study participants 18 months and 36 months after baseline collection (the draft follow-up telephone questionnaire is presented in Attachment 1).

Requests for approvals for this evaluation needed to be submitted in three parts for several reasons. The main reason is that it was necessary to (1) conduct random assignment and collect baseline data early in the study period to obtain a sample size needed for the estimation of program impacts and (2) conduct two rounds of process study visits, including one when the early sample was participating in the training program. In addition, the study structure required that the baseline data inform the development of the follow-up data collection effort. As a result, it was necessary to initiate the baseline data collection and gain experience in its implementation before the follow-up instruments could be developed. Thus, ETA is now requesting OMB approval of these follow-up instruments so that the evaluation can be completed on schedule.


Justification

1. Circumstances Necessitating Data Collection

As part of a comprehensive economic stimulus package funded under the American Recovery and Reinvestment Act of 2009 (ARRA), DOL funded a series of grant initiatives to promote training and employment in select high-growth sectors of the economy. Individuals facing significant barriers to employment, as well as those recently displaced by the economic downturn, are the high-priority labor pools targeted by these ARRA initiatives. As part of ARRA’s focus on labor demand, the Department places particular emphasis on high-growth and emerging industries, especially emerging “green” sectors of the economy and health care fields facing pressing skill shortages. These grant programs are consistent with ETA’s emphasis on more customized, sector-based labor market solutions and on targeting job seekers (including incumbent workers) who face significant barriers to economic self-sufficiency as a resource for growth sectors that face skill shortages or anticipate a need to hire.

ARRA’s focus on providing training for workers to fill jobs in high-growth and emerging industries comes at a critical time. During periods of both recession and expansion, it is important to address the challenge of building and maintaining a productive workforce to ensure long-term economic competitiveness. This applies particularly in industries, such as health care, education, and energy, in which the Bureau of Labor Statistics projects significant job growth over an extended time (Bureau of Labor Statistics 2010). However, several factors, including declines in educational attainment among American workers, a skilled workforce in need of replacements for aging and retiring workers, and continued immigration, are affecting workforce skill levels and the ability of employers to remain competitive and increase productivity (Dohm and Shniper 2007). Training programs like those funded by ARRA are designed to either provide these skills or to provide an entry-level career path toward acquiring them.

ETA’s grant programs represent an important step towards increasing post-secondary education and training in high-growth areas, particularly those related to health and green jobs. These programs supply resources for providing training, encourage partnerships between different service delivery systems, feature strong employer involvement, and focus on the provision of innovative and promising training strategies. To learn about the impacts of this significant investment of resources in training programs, ETA has funded a rigorous evaluation using a random assignment research design.

The two goals of this evaluation are to (1) determine if members of the randomly assigned treatment group (who have access to grant-funded services) achieve greater employment, earnings, and career advancement than otherwise equivalent control group members, and (2) identify promising practices and strategies for producing those effects for possible future replication. The study uses an experimental design to measure the impact of access to grant-funded training and support services, as well as a process study to examine intervention implementation and operations and provide context for interpreting impact study results. The evaluation is a census of the participating trainees at four study sites and will measure the effectiveness of the training strategies adopted by these four grantees, which were selected from among the 93 grantees funded under the Pathways Out of Poverty and the Health Care and Other High Growth Industries programs. The evaluation team based selection of these grantees primarily on the strength and scale of the grantees’ intervention and their ability to support the requirements of this type of evaluation. The study will not indicate whether the two grant funding vehicles as a whole produce beneficial effects, but rather will tell ETA whether any of the specific training approaches used in the four study sites are worth emulating as successful models for improving workforce outcomes in the green jobs and/or health care sectors.

As described in the clearance package submitted earlier for the baseline data collection, during the intake period all persons who apply for grant services in the study sites and are determined to be eligible are given information about the study (including information on random assignment) and asked to sign a form confirming they have received and understand information about the study. (This form was approved in the earlier package). Everyone who consents to participate is asked to complete a Baseline Information Form that gathers information on sample members’ background characteristics (this form was also approved in the earlier package). Grantee staff then enter the person’s data into the web-based Participant Tracking System and the system randomly assigns each participant to either the treatment or the control group. Staff then notify the participant of his or her assignment status. People who do not consent to participate in the study are not randomized or served with grant funding, but may obtain training and employment help from other sources on their own. Such training and employment help is also available to randomly assigned control group members who are excluded from grant-funded training and services as well as treatment group members who can access grant-funded services.
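For illustration only, the random assignment step described above amounts to a simple randomization performed by the Participant Tracking System once a consenting, eligible applicant’s baseline data have been entered. The sketch below assumes a generic 50/50 assignment probability and hypothetical function and participant names; the actual system, its data fields, and its site-specific assignment ratios (which Table A.1 suggests are not identical across sites) are not documented in this statement.

```python
import random


def randomly_assign(participant_id, treatment_probability=0.5, rng=None):
    """Assign one consenting, eligible applicant to the treatment or control group.

    The 50/50 default is illustrative; the evaluation's actual assignment
    ratios are set within the Participant Tracking System and may vary by site.
    """
    rng = rng or random.Random()
    return "treatment" if rng.random() < treatment_probability else "control"


if __name__ == "__main__":
    rng = random.Random(2013)  # fixed seed so this example run is reproducible
    for pid in ["A-001", "A-002", "A-003"]:
        print(pid, randomly_assign(pid, rng=rng))
```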

As noted above, a second clearance package was submitted for the process study visits. This research activity involves conducting two rounds of site visits to the four grantees participating in the evaluation. During these site visits, the evaluation team will observe program activities; conduct semi-structured interviews with administrators, staff, partners, and employers; and—in the first round of visits— hold focus groups with participants. Observations and discussion topics cover the program environment, participant flow through random assignment and program services, the nature and content of the training provided, the control group environment, and grantee perspectives on implementation challenges and intervention effects. The qualitative information collected during these visits will enable the team to describe the program design and operations in each site, help interpret the impact analysis results, and identify lessons learned for purposes of program replication for those models found to have positive labor market impacts.

The final round of data collection—the two follow-up surveys submitted with this package—will complement these baseline and process study data collection efforts by looking at outcomes for the treatment and control group members. The evaluation will address the following research questions:

  • What is the impact of the selected grantee programs on the receipt of education and training services by treatment group members, in terms of both the number who receive these services and the total hours of training received?

  • What is the impact of the programs on the completion of training and educational programs and on the receipt of certificates and credentials from these programs?

  • What is the impact of the programs on employment levels and earnings? To what extent do the programs result in earnings progression?

  • To what extent do the programs result in any employment (regardless of sector)? To what extent do the programs result in employment in the specified sector in which the training was focused?

  • What features of the programs seem to be associated with positive impacts, particularly in terms of target group, curricula and course design, and additional supports?

  • What are the lessons for future programs and practices?

At each selected site, individuals were randomly assigned to a treatment or a control group. A total of 2,652 sample members were randomized across the four sites, with final sample size totals varying by site as shown in Table A.1.

Table A.1. Final Number of People Randomized for the Green Jobs-Health Care Impact Evaluation, by Site

Site                     Treatment Group Members    Control Group Members    Total Sample
AIOIC (MN)                                   272                      271             543
Grand Rapids (MI)                            186                       91             277
North Central Texas                          555                      448           1,003
Kern (CA)                                    414                      415             829
Total                                      1,427                    1,225           2,652



AIOIC = American Indian Opportunities Industrialization Center

For this evaluation, the treatment condition is defined as having the opportunity to enroll in training funded by either a Pathways or the Health Care grant. The treatment condition varies across the sites, depending on the training programs grantees implement with their funds and the context in which the program operates. The control condition, or counterfactual, is defined as not having the opportunity to enroll in training funded by Pathways or Health Care grants. However, control group members will not be prevented from enrolling in other locally available training programs or services in the community. It is reasonable to assume that some people assigned to the control group will find opportunities to receive training from other sources.1 This configuration—a comparison between outcomes for participants with access to the services of the focal programs and those with access to other services in the community—is a common design for random assignment studies of training programs. It is also one that answers the most relevant policy question: Are participant outcomes improved when services of the type funded by the Pathways and Health Care grants are added to the configuration of training services already available in the community?

The conceptual framework, depicted graphically in Figure 1.1, outlines the ways in which elements of the intervention as well as outside factors are expected to influence short- and long-term outcomes of treatment group members. The intervention characteristics are what DOL is funding under the Pathways and Health Care grants. The programs themselves generate outputs as shown in Figure 1.1. Such program outputs then lead to the short-term outcomes of employment, earnings, job quality, and potential additional education and advanced training. These short-term outcomes lead to the longer term outcomes of potentially better employment, earnings, job quality, job persistence, career advancement, and personal or family economic stability. The conceptual framework recognizes that it is not only the characteristics of the program that generate these outputs and outcomes, but also that environmental context and personal characteristics are influential factors. The arrows between specific boxes in the model represent the expected influences among the factors.

Figure 1.1. Conceptual Framework for GJ-HC Impact Evaluation


The conceptual framework is general enough to capture variation along several dimensions. A major dimension of interest to the evaluation is what distinguishes green jobs from health care or high-tech industry jobs, and elements of the conceptual framework capture variation that might exist in that regard. For example, among environmental factors that might matter, this framework includes sector distribution, which might be measured as the percentage of the local labor market that manufacturing jobs comprise. Similarly, what is included in the central Intervention Characteristics box is site-specific, as relevant to the evaluation, but all of it can be captured in one overarching framework. This conceptual framework is the foundation for both the process and impact portions of the evaluation. For instance, the process analysis approved earlier may create site-specific versions of this conceptual model, providing rich context for interpreting results from the impact analysis, and the model also suggests key subgroup analyses to be explored.

Overview of Data Collection

In order to address the above impact research questions adequately, the evaluation will need to collect detailed baseline and follow-up data from different sources at various points in time.

Data collected at the earlier, baseline stage enable the evaluation team to describe the characteristics of study participants at the time they are randomly assigned to the treatment or control group, ensure that random assignment produces matching groups, create subgroups for the analysis, provide contact information to locate individuals for follow-up surveys, and improve the precision of the impact estimates. As noted earlier, the request for approval of the baseline data collection effort was included in a separate OMB package.

Data collected through process study site visits will enable the evaluation team to look closely at grantees’ program structures and how program staff deliver services to the treatment group. Two rounds of visits will be conducted. The visits will include interviews with key program partners (such as One-Stop Career Centers, community-based service organizations, and community colleges) and a few local employers from relevant employment sectors. The first round of visits also will include group discussions with program participants.

Two follow-up telephone surveys, the focal point of this clearance package, will be attempted with all 2,652 study participants (1,427 treatment group members and 1,225 control group members). The telephone interviews, conducted 18 and 36 months after random assignment, will serve two major purposes: the first is to provide information on service receipt and educational outcomes; the second is to examine long-run employment and economic security. While each wave of the survey addresses both issues to some extent, given their timing relative to participation in training, the first wave will focus more on service receipt and educational attainment, and the second will focus relatively more on employment, earnings, and career progression.

2. How, by Whom, and for What Purposes Will the Information Be Used?

ETA requests clearance to collect follow-up survey data on service receipt, educational outcomes, long-run employment, economic security, and other outcomes pertinent for evaluating the impacts of the ARRA grant initiatives on participants. The 18-month and 36-month follow-up surveys are described in detail below along with specific details on how, by whom, and for what purposes the information will be used.

18-Month Follow-up Telephone Survey

Telephone interviewers will attempt to reach all study participants, including all treatment and control group members as part of the first telephone survey 18 months after their random assignment dates.

Data on service receipt, a primary focus of the 18-month survey, will aid in developing an understanding of any subsequent program impacts on labor market outcomes. Because impact estimates will be based on differences in outcomes between the treatment and control groups, it will be especially important to understand what, if any, training and related services the control group received. The reason for this is that an estimated impact that cannot be statistically distinguished from zero could be driven by high participation of control group members in services that are similar to grant-funded services.
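Although the estimation approach is specified in the evaluation’s analysis plan rather than in this statement, the basic comparison can be written, in standard and purely illustrative notation, as the coefficient on treatment status in a regression that adjusts for baseline characteristics to improve precision:

$$ y_i = \alpha + \beta T_i + X_i'\gamma + \varepsilon_i $$

where $y_i$ is a follow-up outcome for sample member $i$ (for example, earnings), $T_i$ equals 1 for treatment group members and 0 for control group members, and $X_i$ is a vector of baseline characteristics. Because some control group members may obtain similar services from other sources, the estimate $\hat{\beta}$ measures the impact of access to grant-funded services relative to the existing service environment, not relative to receiving no services at all.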

Additional data elements from the 18-month survey will support analysis of the short-term impacts of the interventions. In particular, as shown in Figure 2.1, the 18-month survey will collect information on key outcomes of interest in domains such as the acquisition of credentials, employment and earnings, quality of jobs, match between job type and the training program, and total income and use of public benefits. In an effort to determine whether grant-funded training affects barriers to and attitudes towards work, the 18-month survey will also collect opinions about work similar to the information collected on the baseline information form (Attachment 1 contains the draft follow-up telephone questionnaire).

36-Month Follow-up Telephone Survey

Telephone interviewers will also attempt to contact all treatment and control group members 36 months after random assignment to administer a second follow-up survey. This survey will use the same instrument as the 18-month data collection effort (again, see Figure 2.1 and Attachment 1). However, given the activities that participants are likely to be engaged in over time, the 36-month survey will focus less on program participation measures and more on long-run employment and earnings. The survey will document longer-run employment and income, wage and earnings progression, career advancement, job characteristics (including employee benefits), and use of public benefit programs. For individuals who complete the 36-month interview but were not interviewed at 18 months, the retrospective questions that otherwise cover the period since the previous interview will be extended back to study enrollment.

Figure 2.1. 18- and 36-Month Survey Data Elements


EMPLOYMENT AND EARNINGS

Employment Since the Beginning of the Follow-Up Period a
  • Employed (name of employer, location of employer)
  • Earnings
  • Wage rate and hours worked
  • Industry/occupation
  • Length of time in current job
  • Availability of fringe benefits (paid time off, health insurance, etc.) b
  • Work schedule (regular, split shift, odd job, etc.)
  • Job is on a career pathway
  • Represented by union
  • Number of jobs held
  • Industry/occupation of previous jobs
  • Periods when laid off from job
  • Reasons for job separation
  • Work-related activities in the past week

Barriers to Employment & Opinions About Work
  • Factors that limit ability to work
  • Lowest acceptable hourly wage
  • Criminal behavior (asked in Grand Rapids only): parole/probation violations, arrests, convictions, incarceration

EDUCATIONAL OUTCOMES & SERVICE RECEIPT

Training/Education
  • Type & number of basic education courses
  • Secondary education
  • Post-secondary education
  • Occupational skills training
  • Occupation for which being trained
  • Other types of skills training
  • Date and duration attended
  • Reason program not completed
  • Obtained new job or promotion because of training
  • Degree/training useful for current job
  • Who paid for training and how much

Acquisition of Credentials
  • Completion of training/education
  • Attained a degree, license, certification, or other credential
  • Type of degree, license, certification, or credential
  • Field of study of degree
  • Received a high school diploma or GED
  • Where obtained degree, diploma, license, certification, or credential

Employment-Related Support
  • Additional types of assistance received
  • Paid/unpaid internship or on-the-job training
  • Case management/counseling

Supportive and Other Services
  • Type of supportive services
  • Received needs-related payment
  • Other services received

Perspectives on Services, If Any Received
  • Why chose to seek training/employment services

Financial Hardship
  • Difficulty making ends meet
  • Financial difficulties related to housing/paying bills/savings

Current Family Status & Demographics
  • Date of birth / age (if not obtained during baseline)
  • Race and ethnicity (if not obtained during baseline)
  • Gender (if not obtained during baseline)
  • Household composition

Income and Receipt of Public Benefits
  • Receipt of TANF, SNAP, SSI, UI, TAA, WIC, or other benefits
  • Total months receiving benefits
  • Total average monthly amount of benefit
  • Household income

aSome data elements about employment are collected for all jobs held since the beginning of the follow-up period. The beginning of the follow-up period is “since random assignment” for (1) all survey respondents completing the 18-month interview and (2) survey respondents who did not complete the 18-month interview but who are completing the 36-month interview; it is “since the 18-month interview” for survey respondents who completed the 18-month interview and who are completing the 36-month interview. Other data elements about employment are asked about for focal jobs only.


bHealth insurance coverage is asked about in the context of both employer-provided coverage and coverage through other sources.


3. Use of Improved Technology to Reduce Burden

Computer assisted telephone interviewing (CATI) will be used for the telephone surveys. CATI allows interviewers to move swiftly through the survey instrument, asking only those questions that are relevant to a particular respondent, based on his or her earlier answers. This reduces the length of time respondents spend on the phone, and minimizes the likelihood that respondents will be asked to answer questions that do not apply to them, which is often an issue with in-person or paper-and-pencil interviews.

CATI is a good choice of method of administration for telephone interviews with large numbers of respondents. With CATI, information about sample members, such as information collected on their baseline information forms, can be preloaded to improve question flow and data accuracy, and reduce respondent burden. CATI programs are efficient and accept only valid responses based on preprogrammed checks for logical consistency across answers. Interviewers are thus able to correct errors during the interview, eliminating the need for costly callbacks to respondents. Also, dialing errors are almost completely eliminated because calls will be made through a preview dialer. The preview dialer allows interviewers to review case history notes and the history of dispositions. The interviewer then presses one button to dial the number after reviewing the case (this is akin to one-touch or speed dialing). An automated call scheduler will simplify scheduling and rescheduling of calls to respondents, and can assign cases to specific interviewers such as those who are trained in refusal conversion techniques or those who are fluent in Spanish. In addition, the flexibility of CATI programming allows for the scheduling of interview times that are convenient for the sample member.

4. Efforts to Identify Duplication

The study team has reviewed the existing literature and existing data sources to ensure that this data collection effort does not duplicate existing or available data. There is no other source for the information that will be collected in the follow-up surveys. Answers to the survey questions are not included in the data that grantees are required to collect and report to DOL, and there are no administrative data sources that provide the range of data elements needed. The study will collect information about sample members’ UI-covered employment and earnings from wage records data maintained by state Unemployment Insurance (UI) agencies, obtained either directly from the states or from the National Directory of New Hires (NDNH). However, these wage records provide only total quarterly earnings in UI-covered employment, not the critical information the study needs about employment experiences—for example, wage rates, hours worked, availability of fringe benefits, or whether the job made use of the person’s training.

5. Methods to Minimize Burden on Small Businesses or Entities

This data collection does not involve small businesses or other small entities.

6. Consequences of Not Collecting the Data

The data collected in the follow-up administrations will enable the GJ-HC impact evaluation to generate precise, unbiased estimates of the impacts of the training services offered. Results from this rigorous evaluation will inform policymakers about net impacts for participants and the context within which the programs operate.

Without collecting follow-up interview data from study participants, the study could not meet its goal of determining the extent to which enrollees in the four sites included in the evaluation experienced increases in service and credential receipt, employment and earnings, and career progression.

7. Special Data Collection Circumstances

This data collection effort does not involve any special circumstances.

8. Federal Register Notice and Consultations Outside the Agency

Federal Register Notice

As required by 5 CFR 1320.8(d), a Federal Register Notice published on April 12, 2012 (Vol. 77, pp. 22001–22003) announced the Green Jobs and Health Care Impact Evaluation of the Pathways Out of Poverty and Health Care and High Growth Training grant initiatives. The notice provided the public an opportunity to review and comment on the planned data collection and evaluation for 60 days following its publication. No comments were received.

Consultations Outside the Agency

Consultations on the research design, sample design, and data collection procedures were part of the study design phase of the evaluation. The purposes of these consultations were to ensure the technical soundness of the study and the relevance of its findings and to verify the importance, relevance, and accessibility of the information sought in the study.

Peer Review Panel Members

Ms. Maureen Conway, Executive Director, Economic Opportunities Program, Aspen Institute

Dr. Harry J. Holzer, Professor, Georgetown Public Policy Institute

Dr. Robert J. LaLonde, Professor, The Harris School, University of Chicago

Mr. Larry Orr, Larry Orr Consulting

Dr. Burt S. Barnow, Amsterdam Professor of Public Service, The Trachtenberg School of Public Policy and Public Administration, George Washington University

Ms. Mindy Feldbaum, Director for Workforce Development Programs, National Institute for Work and Learning

9. Respondent Payments

Offering respondent payments is critical to maximizing sample members’ cooperation with the follow-up survey data collection and increasing survey response rates, thereby ensuring the representativeness of the sample and providing data that are complete, valid, reliable, and unbiased. Given the importance of this evaluation, the data collection must maintain the highest standards. Providing a modest payment to study subjects who complete a given follow-up interview can contribute to that goal by significantly increasing response rates, thereby ensuring data collection from a sample that is truly representative. Because response to telephone surveys has been declining in recent years and costs associated with achieving high response have been increasing, the use of respondent payments has become common practice for survey studies (Curtin et al. 2005). These payments can help achieve high response rates by increasing sample members’ propensity to respond (Singer et al. 2000). Studies offering respondent payments show decreased refusal rates and increased contact and cooperation rates. Among sample members who initially refuse to participate, the availability of payments increases refusal-conversion rates. These payments also can help contain costs by reducing the effort and funds expended to resolve a case and the number of interim refusals. These operational cost savings and direct participant benefits justify offering payments to survey respondents.

In addition to helping gain cooperation to increase the overall response rate, respondent payments also increase the likelihood of participation from subgroups with a lower propensity to cooperate with the survey request. This is another important factor in helping to ensure the representative nature of the outcome data and the quality of the data being collected. For example, Jäckle and Lynn (2007) find that respondent payments increase the participation of sample members who are more likely to be unemployed. There is also evidence that respondent payments bolster participation among those with lower interest in the survey topic (Jäckle and Lynn 2007; Kay 2001; Schwartz et al. 2006), resulting in data that are more nearly complete. It has also been established that payments do not impair the quality of the data obtained (for example, by increasing item nonresponse or the distribution of responses) from groups who would otherwise be underrepresented in a survey (Singer et al. 2000).

Offering respondent payments is the final critical addition to intensive efforts to establish contact with prospective respondents, and gain their cooperation with the planned data collection.

The initial study plan involved offering a $25 payment to all respondents to each interview to thank them for the time they spent completing an interview. The study plan now calls for a $45 incentive payment (see details for rollout of this new amount below), as the current lower-than-expected response rate suggests that $25 may not be a high enough incentive for the survey, which averages 40 minutes to complete. Although the study is striving for an 80-percent response rate, the current cumulative response rate for the 18-month survey for sample members released between February 2013 and July 2013 is 52 percent as of August 5, which is lower than expected for this stage of the fielding period. In addition, a small differential in response rates between the treatment and control groups has emerged, with a 56 percent response rate for the treatment group and a 47 percent response rate for the control group. While this differential is not necessarily problematic for the purposes of the planned analyses, the study team will continue monitoring it over time to determine if it becomes problematic, at which point the team will investigate solutions to help correct the differential. A large portion of the survey sample released as of the end of July 2013 has yet to be fully worked in the field; thus, it is too early to tell what the ultimate response rate (and the treatment-control differential in response rates) will be, given the study’s current plans for the fielding effort. However, it is expected that the final response rates overall and for the treatment and control groups will be higher (for example, the response rate for the first completed release—i.e., the first group of respondents eligible for the survey and for whom the contractor has completed outreach about the survey—is 69 percent). Nevertheless, the lower-than-expected response rates so far suggest that a change to the survey plans is warranted to increase the likelihood that the surveys can achieve the target 80-percent response rate.

To improve response rates, the initial study plan has been adjusted to offer a higher incentive of $45 to the remaining 18-month survey sample members and the entire 36-month survey sample. The survey fielding effort involves the release of sample members in batches, called “releases.” So far, the first 7 of the 23 releases planned for the 18-month survey have occurred; these 7 releases represent about 34 percent of the full study sample. For sample members who are currently in the field, we will continue offering the $25 incentive through August, until the new $45 incentive is offered to all remaining sample members starting in September 2013 (pending OMB approval). Sample members who are released in September (release number 8) will receive notification of the increased incentive in the advance letter they receive prior to being contacted by phone to complete the interview. At that time, all remaining sample members who have not yet completed an interview (but have previously been “released”) will also be informed of the new incentive via the regular follow-up contact mailings and field locator scripts. This approach ensures that all sample members who are still active as of September 2013 will become eligible for the increased incentive at the same time. In addition to enhancing operational efficiency, this approach enables the increased incentive to appeal to both newly released sample members and remaining nonrespondents. Moreover, this incentive amount will be offered to both treatment and control group members, with no distinction between the two groups; thus, the higher incentive has the potential to reduce the likelihood of a problematic treatment-control differential in response rates at the end of the survey fielding period (as higher overall response rates lead to less concern about non-response bias).

The additional incentive cost for the two surveys is estimated at $70,536, but this cost will not increase the total evaluation study budget because it will be counterbalanced by cost savings from smaller-than-expected total study sample sizes and greater efficiency in the fielding effort due to the higher incentive amount.

As noted above, the literature on the effectiveness of incentives on response rates in phone surveys generally finds that increases in monetary incentives improve response rates. However, the relationship is not strictly linear, as there is a declining effect on response rates as the dollar amount of the incentive increases (Gelman, Stevens, and Chan, 2002). For example, in the National Evaluation of Trade Adjustment Assistance (TAA) Program, which serves a similar population of unemployed and/or dislocated workers (although mostly in the manufacturing field), an incentive experiment was carried out for different categories of sample members. In the TAA study, which was initially approved by OMB to offer a $25 incentive to all sample members, the incentive was increased to $50 and to $75 for some sample members and kept at $25 for others. Both the $50 and $75 incentives significantly increased response rates compared to sample members who only received $25. The results from this experiment showed that the higher incentive increased the overall response rate for nonrespondents from 41 percent to 55 percent. Among new sample members who were randomly assigned to the $50 incentive group compared to the $25 incentive group, the response rate was 53 percent and 49 percent, respectively.

Ultimately, the $45 incentive amount is expected to strike a good balance between encouraging cooperation with and responsiveness to the survey, on the one hand, and using project resources efficiently without being coercive to study participants, on the other.

It is expected that the $45 incentive payment will motivate sample members to participate in the survey, and it may influence their decision to provide updated contact information during the 18 months between the first and second follow-up surveys; thus this incentive payment offered at the 18-month interview is also expected to help reduce the locating effort at 36 months. Additionally, the increased incentive cost will be largely offset by reduced staff time spent on field locating and calling participants. Furthermore, we expect that offering $45, rather than the initially-planned $25, for the 36-month survey fielding effort will have a direct, beneficial effect on response rates for that survey as well.

To leverage fully the benefits of both rounds of payments at 18 and 36 months, the incentive payments will be mentioned when contact is established with the participants and attempts are made to gain their cooperation. For the 18-month survey, this process is expected to start in September 2013 for all remaining sample members. For the 36-month survey, this process will start for all sample releases beginning with the first release in August 2014.

10. Confidentiality

Abt Associates and Mathematica have well-established safeguards to ensure the privacy and protection of all data collected from study participants. This includes policies and procedures related to privacy, physical and technical safeguards, and approaches to the treatment of personally identifiable information (PII).

Privacy Policy

Abt and Mathematica are committed to compliance with federal, state, and DOL data security requirements, and will take steps to ensure that all study staff comply with relevant policies related to secure data collection, data storage and access, and data dissemination and analysis. Both contractors have security policies that meet the legal requirements of the Freedom of Information Act and related regulations to ensure and maintain the privacy of data relating to program participants.

Privacy Safeguards

All interviewers as well as regular contractor staff are required to sign a company data security pledge as a condition of employment. The data security agreement covers all data employees use in the course of their normal duties. Employees who break this agreement face immediate dismissal and possible legal action. Beyond this, all staff working with PII will sign data security agreements. Hard copies of documents will be kept in securely locked file cabinets, electronic data files will be encrypted, and access to study files will be strictly limited to study staff who have been identified by the project director as having a need to view those files. Personal computers of study staff will be locked when not in use. Respondents will be given written assurance in all advance materials and verbal reminders during the survey administration that the information they provide will be kept private and will not be disclosed to anyone but the researchers authorized to conduct the study, except as otherwise required by law. No information will be reported by the contractor in any way that permits linkage to individual respondents, unless required by law, and the information will be destroyed once the final study report has been released.

11. Questions of a Sensitive Nature

The follow-up interviews will collect information from participants who have consented to participate in this evaluation. Information will be collected on services received through the program in areas such as case management, assessments, training or educational courses, and supportive services; any credentials earned; the details of jobs held since random assignment; income and the use of public benefits such as Temporary Assistance for Needy Families (TANF), the Supplemental Nutrition Assistance Program (SNAP), Supplemental Security Income (SSI), and UI; opinions about work; experiences with the services received through the program; and criminal activity. This type of information is generally collected as part of enrollment in government-funded training programs and is therefore not considered sensitive. However, depending on an individual’s particular circumstances, any question could be perceived as sensitive. Evaluation team interviewers are well trained to show sensitivity while remaining impartial. Also, if a respondent refuses or appears reluctant to answer a question that asks for specific financial information, such as the amount earned in a given period, numeric ranges are generally offered as an alternative. Finally, to encourage reporting, reluctant respondents are reminded that their answers will be kept private.

Listed below are items that may be considered sensitive and the justification for including them: 

  • Information on employment history; participation in TANF, SNAP, and other government programs; household income; and work-related barriers is needed to conduct analyses of employment outcomes, income support program participation, and household income/poverty. The outcomes, taken together, provide a comprehensive picture of sample members’ economic self-sufficiency throughout the follow-up period. Information on work-related barriers also provides important insight on whether the training programs can reduce barriers to work. Such understanding will facilitate the design of programs that include appropriate strategies for overcoming those barriers. Information about sample members’ involvement in the criminal justice system during the follow-up period is to be collected at the one study site that is serving a large number of individuals who had been involved in the justice system prior to study enrollment. This information also will provide insights about sample members’ availability for participation in the legitimate labor market and their integration into productive society.

  • Updated participant contact information, collected during the first follow-up interview at 18 months after random assignment, is essential for re-establishing contact for the 36-month follow-up survey. The name and contact information of up to three individuals who know the participant are collected for use in the event that the contact information the study team has for the participant from baseline becomes outdated during the period between follow-up administrations.

  • Information on date of birth, address, and telephone numbers is needed to identify and contact participants. This information was collected at baseline, and remains part of the respondent’s information. Except in instances in which errors are found, there will be no need to collect this information again. However, during follow-up survey administration, respondents will be asked to confirm this information.

12. Hour Burden of the Collection of Information

The time burden for administering the follow-up surveys is estimated to be 40 minutes for the average interview in each wave of data collection (18 months and 36 months). The estimated total hour burden on study participants of collecting the 18- and 36-month follow-up surveys is 2,918 hours (Table A.2). Based on a targeted response rate of 80 percent, an estimated 2,122 respondents (comprising both treatment and control group members) are expected to complete each of the two follow-up surveys, and each interview is estimated to take 40 minutes on average. The 530 sample members per wave (20 percent) who do not complete the survey may still be contacted by telephone interviewers; therefore, 5 minutes per non-respondent is used as an estimate of their burden. Hence, the total time for sample members across the two surveys is (2,122 × 40 × 2) + (530 × 5 × 2) minutes, which, when divided by 60, equals approximately 2,918 hours.
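As a point of reference, the burden total follows directly from the figures above:

$$ (2{,}122 \times 40 \times 2) + (530 \times 5 \times 2) = 169{,}760 + 5{,}300 = 175{,}060 \text{ minutes} \approx 2{,}918 \text{ hours} $$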

Table A.2. Burden Estimates for Study Participants

Respondents (Follow-up Surveys)                   Number of Instances    Frequency of    Average Time       Burden
                                                  of Collection          Collection      Per Respondent     (Hours)
18-month follow-up respondents                    2,122                  Once            40 minutes         1,415
18-month follow-up contacted non-respondents      530                    Once            5 minutes a        44
36-month follow-up respondents                    2,122                  Once            40 minutes         1,415
36-month follow-up contacted non-respondents      530                    Once            5 minutes          44
Total                                             5,304                  --              --                 2,918



a The 20 percent who do not complete the survey may still be contacted by telephone interviewers; therefore, five minutes per non-respondent is used as an estimate of their burden.


The estimated total cost burden for the data collection is presented below in Table A.3. The total estimated cost for these data collection activities is $56,668. The average hourly wage in that table, $19.42, is based on the Bureau of Labor Statistics (BLS) average hourly earnings of production and nonsupervisory employees on private, nonfarm payrolls (May 2011 Employment Situation, table B-8, Current Employment Statistics, BLS, U.S. DOL). Respondents, by nature of their eligibility for the programs in which they are participating, often are unemployed or employed at a low wage. Although the follow-up surveys are conducted well into or after completion of the targeted programs, the wage used for this calculation is likely an overestimate, so the projected cost shown here is likely higher than the actual cost incurred.
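As a check, the total cost figure is the product of the total burden hours and the assumed hourly wage:

$$ 2{,}918 \text{ hours} \times \$19.42 \text{ per hour} = \$56{,}667.56 \approx \$56{,}668 $$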

Table A.3. Total Cost Estimates for Follow-up Surveys

Data Collection Activity                        Total Burden Hours    Average Hourly Wage    Total Burden Cost
Respondents and contacted non-respondents       2,918                 $19.42                 $56,668




13. Estimated Annualized Respondent Capital and Maintenance Costs

There are no direct costs to respondents, and respondents will incur no start-up or ongoing financial costs. The only cost to respondents is the time spent being interviewed, which is captured in the burden estimates provided in Item 12.

14. Estimated Annualized Cost to the Federal Government

Table A.4 presents the total cost to the federal government of engaging the Abt-Mathematica team to conduct the GJ-HC Impact Evaluation over a five-year period. To annualize the cost, the five-year total ($7,992,852) is divided by 5, yielding an average annual cost of approximately $1,598,570. It is important to note that these figures are total costs for the entire evaluation and not just for the follow-up surveys.
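The annualized figure is the five-year total divided by five, with the annual amounts in Table A.4 rounded to the nearest dollar:

$$ \$7{,}992{,}852 \div 5 = \$1{,}598{,}570.40 \approx \$1{,}598{,}570 \text{ per year} $$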

Table A.4. Annual Costs for Entire Green Jobs and Health Care Impact Evaluation

Year     Dates        Cost
1        2010-2011    $1,598,570
2        2011-2012    $1,598,570
3        2012-2013    $1,598,570
4        2013-2014    $1,598,570
5        2014-2015    $1,598,570
Total                 $7,992,850


Please note that no annualized costs for this final segment of the evaluation were entered in ROCIS with this submission because the total costs were already reflected in segments previously approved by OMB.


15. Changes in Burden

The changes in burden are due to the adjustments requested in this non-substantive change request to improve the response rate.

16. Publication Plans and Project Schedule

The first round of follow-up surveys will begin in early 2013 with the second round beginning 18 months later. The timeline for reporting survey findings is given in Table A.5.

Table A.5. Study Timeline

Time           Activity
Summer 2011    Baseline data collection begins
Winter 2013    Baseline data collection ends
Winter 2013    First round of follow-up surveys begins
Summer 2014    Second round of follow-up surveys begins; first round of follow-up survey data collection ends
Spring 2015    Interim report published based on first round (18-month) survey data
Summer 2015    Second round of follow-up survey data collection ends
Fall 2016      Final report published based on 36-month survey data


17. Reasons for Not Displaying Expiration Date of OMB Approval

The expiration date for OMB approval will be displayed on all forms associated with this data collection.

18. Exception to the Certification Statement

Exception to the certification statement is not requested for the data collection.



1 Treatment group members also will be able to access other community-based training programs or services not offered by the grant.

