OMB Control Number: 1205-0518

Jobs and Innovation Accelerator Challenge Grants Evaluation

Part B: Collection of Information Involving Statistical Methods

The U.S. Department of Labor (DOL), Employment and Training Administration (ETA) contracted with Mathematica Policy Research and the W.E. Upjohn Institute for Employment Research to conduct an evaluation of the Jobs and Innovation Accelerator Challenge (JIAC) grants. In partnership with other federal agencies, ETA awarded two rounds of grants to 30 self-identified regional industry clusters that have high growth potential. The main objective of the evaluation is to build a better understanding of how multiple federal and regional agencies worked together on these grant initiatives, how the ETA grants are being used, the training and employment-related outcomes that the clusters are able to achieve, lessons learned through implementation, and plans for sustainability. Part A of this submission includes additional information about the logic model for the initiative and the research questions for the evaluation.

This package requests clearance for two data collection efforts to be conducted as part of the JIAC Evaluation:

  1. Site visit interviews. Two rounds of in-person visits to a subset of clusters will provide information on implementation of the JIAC initiative. The first round will involve nine clusters. The second round will involve a return visit to three of the nine clusters that participated in round 1 visits. The evaluation team will conduct interviews with cluster management staff, activity leaders, frontline staff, participants, the local workforce investment board, employer groups, and local economic development agencies. Attachments A through E include protocols for these interviews.

  2. Survey of grantees and partners. The evaluation team will administer a survey to up to 330 individuals from partner organizations (one cluster manager and representatives from up to 10 partner agencies in each of the 30 clusters). The survey will focus on cluster organization, communication, funding sufficiency, the types and usefulness of federal support, and program management and sustainability. The survey instruments are included in Attachments F, G, H, and I.

Before the survey is fielded, a template (Attachment J) will be used to collect information from the ETA funding stream administrators to assemble the sampling frame. To pretest the planned process and the time required, the evaluation team took the following steps. The team reviewed grant proposals and other documents submitted by the 30 clusters to generate initial listings of organizations partnering with the grantees. These initial listings ranged from 4 to 36 partners, with a median of about 14. The evaluation team then asked the cluster administrators and grant administrators in two pretest clusters to review, correct, and update these listings, and to provide additional information on partners' involvement levels and current contact information. Both pretest clusters identified different sets of partners than those included in their proposals. Three types of changes arose. First, respondents designated a larger number of partners as having low involvement than the proposals implied. Second, some of the partners mentioned in the proposals were ultimately not involved in grant activities. Finally, respondents did not have contact information for some partners and had not worked with them; they were nonetheless hesitant to delete such partners because they were not sure whether other cluster members had engaged these organizations on the grant. Given these responses, the study team confirmed the importance of working with the clusters to develop complete and accurate lists, rather than relying only on the partners listed in the proposal documents, before proceeding with survey sample selection.

In pretesting the survey instruments with some of the low and medium involvement partners, the evaluation team found that these partners were sometimes unaware of the cluster or the grant even though they were performing activities funded by the grant. These respondents indicated that they were engaging in the activities simply because an organization in the cluster had asked or contracted with them to do so. Such organizations were unable to complete the survey meaningfully because they were not aware of the topics it covers. Based on this feedback, the evaluation team plans to exclude from the sampling frame those partners listed on the contact sheet as having low or no involvement. The study will also exclude those partners for whom no contact information is provided. Although the size of the sampling frame for each site cannot be known precisely until information is collected from the grant administrators after receipt of OMB approval, it is likely to be somewhat smaller than the list of partners identified through the initial review of grantee documents.

1. Respondent Universe and Sampling

The evaluation will collect in-depth qualitative data from respondents in a subset of the 30 grant clusters and survey data from respondents across all 30 clusters. Details follow on the selection of clusters for site visits and the selection of respondents within the 30 clusters for the survey.

a. Selecting Study Sites for Site Visits

The first round of site visits for this evaluation will include visits to 9 of the 30 grantee clusters. The evaluation contractor anticipates visiting approximately 6 JIAC and 3 advanced manufacturing (AM)-JIAC grantees, but this distribution may shift based on input from federal staff. The second round will include 3 of the 9 grantee clusters visited during the first round. It may include 2 JIAC and 1 AM-JIAC grantee clusters.

The evaluation team will conduct site selection based on information on grantee clusters from the document review and input from federal agency staff. The evaluation contractor will solicit input on site selection from up to 30 federal staff during summer 2014. The respondents will include the regional ETA staff who are the federal project officers (FPOs) for the ETA-funded portions of the grants, individuals from the national offices of the funding partners, and possibly staff from the nonfunding federal partners. After completing the interview calls, the evaluation contractor will (1) compile respondents’ site visit recommendations in a spreadsheet along with their rationale for each suggestion, (2) review this information in tandem with information from the document review and compile a list of possible sites, (3) consult with ETA about these site suggestions, and (4) derive a consensus list of nine sites (and two alternates) to visit.

The input provided by federal partners and FPOs will play a large role in determining how sites are selected. After compiling the list of suggested sites, the evaluation team will coordinate with ETA to finalize a list that ensures diversity across the following six key dimensions:

  • Grantee and key partner types: the study will seek sites that collectively reflect variation in the types of organizations playing key roles, such as local workforce investment areas, colleges or universities, public agencies, and nonprofit organizations.

  • Maturity of grantee partnerships before the grant application: the study will include a mix of sites, some with robust partnerships and a long history of collaboration prior to the grant application and others where partnership development is a more recent phenomenon.

  • Geographic location and type: the study will ideally include sites in each of the six DOL regions, as well as a mix of urban, rural, and blended communities.

  • Industry and occupational focus of the cluster: we will select sites in which cluster activities focus on diverse industries and occupations, making sure to include those of particular interest to DOL.

  • Targeted populations: we will select sites that vary in the target populations served by grant activities, including a cross-section of clusters serving unemployed workers, incumbent workers, or underrepresented groups or communities.

  • Types of training offered: we will select sites that vary in the types of trainings offered, including on-the-job training, classroom occupational training, customized training, and contextualized learning.

ETA priorities will determine site selection for round 2 site visits. For example, if ETA decides that additional information on practices for sustaining cluster efforts is a priority, sites that have developed formal sustainability plans or those that actively sought funding for continued cluster efforts will be selected. If the priority is to examine promising practices, it might make more sense to choose a set of grantees with demonstrated success along key dimensions of interest to the agency. As such, selection of round 2 sites will occur after completion of the round 1 visits so that the information gained can answer the questions of greatest interest to ETA.

b. Survey Universe and Sampling

About 330 representatives from across the 30 JIAC and AM-JIAC grantee clusters will be asked to participate in the online survey. This includes a total of up to 11 individuals within each cluster.

The evaluation contractor will use a two-step approach to develop the sampling frame for the survey. First, the evaluator used grantees’ statements of work, grant proposals, progress reports, and interviews with federal staff to develop an initial list of the cluster manager, the funding stream administrators, and partners in each cluster. Second, after receiving Office of Management and Budget (OMB) clearance, the evaluator will send these lists to each ETA funding stream administrator, asking him or her to verify that the list of organizations is complete and accurate. Each respondent will be instructed to add any organizations that are not represented on the list and update or add any missing or inaccurate contact information, contacting the grantees for listings of their partners in order to do so. Respondents will also be asked to populate a series of columns for each participating organization. These include yes/no responses for whether the organization received ETA, U.S. Small Business Administration (SBA), or Economic Development Administration (EDA) grant funds for JIAC activities (or ETA, SBA, EDA, U.S. Department of Energy [DOE], or National Institute of Standards and Technology, Hollings Manufacturing Extension Partnership [NIST-MEP] funds for AM-JIAC activities); whether the organization has maintained a high, medium, or low level of participation in grant-related cluster activities; and, for any organizations listed in the initial materials but deleted, why they were ultimately not involved in the project. An example of a completed sheet is included as Attachment J.

Based on the pretest, the sampling frame is expected to consist of about 400 partnering organizations, slightly fewer than the initial listing. This small reduction reflects the finding that some partners listed in the application ultimately had little or no involvement in the cluster or were listed multiple times under varying names. The evaluation team anticipates that ETA funding stream administrators will be able to provide all the required information to refine the sampling frame and to verify the lists with a high degree of accuracy. However, researchers will conduct follow-up calls and web searches to gather information if the contact lists have significant deficiencies. If contact information for a partner organization cannot be located or is known to be incorrect and corrected information cannot be found (for example, if the telephone number is no longer in service and/or the website has been taken down), the evaluator will assume the partner is no longer operating and exclude it from the sampling frame. Such exclusions should be rare because the updates to the sampling frame will be gathered only weeks before the survey is fielded, and the team anticipates that ETA funding stream administrators will make every effort to provide updated lists.

The evaluation team will sample the survey respondents using these verified sampling frames, following the steps depicted in Figure B.1. The team will begin by sampling the cluster manager within each cluster with certainty (a probability of 1.0). In cases in which the cluster manager and the ETA funding stream administrator are not the same person, the ETA funding stream administrator will be sampled with certainty as well. A sample of staff at cluster partners will make up the remaining survey respondents. Any partners rated as having low or no involvement, or for whom no contact information was provided, will be dropped from the sampling frame. If a cluster has fewer than 9 or 10 partners (depending on whether the cluster manager and ETA funding stream administrator are from the same organization), all of them will be selected for the survey and there will be no need for sampling. If there are more, the evaluator will use the following sampling strategy to select organizations as survey respondents.

Figure B.1. The Survey Sample Selection Process


Sampling will begin with partners receiving ETA grant funds. This will ensure that the survey captures the experiences and opinions of those organizations most familiar with workforce-related activities. If three or fewer partners receive ETA funds (excluding the ETA funding stream administrator), all of them will be selected with certainty. If there are more than three, the three organizations with the highest reported levels of involvement will be selected, or a random sample will be selected if they are equally involved.
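To make this selection rule concrete, the following minimal sketch expresses it in Python. It is an illustrative stand-in only (the study’s selections will be carried out in SAS, as described below), and the field names (`eta_funded`, `involvement`) are hypothetical labels for information taken from the verified contact sheets.

```python
import random

def select_eta_funded_partners(partners, n_select=3, seed=40260):
    """Select up to three ETA-funded partners, favoring higher involvement.

    `partners` is a list of dicts with hypothetical keys:
    'name', 'eta_funded' (bool), and 'involvement' ('high' or 'medium').
    """
    rng = random.Random(seed)
    eta_funded = [p for p in partners if p["eta_funded"]]
    if len(eta_funded) <= n_select:
        return eta_funded                # three or fewer: all selected with certainty
    rank = {"high": 0, "medium": 1}
    rng.shuffle(eta_funded)              # random order provides the random tie-break
    eta_funded.sort(key=lambda p: rank[p["involvement"]])  # stable sort keeps ties shuffled
    return eta_funded[:n_select]
```

Because Python’s sort is stable, shuffling before sorting by involvement level selects the most involved partners first and breaks ties among equally involved partners at random, mirroring the rule described above.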

Next, the evaluation team will select the remaining respondents from the rest of the partner organizations, including any remaining ETA grant recipients, using probability proportional to size (PPS) selection. The extent of involvement in implementing grant activities will serve as the measure of size (MOS): highly active partners will be assigned a MOS of 0.59, and partners with medium activity will be assigned a MOS of 0.41. The team will use the SURVEYSELECT procedure in SAS to select the sample using a systematic sequential PPS method. Any partner with a size larger than the selection interval will be selected with certainty; the team will determine all of the certainty selections first and then select the balance of the sample from the remaining partners.
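For illustration, the sketch below implements a simplified systematic sequential PPS selection using the measures of size stated above (0.59 for high involvement, 0.41 for medium), including the rule that any partner whose size is at least as large as the selection interval is taken with certainty before the balance of the sample is drawn. This is a minimal Python illustration of the logic only, not the SAS SURVEYSELECT step the study will actually run, and the data fields are hypothetical.

```python
import random

MOS = {"high": 0.59, "medium": 0.41}  # measures of size stated in the text

def systematic_pps(partners, n_select, seed=40260):
    """Systematic sequential PPS sample of `n_select` partners.

    `partners` is a list of dicts with hypothetical keys 'name' and
    'involvement' ('high' or 'medium').
    """
    rng = random.Random(seed)
    remaining = list(partners)
    certainties = []

    # Step 1: repeatedly pull out certainty selections (size >= selection
    # interval), recomputing the interval after each pass.
    while remaining and len(certainties) < n_select:
        n_rest = n_select - len(certainties)
        interval = sum(MOS[p["involvement"]] for p in remaining) / n_rest
        big = [p for p in remaining if MOS[p["involvement"]] >= interval]
        if not big:
            break
        certainties.extend(big)
        big_ids = {id(p) for p in big}
        remaining = [p for p in remaining if id(p) not in big_ids]

    n_rest = n_select - len(certainties)
    if n_rest <= 0 or not remaining:
        return certainties[:n_select]

    # Step 2: systematic selection over the remaining partners, using a
    # random start and a fixed skip interval on the cumulative size scale.
    start = rng.uniform(0, interval)
    selected, cumulative, threshold = [], 0.0, start
    for p in remaining:
        cumulative += MOS[p["involvement"]]
        if cumulative > threshold and len(selected) < n_rest:
            selected.append(p)
            threshold += interval
    return certainties + selected
```

In this sketch the selection interval equals the total remaining size divided by the number of slots left after certainty selections, so the systematic pass yields exactly that many additional selections.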

No unusual problems requiring specialized sampling procedures have been identified in the course of the pretest.

2. Analysis Methods and Degree of Accuracy

The mixed-methods design of this study relies on qualitative and quantitative analysis methods to fully explore the data gathered and extract the useful and interesting details that the data hold. The evaluator’s plan is to organize the analyses according to the research topics identified in Table B.1 and discussed in more detail in Part A of this supporting statement. The quantitative analysis will provide descriptive data on all 30 clusters and aims to answer questions related to what and how many. The qualitative analysis will provide richer detail on activities at the federal level and at the subset of clusters involved in site visits. It aims to answer the why and how questions and to identify promising practices and lessons learned. The following two subsections present the methods used for the site visit analysis and analysis of survey data, respectively.

a. Analysis of Site Visit Data

During the site visits, the evaluation contractor will use semistructured interviews to collect information from the following types of respondents: cluster management staff; activity leaders; frontline staff; participants; and the workforce investment board, employer groups, or local economic development agencies. The evaluation contractor will also conduct other on-site activities, such as observation of grant activities. The contractor will use a two-step process to analyze these data.

The first stage—a within-site implementation analysis—will involve preparing summary narratives for each of the demonstration sites. The study team will use these narratives to document the topics noted earlier. Using standard templates, the evaluation contractor will develop detailed internal notes from all site visit interviews to feed into the analysis. This common organizational framework will help cover all topics in a consistent and comprehensive fashion.


Table B.1. Study Topics Covered by Each Data Source

Column key: Document Review (GA = Grant Agreements; PR = Progress Reports); Federal Staff Interviews (DS = Design Stage; SS = Site Selection Stage); Site Visit Interviews (CA = Cluster Administrators; AL = Activity Leaders; FS = Frontline Staff; PT = Participants; LW = LWIB/Employer Groups); Survey (CM = Cluster Manager and ETA Grantee Manager; GP = Grantee Partners). An X indicates that the data source covers the topic; a dash indicates that it does not.

Topics and Subtopics | GA | PR | DS | SS | CA | AL | FS | PT | LW | CM | GP

I. Role of Multiagency Collaboration
A. History and Goals of Grants | X | - | X | X | - | - | - | - | - | - | -
B. Federal Partner Roles, Organizational Structures, and Governance | - | - | X | X | - | - | - | - | - | - | -
C. Federal Support for Grantees | - | - | X | X | X | X | - | - | - | X | -
D. Grant Monitoring | X | X | X | X | X | - | - | - | - | X | -

II. Development of Regional Clusters, Programs, and Partnerships
A. Overview of Grants and Clusters | X | X | - | - | - | - | - | - | - | - | -
B. History of Collaboration and Federal Grant Receipt | X | - | - | - | X | X | X | - | X | X | X
C. Grant Funding | X | X | - | - | X | X | X | - | - | - | -
D. Grantee and Partner Engagement, Decision Making, Communication | X | - | - | - | X | X | X | - | X | X | X
E. Types of Grant Activities | X | X | - | - | X | X | X | X | X | X | X
F. Experience with Grant | - | - | - | - | X | X | X | - | X | X | -

III. Project Outcomes
A. Metrics Used for Project Output and Outcome Measurement | X | X | - | X | X | X | X | - | - | - | -
B. Output and Outcome Monitoring | - | - | - | - | X | X | X | - | - | - | -
C. Assessment of Quality and Usefulness of Monitoring Data | - | - | - | X | X | X | - | - | - | X | -
D. Beneficiaries of Grant Activities | X | X | - | - | X | X | X | X | X | X | X
E. Rate of Outcome Achievement | X | X | - | - | X | X | X | - | - | - | -

IV. Program Management and Sustainability
A. Cluster Features | X | - | - | X | X | X | - | - | X | X | -
B. Cluster Agility | - | - | - | X | X | X | - | - | X | X | -
C. Cluster Dependence on Outside TA | - | - | - | X | X | X | X | - | - | X | -
D. Matching Funds/Leveraged Funds | X | - | - | X | X | X | - | - | - | X | -
E. Plans for Sustainability | - | - | - | - | X | X | X | - | - | X | -

V. Program Replicability and Lessons Learned
A. Best Practices | - | - | - | X | X | X | - | - | - | - | -
B. Replicability of or Uniqueness of Best Practices | - | - | - | X | X | X | X | - | X | - | -
C. Lessons Learned | - | - | - | X | X | X | X | - | X | - | -
ETA = U.S. Department of Labor, Employment and Training Administration; LWIB = Local Workforce Investment Board; TA = technical assistance.

The second stage will draw on this narrative information as raw data for a cross-site analysis. One of the greatest challenges in this analysis will be dealing with the volume and complexity of information. The evaluation contractor will use ATLAS.ti, qualitative data-coding software, to help organize the information. The contractor will prepare a systematic coding scheme, which senior staff will refine, to code raw data by theme, site, and type of respondent, and then apply the codes to the detailed notes. The coding will enable the evaluation contractor to organize the notes by key topics and facilitate detailed analysis. For example, queries can pull all information related to a single topic, such as sustainability efforts, for examination across all clusters.
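The query capability described above can be illustrated with a minimal sketch. The structure below is a toy Python stand-in for coded interview excerpts, not ATLAS.ti’s interface, and the theme names and excerpt text are invented for illustration.

```python
# Each coded excerpt carries theme, site, and respondent-type tags.
excerpts = [
    {"theme": "sustainability", "site": "Cluster A", "respondent": "cluster manager",
     "text": "We are pursuing state funds to continue the training program."},
    {"theme": "grant monitoring", "site": "Cluster B", "respondent": "activity leader",
     "text": "Quarterly reports track enrollments and completions."},
    {"theme": "sustainability", "site": "Cluster B", "respondent": "frontline staff",
     "text": "Employers have agreed to fund future cohorts."},
]

def pull_theme(theme):
    """Return every excerpt coded with the given theme, across all sites."""
    return [e for e in excerpts if e["theme"] == theme]

for excerpt in pull_theme("sustainability"):
    print(excerpt["site"], "|", excerpt["respondent"], "|", excerpt["text"])
```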

To lay the groundwork for the second stage of analysis, the study team will adopt an iterative process of distilling themes from the qualitative data, drawing not only on respondents’ perspectives about their own best practices as captured in the interview data, but also the study team’s insights based on its understanding of practices across multiple sites. Following each round of site visits, the team will meet as often as necessary to reach a consensus on the practices that seemed to warrant special attention, lessons learned, and challenges overcome.

Using the coded site visit data together with selected data drawn from document reviews, the analysis team will conduct a cross-site analysis describing common elements and differences across sites in the development of regional clusters, grant-funded programs, and partnerships; the conduct of JIAC/AM-JIAC ETA-funded activities; grantees’ perceptions of the quality and adequacy of multiagency federal support; perceived progress toward outcomes and mechanisms for monitoring output and outcome metrics; steps taken toward program sustainability; and lessons learned. The analysis will highlight common phenomena as well as unique or interesting cluster experiences. To the extent possible, the evaluator will document the number of clusters that reported each type of experience and the types of organizations and staff within those clusters that contributed perspectives on the topic. When the viewpoints of different respondents within a single cluster differ, researchers will triangulate the data across as many sources as possible to understand why perspectives might differ. This information will provide a rich understanding of grantees’ cluster experiences.

An important part of ensuring the accuracy of the conclusions derived from the site visit data will be ensuring reliable data collection. As described in Section B.3.a, strategies to ensure the reliability and completeness of data include using a flexible approach to scheduling visits and assuring respondents that the information they provide will remain private. In addition, using structured, predetermined protocols to collect the data; thoroughly training the site visitors in the use of those protocols and in the preparation of systematic summary narratives that cover all key topics; and having senior staff review the summary narratives on an ongoing basis during the data collection period will help achieve a high degree of accuracy. Because most questions will be asked of more than one respondent during a visit, the analysis will allow for comparisons and triangulation of the data so that discrepancies among respondents can be interpreted.

b. Analysis of Survey Data

The analysis of the survey data will include simple descriptive statistics, frequencies, distributions, and cross-tabulations. The evaluator will also explore the usefulness of conducting subgroup analyses based on cluster characteristics—such as type of sector, maturity, and amount of matching funding—and by characteristics of the respondents, such as cluster manager, ETA funding stream administrator, or partner. Key variables from the cluster manager and ETA funding stream administrator surveys include those on the types of organizations in the cluster, existence of the cluster prior to the grant, growth in the strength of partnerships, likelihood of partnership sustainability, use of an iterative work plan, strategies for including underrepresented populations, federal support, use of data, and perception of outcomes. Key variables from the partner survey include the type of organization, collaboration as a strategy, existence of the cluster prior to the grant, involvement in planning, grant activities undertaken, use of data, and perception of outcomes. Given that a large proportion of the pool of partners will be represented, subgroup analyses by the type of respondent will likely provide sufficient insights into whether highly involved respondents have different perspectives than less involved partners, and weighting responses by the activity level of the organization will be unnecessary. Survey data analysis and any statistical tests will be conducted using Stata software, which contains survey commands that correctly estimate variances. Because no treatment is being tested and the data analyses are purely descriptive, minimum substantively significant effect sizes are not applicable.
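To illustrate the kinds of tabulations planned, the minimal sketch below computes frequencies and a cross-tabulation by respondent type. It is written in Python with pandas purely for illustration (the analysis itself will be conducted in Stata, as noted above), and the variable names and values are hypothetical.

```python
import pandas as pd

# Hypothetical extract of survey responses; variable names are illustrative.
responses = pd.DataFrame({
    "respondent_type": ["cluster_manager", "eta_administrator", "partner", "partner"],
    "cluster_existed_before_grant": ["yes", "yes", "no", "yes"],
})

# Simple frequencies and percentage distribution for a key item.
counts = responses["cluster_existed_before_grant"].value_counts()
percentages = responses["cluster_existed_before_grant"].value_counts(normalize=True) * 100

# Cross-tabulation of the item by respondent type (a simple subgroup analysis).
crosstab = pd.crosstab(responses["respondent_type"],
                       responses["cluster_existed_before_grant"],
                       normalize="index") * 100

print(counts, percentages, crosstab, sep="\n\n")
```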

Missing data. Despite best efforts to encourage full response to the survey instrument, respondents will likely leave some items missing or incomplete. During data cleaning, the evaluator will look for unusual patterns of item nonresponse. If item nonresponse is less than 10 percent, the study report will simply indicate the proportion missing. If it is greater than 10 percent, the evaluation team will examine the types of respondents that did not respond and determine whether the data item suffers from nonresponse bias. Some items of less significance could be dropped from the analysis. Others could be presented in reports, but the study report will provide clear information on the nonresponse issue and describe any cautions readers should take in interpreting the results. Information on whether a respondent organization was receiving grant funds, the type of grant funds, the level of the organization’s involvement in grant activities, and possibly the type of organization will be available in the sampling frame and could be used for a nonresponse bias analysis. The desire to gather additional information while constructing the sampling frame that could support a nonresponse bias analysis must be tempered against the additional burden on respondents, which could lower response rates.
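As a concrete illustration of this item nonresponse check, the sketch below flags any item missing for more than 10 percent of respondents and tabulates the missingness by a variable available from the sampling frame. It is a minimal Python sketch under the assumption that responses sit in a data frame with missing items coded as NaN; the variable names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical response data; NaN marks item nonresponse.
data = pd.DataFrame({
    "received_eta_funds": ["yes", "no", np.nan, "yes"],
    "partnership_strength": [4, np.nan, np.nan, 5],
    "involvement_level": ["high", "medium", "high", "medium"],  # carried over from the frame
})

# Item nonresponse rate for each survey item.
missing_rates = data.drop(columns="involvement_level").isna().mean()

for item, rate in missing_rates.items():
    if rate <= 0.10:
        print(f"{item}: {rate:.0%} missing; report the proportion missing")
    else:
        # Compare who skipped the item using frame variables (here, involvement
        # level) to gauge the potential for nonresponse bias.
        by_group = data.groupby("involvement_level")[item].apply(lambda s: s.isna().mean())
        print(f"{item}: {rate:.0%} missing; examine missingness by subgroup:")
        print(by_group)
```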

Reconciling contradictory responses. As with any survey of multiple respondents in a single site, the evaluator will have to reconcile findings if different respondents within a cluster provide seemingly contradictory responses to some survey items. Few questions ask the same factual information of multiple respondents, but some contradictory responses may occur because of the varying involvement, experiences, and interactions respondents had during the grant period. For example, technical assistance might receive different quality ratings from organizations or respondents with divergent levels of experience and need for assistance. In cases of contradictory responses within a single cluster, the evaluation team will triangulate the data across as many sources as possible to understand why perspectives differ. If it appears that an error in completing the survey caused the discrepancy, the evaluator might follow up with the respondent to resolve the issue and explore whether other responses are affected.

3. Methods to Maximize Response Rates and Data Accuracy

Even though all JIAC and AM-JIAC clusters have agreed to participate in evaluation efforts as a condition of receiving the grant, the evaluator will still use several methods to maximize response and data reliability. This section discusses planned strategies for each data collection effort that is part of this request for approval.

a. Site Visit Data Collection

The plan to collect study data during site visits will ensure that response rates are high and that the data are reliable.

Response rates. Site visitors will begin working with staff at grantee clusters well before each visit to ensure that its timing is convenient. The site visits will take place over a period of several months, which also will provide flexibility in timing. Because the visits will involve several interviews and activities each day, specific interviews and activities can be scheduled flexibly to accommodate the particular needs of respondents. Should scheduling conflicts prevent a meeting with all respondents while visitors are on site, the contractor will conduct follow-up telephone calls. Because of this flexibility in scheduling, the ability to follow up by telephone, and the likelihood that several staff members at each site will qualify as potential respondents, a response rate of over 90 percent to site visit interviews is expected.

Data reliability. The evaluation team will use four well-proven strategies to ensure the reliability of the data. First, two experienced site visitors will conduct a pilot site visit. During this visit, the site visitors will assess the flow and pacing of the discussion guided by the questions in the site visit protocol to ensure that comprehensive information aligned with the study’s goals can feasibly be collected during a visit. As needed, the evaluation team will revise the protocol to facilitate the data collection effort. Second, all site visitors, most of whom already have extensive experience with this data collection method, will be thoroughly trained in the issues of importance to this particular study. This training will include techniques to probe for additional details to help interpret responses to interview questions and to assure all interview respondents of the privacy of their responses. Third, site visitors will be trained to document the data gathered on all key topics systematically, using a standardized template. These notes will generally follow the topics in the protocols and integrate the perspectives of multiple respondents to describe the site’s overall program implementation experiences accurately and comprehensively. Finally, to minimize subjectivity and ensure consistency across sites, both members of each site visit team will review all site visit notes and coordinate to resolve inconsistencies or missing information.

b. Survey

The evaluator will use well-established methods to maximize response rates and data reliability for the survey. The strategy for maximizing survey response begins with the survey development and carries through the entire survey process. The methods employed mitigate all types of individual nonresponse, from failure to locate the sample member to refusal to participate in the survey. An overall survey response rate of at least 80 percent is expected. In the evaluation of the AT&T Foundation’s Aspire Program, a web-only survey of 172 grantees (school districts, foundations, and nonprofit organizations) achieved a 97 percent response rate during a one-month field period. In an evaluation of Early Head Start, an 89 percent response rate was achieved from all 748 Early Head Start programs in existence at the time, using web surveys with telephone and mail follow-up over a five-month period. The National Study of Substance Abuse Treatment Services web survey, administered to about 17,000 substance abuse treatment facilities throughout the United States and its territories, achieved a 95 percent response rate. Unfunded partners may be somewhat less likely to respond to the survey than grantees. Because both types of respondents will be included in the JIAC/AM-JIAC survey, the response rate may be slightly lower than those achieved in these past efforts. Because the unit response rate is expected to exceed 80 percent and item response rates are not expected to fall below 70 percent, plans for a nonresponse bias analysis are not required.

Survey language and length. The questionnaire is designed to be easy to complete. The questions use clear and straightforward language and skips ensure that respondents answer only items that apply to them. The estimated average time required for completion is 23 minutes.

Locating sample members. An essential step in a successful survey effort is the ability to locate as much of the survey sample as possible. The evaluation team will initially identify survey respondents using grant proposals, project plans, performance reports, interviews with federal staff, and other materials. Shortly after the receipt of OMB clearance, the team will ask the ETA funding stream administrator in each cluster to provide additional and updated information on the partners in the cluster, including contact information for potential survey respondents. This individual will be instructed to contact other grantees as necessary to obtain information on the partners of those grantees, and to assimilate all information into a single partner contact information sheet. The sample will be selected and the survey fielded soon after collecting this information, minimizing the possibility of it becoming out of date. If the evaluation team encounters difficulty contacting a sampled respondent, it will follow up by holding discussions with the cluster manager and/or grantees, asking the cluster manager and/or grantees to contact cluster members, and/or using Internet and other searches. If contact information for a partner organization cannot be located or is known to be incorrect and corrected information cannot be found (for example, if the telephone number is no longer in service and/or the website has been taken down), the evaluation team will assume the partner is no longer operating and exclude it.

Initial contact with sample members. Although grantees agreed to participate in the evaluation as a condition of grant receipt, the evaluator will send all grantees and partners an email that provides details on the goals and components of the evaluation and their role in it. An attached letter on ETA letterhead will also introduce the evaluation and expectations for participation. All cluster managers, funding stream administrators, and partners selected for the survey will be informed of the survey and receive a fact sheet addressing likely questions. Establishing the authenticity of the survey effort with sample members from the start lays an important foundation in promoting a high response. The email to sampled individuals will include a link to the survey and will provide individualized log-in information. This log-in information will route respondents down the proper path for their role when they begin the survey. On the survey website, respondents will also be able to access a fact sheet on the evaluation and a letter of support issued by ETA.

Gaining and maintaining cooperation. One week after sending email invitations to participate in the survey, and at several other points in the 16-week field period, the evaluation team will send email reminders to encourage response. If email reminders are not sufficient, the evaluation team will follow up by telephone. Telephone follow-up with nonrespondents will begin in week eight of the survey period and continue throughout the remaining two months of the field period. In these calls, trained study team members who were involved with the study design and other data collection efforts will answer any questions the respondents have about the survey, encourage them to complete it online, or complete surveys with respondents over the telephone, entering their responses in the web instrument for them. If contact information is incorrect or the identified individual has left his or her position, the evaluator will attempt to locate another appropriate respondent from the organization through web searches and conversations with the FPO and/or grantees, federal grant administrators, cluster managers, and/or other partners. If it is determined that a partner organization is no longer in operation, this will be noted, but the organization will not be replaced in the sample.

Data reliability. The survey has been extensively reviewed by project staff and staff at DOL and thoroughly tested in a pretest involving eight individuals from two clusters. At the end of the survey period, the evaluator will prepare data files including item responses and any verbatim responses that will have to be coded for analysis. Unusual patterns of item nonresponse will be addressed, as described previously, and contradictory responses will be reconciled to the extent possible through triangulation.

4. Tests of Procedures or Methods

All instruments and protocols used in the conduct of the JIAC Grants Evaluation have been tested to evaluate the clarity of the questions asked, identify possible modifications to question wording or order that could improve the quality of the data, and estimate respondents’ burden.

Site visit data collection. To ensure that the site visit protocols provide effective field guides that will yield comprehensive and comparable data across the nine study sites, the protocols are based on those used for related evaluations. Senior team members, including the project director and the principal investigator, will conduct the first two site visits as pilot tests before launching the full round of site visits. These pilot visits will help ensure that the protocol appropriately assists site visitors in delving into the topics of interest and does not omit relevant topics of inquiry. Senior research staff will also assess the site visit agenda—including the data collection activities to conduct and the structure of those activities—to ensure that they can feasibly be conducted as part of the site visits and yield the desired information. Based on the results of the first two visits, the contractor will adjust the site visit protocols as necessary. Before the initial site visits, senior team members will review the protocols, site materials, evaluation goals, and data recording procedures. Following the first two site visits, the senior team members will participate in a debriefing meeting to refine the protocols and develop training materials for the remaining site visitors.

Survey. The survey instruments were thoroughly pretested using paper versions of the instruments that were distributed and returned via email, followed by a telephone debriefing conversation. Because all 30 clusters will be involved in the evaluation, the evaluation team could not pretest with clusters outside the evaluation. In addition, organizations not participating in either of the grants would not have had a basis for comprehending or responding to the survey questions. The survey instruments were therefore pretested in two clusters that agreed to participate and for which the evaluation team had reliable contact information and a strong sense of partner participation levels. Pretesting in two clusters provided a varied mix of cluster administrator, grantee, and partner respondents.

During the pretest, the evaluation team administered the survey in two clusters to a total of eight respondents. The evaluation team requested sample frame selection information for the two clusters from the cluster manager and ETA funding stream administrator. The responses provided, and conversations with these individuals, made it clear that this information also needed to be requested from the other funding stream administrators in order to obtain a complete list of participating partners: cluster managers and funding stream administrators were not familiar with all partners to all of the grants, particularly those with lower levels of involvement. Several changes were also made to the form used to collect this information, based on usability issues identified during the pretest. During the first round of pretesting, the evaluation team administered the survey to an individual who was both the cluster manager and the ETA funding stream administrator, an ETA funding stream administrator, a cluster manager, and two partners. Second-round participants were from three partner organizations, as the partner survey required the most revision. Pretest participants were chosen to ensure a mix of ETA and non-ETA respondents and of funded and nonfunded respondents. A 15-minute telephone debriefing followed survey completion in the pretest. During the debriefing, respondents answered questions about their experiences completing the survey, any challenges in understanding the questions or their intent, the availability of needed information, and how long the survey took to complete. The instrument was modified following the first phase of the pretest to address the issues identified. For the cluster managers and ETA administrators, these changes largely involved adopting the most commonly used and understood vocabulary and adding response options to some items to cover the range of possibilities more completely. Changes to the questions asked of partners were more extensive and involved deleting questions in several areas with which partners were found to be unfamiliar, rewording others, and adopting the most commonly used and understood vocabulary. The modified instrument was then used for the second phase of pretesting to determine whether the changes successfully addressed the identified issues, which they did.

5. Individuals Consulted on Statistical Methods

Consultations on statistical methods have been conducted to ensure the technical soundness of the study. ETA has contracted with Mathematica Policy Research and the W.E. Upjohn Institute for Employment Research to conduct the JIAC Grants Evaluation. The evaluation team also engaged three technical working group (TWG) members to consult on the design of the study, the interim report findings, and the final report findings. Table B.2 lists the technical staff members consulted in planning the evaluation of the JIAC grants.

Table B.2. Contractor Technical Staff

Affiliation and Name | Role on Project | Telephone Number

Mathematica Policy Research
Ms. Jeanne Bellotti | Project director | (609) 275-2243
Ms. Stephanie Boraas | Survey director; task leader | (202) 484-3292
Ms. Samia Amin | Deputy project director; task leader | (609) 275-2375
Ms. Michelle Derr | Senior researcher | (202) 484-4830
Ms. Patricia Nemeth | Senior survey researcher | (609) 275-2294

W.E. Upjohn Institute
Dr. Kevin Hollenbeck | Principal investigator | (269) 343-5541

The Aspen Institute
Ms. Maureen Conway | TWG member | (202) 736-1071

Public Policy Associates
Dr. Nancy Hewat | TWG member | (517) 485-4477

University of California, San Diego
Dr. Mary Walshok | TWG member | (858) 434-3412


