B_Research objectives and questions by data source


Evaluation of Supplemental Nutrition Assistance Program (SNAP) Employment and Training Pilots.


OMB: 0584-0604



ATTACHMENT B

STUDY DESCRIPTION: RESEARCH QUESTIONS, DATA SOURCES, AND KEY OUTCOMES



In the Agricultural Act of 2014, Congress called for an independent longitudinal evaluation of ten pilot projects aimed at fostering employment and reducing reliance on public assistance through the provision of employment and training opportunities. This evaluation, henceforth referred to as the SNAP E&T Pilots evaluation, will measure and compare impacts, client participation, implementation, and costs across the ten pilot projects. Attachment B provides an overview of the evaluation approach, including the research questions, data sources, and key outcomes for each of the study’s objectives.

The evaluation of each pilot includes four primary research components addressing the following research objectives:

an implementation analysis that will document the context and operations of each pilot as well as help audiences interpret and understand impacts within and across pilots,

a random-assignment impact evaluation that will identify what works, and what works for whom, with respect to employment/earnings, public assistance receipt, and other outcomes such as food security, health, and housing,

a participation analysis that will examine the characteristics and service paths of pilot participants and the control group and assess whether the presence of the pilots and their offer of services or requirements to participate affect whether people apply to SNAP (entry effects), and

a cost-benefit analysis that will estimate the return to each dollar invested.

To meet these study objectives (using the list of data collection instruments in Table A.2.a), FNS, through its contractor and subcontractors (called the “evaluation team” going forward), will (1) employ a rigorous evaluation design that includes random assignment (RA) and (2) use a variety of data collection techniques, as outlined in Attachment B, to examine the different study components (i.e., implementation, impact, participation, and cost-benefit).

Data collection began with the collection of the registration document (Attachments C.1 and C.2) from study participants at each of the 10 pilot programs. During site visits, the evaluation team conducted in-depth interviews with staff and focus groups with program participants. The evaluation team also observed operational activities and reviewed relevant documents. To minimize burden and cost, the evaluation relies on various types of administrative data to address the objectives related to the participation, impact, and cost-benefit analyses. In addition, the evaluation team conducted follow-up telephone surveys of a random sample of study participants at 12 and 36 months (Attachments O.1/O.2 and O.4/O.5, respectively). In the following section, we describe participant completion of the consent document (Attachments D.1–D.4) and registration document, the site visits, the participant follow-up surveys, and administrative data collection (Attachment T).

Data Collection Activities

a. Enrollment and Consent

All participants who were eligible for the pilot and consented to participate are included in the study. No additional screening was required. As participants enrolled, they were asked to consent to (1) participate in the SNAP E&T activities and (2) participate in evaluation activities, including follow-up surveys. On behalf of FNS, the contractor trained pilot site staff in administering the electronic study consent document, which was available in English and Spanish (Attachments D.1 and D.2), and in how to address questions that arose. (Because of the different Institutional Review Board (IRB) requirements and needs of each pilot site, the consent documents were tailored for each site, but the content, at a minimum, is that found in Attachments D.1 and D.2.) In some sites, participation in regular core services is mandatory for individuals who must meet a work requirement to remain eligible for SNAP. Therefore, a different consent document with a statement about that provision was provided to participants in those sites (Attachments D.3 and D.4).

Participants who consented received a hardcopy version of the consent form to keep. After they consented, some baseline information was collected from them before they were randomized into the treatment group (receiving the expanded services as part of the pilot) or control group (receiving the existing core services or no services).

b. Baseline information

Baseline information was collected from study participants at the time of enrollment by intake/case workers. Information about study participants is needed at the point of random assignment (RA) for both logistical and analytical purposes. Identifying information (name, date of birth, Social Security number) is necessary to conduct RA, track the research sample members throughout the study, and obtain their administrative data. Detailed contact information (name, address, telephone numbers, social media address) for the sample member and one or more of his or her friends or relatives is essential to locating study participants for follow-up interviews. Collecting data on baseline characteristics also allows the evaluation team to confirm that the treatment and control groups have similar characteristics, define key subgroups, improve the precision of impact estimates, and adjust for nonresponse bias. Participants were notified that the information collected cannot be linked back to any individual.

The Registration Document, in both English and Spanish, was used to collect these data (Attachments C.1 and C.2). It was based on registration instruments used for other evaluations of employment and training services, such as the Workforce Investment Act Gold-Standard Evaluation (WIA GSE), and was programmed into the E&T Pilot Information System (EPIS) so that program staff could administer the instrument using a computer or tablet device. EPIS was programmed with automatic checks to ensure that all data required for RA were entered, to flag out-of-range responses, and to alert the program intake person so that incorrect or missing information could be corrected. Screenshots of the registration document, as it appears in EPIS, can be found in Attachment C.3.
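
To illustrate the kind of intake checks described above, the sketch below shows a minimal, hypothetical validation routine in Python. The required fields, the age range, and the function name are illustrative assumptions, not the actual EPIS specification.

REQUIRED_FOR_RA = ["first_name", "last_name", "date_of_birth", "ssn", "zip_code"]  # assumed fields

def intake_problems(record: dict) -> list:
    """Return alert messages for the intake worker; an empty list means the record can proceed to RA."""
    problems = [f"{field} is required for random assignment"
                for field in REQUIRED_FOR_RA if not record.get(field)]
    age = record.get("age")
    if age is not None and not (16 <= age <= 99):  # assumed plausible range, not the EPIS rule
        problems.append("age is out of range; please correct")
    return problems

# Example: intake_problems({"first_name": "Ana", "age": 150}) returns alerts for the
# missing fields and the implausible age, so the record cannot be submitted for RA.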

Shortly after consent was obtained, participants received a welcome packet by mail that included a letter (Attachments E.1 and E.2) welcoming them to the study and reminding them of the future surveys, a study brochure (Attachments F.1 and F.2), and a small gift (a magnet with the project name, approximate value of $1). Including a refrigerator magnet in the welcome packet provides an ongoing, tangible reminder about the study. Midway between enrollment and the first follow-up survey, the contractor mailed a seasonal postcard (Attachments G.1 and G.2) reminding participants that they would be contacted in a few months for the first telephone interview.

Enrollment in the pilots ended in September 2018 and no new participants will be permitted to enter the pilot. The Registration Document is no longer in use.

c. Site visits

The evaluation team conducted three rounds of site visits to each pilot project; the first focused on collecting data about planning and early implementation (June-September 2016), the second on operations (May-July 2017), and the third on full implementation and closeout (August-November 2018). The evaluation team attempted to visit all SNAP offices and local providers in each of the 10 pilots during the three rounds of site visits. However, if a pilot project operated across many offices and involved many providers (more than 10), the evaluation team purposively selected 10 offices or providers for the visits, representing a mix of characteristics (e.g., services offered, geography, technical assistance required). To understand program evolution, the evaluation team visited the same offices and providers across the three rounds of visits. Each visit spanned three days and incorporated the data collection activities described below.

Pilot program staff and partner interviews. The cornerstone of each site visit was in-depth interviews with various types of staff. Respondents included staff at the Grantee (State SNAP) agency, local SNAP offices, E&T service providers, and relevant partner agencies. Our contractor conducted up to 300 interviews across the 10 pilot programs in each of the three rounds of site visits, for a total of approximately 900 interviews. Interviews across the three visits generally included the same key staff, but local office staff and provider staff varied between visits. Interviews conducted during the first site visit focused on activities that occurred during the planning period, including topics such as the vision or logic model for the project, planned project design, implementation plan, community context, and the planning process itself. Interviews conducted in the second round of site visits focused on the operation of the pilot. Interviews probed leadership and partner roles, staffing structures, recruitment and engagement strategies, specific services offered and received, deviations from plans, and respondents’ perceptions of challenges and successes, among other topics. The third round of site visits focused on capturing changes that had occurred since the prior interviews and identifying lessons learned over the course of the pilot. The master site visit protocol for all visits, including topics for discussion and questions by respondent, is included in Attachment H.

Each Grantee interview lasted no more than 1 hour. Similar questions were asked of all respondents so that no single person’s opinions or responses were treated as definitive and so that the team understood not only how service delivery was meant to work, but also how it actually worked.

Document review. In preparation for the site visits, the evaluation team reviewed and analyzed documents produced by staff at each pilot project, such as annual plans, outreach materials, tools for tracking participation and progress, internal management reports, and pilot marketing and participant communication materials that are provided by FNS staff or by pilot staff during the course of the technical assistance conversations. A list of documents that were reviewed from pilot projects can be found in Attachment AA.

d. Case studies and focus groups

During the second and third rounds of site visits, the evaluation team conducted some combination of client focus groups, employer focus groups, and case studies with SNAP E&T Pilot clients and providers. The decision about which types of additional research to conduct in these rounds was based on the needs of the site and which data would be most beneficial to the study.

Case studies. The evaluation team conducted case study interviews with 40 clients and 120 local SNAP office staff and providers (3 staff per client, as appropriate). In-depth interviews with clients lasted no more than 90 minutes, and interviews with providers (provider staff could be affiliated with the government or the private sector, depending on the grantee and pilot) lasted up to 60 minutes. These case studies also included structured observations of local SNAP offices and E&T provider operations. The interviews and observations will help the evaluation team understand participant pathways, eligibility and service referral determinations, service delivery systems, and the intensity of services provided under each pilot project. To the extent possible, researchers used the observation guides while observing eligibility and case management meetings between participants and staff, participant assessments and orientations, and provision of key service components. Trained staff then used the interview guides to interview providers and clients about this observed process. The interview guides for case studies of clients and providers, as well as the observation guides, are included in Attachments I.1, I.2, and I.3, respectively.

For the case study interviews, the evaluation team first selected states with programs about which it wanted to learn more and then targeted providers offering services of interest. Because the number of case studies is small overall, the evaluation team focused those interviews on pilots and areas that could be most useful for explaining the impact and implementation results. The evaluation team worked with area providers to identify clients (pilot participants) who were scheduled to be at a provider location for case management or training. Staff worked with the provider to reach out to clients and ask if they would be willing to participate in an interview prior to or following their appointment. The small number of client interviews was conducted in English, as the study staff trained to conduct these interviews are not bilingual. Participants interviewed for the case study received $50 in cash to cover the costs of participation.

Focus groups. The evaluation team conducted 20 client focus groups with 10 to 12 SNAP E&T Pilot participants (240 participants) and 20 employer focus groups with 10 to 12 participants (240 participants). Focus groups lasted no longer than 90 minutes.

Client focus groups. The focus groups with pilot participants will allow FNS to better understand participants’ decision-making processes as they relate to selecting some services and not others. The English and Spanish versions of the client moderator guides are contained in Attachments J.1 and J.2. For each focus group, the evaluation team recruited 20 to 25 SNAP E&T Pilot program participants with the expectation that 10 to 12 would attend. Staff experienced in recruiting respondents contacted client focus group participants by telephone to explain the study’s purpose, topics to be discussed, incentives, and logistics. The make-up of each client focus group varied by pilot project, as each focused on a slightly different target population. Some sites focused on able-bodied adults without dependents (ABAWDs), while others focused on a hard-to-serve population, such as the homeless. However, within the target population of the focus group, the evaluation team attempted to recruit a varied mix of clients, consistent with the demographic characteristics of the population (e.g., age, race, and gender). The English and Spanish focus group recruitment guides and recruitment criteria can be found in Attachments K.1 and K.2. Staff mailed a focus group confirmation letter (Attachments L.1 and L.2) to clients who agreed to participate to remind them of the upcoming focus group.

Employer focus groups. The employer focus groups provide valuable information about why employers become involved with these types of programs, what skills they value most, and how the programs could better match clients and employers. The employer focus group moderator guide is contained in Attachment J.3. Employers participating in the pilots as sites for work-based learning were targeted for the focus groups.1 The evaluation team determined which pilots were actively using employers for work-based learning and had large enough samples for recruiting and conducting a focus group in an area. Focus groups were not conducted in all pilots, as not all pilots focus on work-based learning training involving employers. To the extent there was variation in the type of employers involved in a pilot, the team attempted to recruit employers from various industries and of various sizes (small, medium, and large businesses). The contractor worked with Grantee and provider staff to identify employers for focus groups. The state staff reached out to the employers to gauge interest and make introductions. The contractor followed up via email or by telephone as needed. An example recruitment email is included in Attachment L.3, and a confirmation letter reiterating the information about the study and focus group was mailed or emailed, depending on the employer’s preference (Attachment L.4).

Focus Group recruitment, incentives, and consent. To recruit SNAP E&T pilot focus group participants, the evaluation team first selected states with programs about which it wanted to learn more. Using a convenience sample, the evaluation team used EPIS and MIS (management information system) data to identify participants with a variety of characteristics.

Focus groups for clients and employers were held at convenient times (including evening hours, if appropriate) and locations. The contractor determined which times and types of locations were appropriate for the interviews on a case by case basis, with guidance from the Grantee and providers in the area. In general, pilot areas within the state with larger E&T populations were selected, so there were large enough samples for recruitment. The contractor sought to collect participants’ and employers’ contact information from each pilot program and inquired about possible convenient locations to host the focus groups and case study interviews.

At the beginning of each focus group, research staff obtained verbal consent from all participants. After reading the consent section of the focus group guide, staff allowed those who did not wish to participate an opportunity to leave. At the end of each focus group, participants were asked to complete a Participant Information Survey (PIS). The purpose of the PIS is to capture basic characteristics of the SNAP E&T Pilot participants who ultimately attended the discussion. The English and Spanish versions of the client Participant Information Survey can be found in Attachments M.1 and M.2. The employer PIS can be found in Attachment M.3. The evaluation team offered each focus group participant a $50 MAX Discover® prepaid card to cover the costs of participation.


e. Participant follow-up surveys

Survey data from participants is being used to inform the impact, participation, and cost-benefit analyses. The evaluation team will collect information at baseline (see A3b) and during two follow-up telephone surveys, described below.

Follow-up surveys. Longitudinal follow-up surveys will be conducted with a randomly selected subsample of study participants at 12 months after RA (N=25,000), and again at 36 months after RA (N=18,240). The evaluation team anticipates a response rate between 65 and 80 percent for each follow-up survey. The surveys will collect data on service receipt and outcomes from both treatment and control group members. They will build on other surveys that have been administered successfully with similar low-income populations, such as the surveys used in WIA GSE or Rural Welfare-to-Work. Surveys will ask the same questions of all respondents. Later follow-up surveys will be consistent with earlier rounds, with only minor changes to the survey instrument where the respondent is asked to confirm rather than recollect information. The surveys will be kept to a maximum average length of 32 minutes.

All surveys were translated into Spanish by a certified bilingual translator using the Referred Forward Translation approach in which a translator having extensive experience in survey development translates the questionnaire, and then a second translator reviews that work and recommends changes in phrasing or wording, or dialectical variations. The two then meet to discuss recommendations and determine the preferred questionnaire wording. The surveys are administered via computer-assisted telephone interview (CATI) using trained telephone interviewers with field/in-person follow‑up.

Since most data collection instruments were drawn from previously administered surveys that have been tested and used successfully, the pretest focused mainly on testing the survey length and flow of questions. The pretest was conducted with nine current SNAP E&T participants from two states. Results of the pretest are provided in Attachment N. Upon approval of the proposed changes, the surveys were revised in response to the pretest results. Final versions of the English and Spanish 12-month follow-up surveys can be found in Attachments O.1 and O.2. The English and Spanish versions of the 36-month survey can be found in Attachments O.4 and O.5. Screenshots of the 12-month and 36-month surveys can be found in Attachments O.3 and O.6, respectively.

Conducting the participant survey. The evaluation team sends a survey advance letter (Attachments P.1 and P.2) to SNAP E&T Pilot participants before each follow-up survey to inform them that the survey field period is beginning. To encourage participation in the survey, sample members are offered a $30 MAX Discover® prepaid card for participating. At the second follow-up, the incentive increases to $40. Justification for the incentive is provided in Section A.9. After outbound calling begins, the evaluation team attempts to reach respondents with survey reminder letters (Attachments Q.1 and Q.2) and survey reminder postcards (Attachments R.1 and R.2) if multiple call attempts prove unsuccessful. A survey refusal letter (Attachments S.1 and S.2) is also sent to sample members who initially decline to complete the interview to emphasize the importance of the survey and ask them to reconsider participating. Trained staff conduct in-person field locating to reach participants who do not respond by phone.

f. Administrative data

To minimize duplication of data collection efforts and decrease staff burden, the evaluation team uses administrative data in the participation, impact, and cost-benefit analyses (see the list of administrative data elements collected in Attachment T). The types of data used are contained within wage, public assistance, and service receipt records. These three types of data are described below, followed by a description of the process for obtaining them.

Wage records. By law, employers subject to the unemployment insurance (UI) tax must report to the State UI agency the employment and earnings of each employee each quarter. The evaluation team will request data from each State once per year for up to five years after random assignment.

Public assistance records. All States have integrated systems for SNAP and TANF. Many State systems also include Medicaid, though policy changes since the Affordable Care Act have made this less likely. The evaluation team will use this system integration to its advantage by making combined requests for data on all programs linked in integrated systems. The evaluation team has requested a small set of variables for all pilot participants, including monthly SNAP participation status indicators, SNAP benefit amount, a TANF participation status indicator for the pilot participant, the benefit amount for the pilot participant’s TANF unit, and an indicator for whether the pilot participant is covered by Medicaid. The evaluation team also obtains information on income and sources of income for the SNAP unit. These data will be used in the impact analysis and will be obtained from Grantees quarterly from the start of random assignment for up to five years.

The evaluation team will also request Grantees’ State administrative data on SNAP participation or SNAP application rates, which will be used to support the planned entry effects analysis examining the impact of new work requirements on the levels of SNAP participation in the areas in which pilots operate. These data will be aggregated by geographical area, such as by county, in the state. They will be provided twice for each grantee: once before the interim report in September 2018 and again after the pilots end in January 2020.

Data on service receipt. Pilot project staff collect and provide data on services received through the pilot projects. These data are needed for monitoring site performance, describing the services received by treatment and control group members, documenting entry and exit dates for specific E&T activities, and providing data needed for the cost-benefit analysis.

Obtaining data from State agencies. In conjunction with our contractor, FNS first worked with the State agency director/manager (or designee) to identify the relevant agency or agencies, staff, and data type relevant to the pilot project. The evaluation team then developed a comprehensive memorandum of understanding (MOU) for each State specifying the data sources and variables to be shared for the evaluation, procedures to ensure data security, and plans for developing and transferring a public use data file at the end of the evaluation. The timing and mode of transmission for data from each source was determined in consultation between FNS’s contractor and pilot site. An MOU template can be found in Attachment Z.

To the extent that the States capture the necessary administrative data in their SNAP MIS, the evaluation team acquires it as part of collection of SNAP administrative data. For sites that already capture the required data at the desired level of detail in their E&T MIS, or can do so by adding items to their systems, the evaluation team has arranged for regular extracts from the existing MIS or data collection system.

The evaluation team created a secure file transfer protocol (FTP) site to which project sites transmit files on a predefined and agreed-upon schedule clearly stated in the MOU. All transmissions are checked by Mathematica programmers, and error reports are generated back to the site, alerting them to missing or out-of-range items and identifying cases for which no data have been recorded after a specified interval.
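
As an illustration of the automated file checks described above, the minimal Python sketch below flags missing required items, out-of-range values, and cases with no recorded activity after a specified interval. The column names, valid range, and 90-day window are illustrative assumptions rather than the evaluation team’s actual specifications.

import pandas as pd

REQUIRED_COLUMNS = ["participant_id", "service_code", "hours_attended", "last_activity_date"]
VALID_HOURS = (0, 60)        # assumed plausible range for weekly hours
STALE_AFTER_DAYS = 90        # assumed interval for "no data recorded"

def build_error_report(extract: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Return one row per problem found in a site's transmitted file."""
    issues = []
    # 1. Missing required items
    for col in REQUIRED_COLUMNS:
        for pid in extract.loc[extract[col].isna(), "participant_id"]:
            issues.append({"participant_id": pid, "issue": f"missing value: {col}"})
    # 2. Out-of-range values
    low, high = VALID_HOURS
    out_of_range = extract[(extract["hours_attended"] < low) | (extract["hours_attended"] > high)]
    for pid in out_of_range["participant_id"]:
        issues.append({"participant_id": pid, "issue": "hours_attended out of range"})
    # 3. Cases with no recorded activity after the specified interval
    days_inactive = (as_of - pd.to_datetime(extract["last_activity_date"])).dt.days
    for pid in extract.loc[days_inactive > STALE_AFTER_DAYS, "participant_id"]:
        issues.append({"participant_id": pid, "issue": "no data recorded within interval"})
    return pd.DataFrame(issues, columns=["participant_id", "issue"])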

g. Cost data

The evaluation team collects cost data from each pilot project to generate estimates of start-up and ongoing pilot project implementation costs and estimates of overall, per-component, and per-participant costs. In addition, data on the costs of providing services to treatment group participants and, to the extent possible, the costs of providing services to control group participants are also collected. Cost data are collected by experienced research personnel who fully understand the conceptual framework for the analysis, are adept at building rapport with site staff, and can describe complex cost issues clearly.

Pilot costs workbook. The evaluation team uses the pilot costs workbook (Attachment U) to collect data on pilot costs. The master workbook is designed to ensure systematic data collection across pilot projects, but it is customized to account for pilot project differences and changes to pilot implementation over time. The pilot projects incur similar types of costs (e.g., staff, facilities, services, and supplies and equipment), but the nature of these costs—particularly service costs—varies by pilot project, and some pilot projects incur costs that others do not based on their specific project features. When a tab or field from the master pilot costs workbook did not apply to a particular pilot project, it was removed from that project’s cost workbook. Workbooks were further customized to account for pilot project differences as necessary; for instance, the “E. Services” tab of the pilot costs workbook requests data only on those services provided by each specific pilot project. To further avoid duplicating effort and burdening sites, the evaluation team discussed with FNS and pilot projects the availability of existing data sources that might provide the information needed to complete the workbooks.

The pilot costs workbook was administered to each pilot on a quarterly basis throughout pilot project implementation. The first round of cost data collection occurred immediately after the pilot planning period (November 2015) and collected data on start-up costs. Cost data on ongoing pilot implementation costs was collected on a quarterly basis thereafter (February 2016, May 2016, August 2016, November 2016, February 2017, May 2017, August 2017, November 2017, February 2018, May 2018)—a schedule that is comparable to the grant reporting schedule and which prevented unduly long respondent recall. After the first round of data collection and to further reduce respondent burden, the evaluation team pre-filled those workbook fields that are not expected to change over time (e.g., staff names and facilities).

All pilots received the same master workbook, but not all elements of the workbook applied to every pilot. In some cases the grantee agency was able to report data on some, but not all, pilot costs, and pilot project service providers also needed to report on their pilot-related costs.

During the planning period, the pilots were asked to identify a pilot cost liaison, the person in each pilot project responsible for tracking and reporting costs. With the evaluation team’s guidance and support, this cost liaison completed the data collection workbooks and assisted with data collection from service providers as necessary. Cost liaisons submitted cost data via the secure FTP site used to collect administrative data, in case personally identifiable information (PII) was included.

For quality assurance, each member of the pilot evaluation team, especially the cost analyst, used their knowledge of each pilot and its staff and services to review the workbooks and ensure expected costs were captured. If cost data were reported that were not anticipated or seemed out of range for what was expected, the evaluation team used these issues as a basis for follow-up with the grantee. As data collection continued, new workbooks were checked against previously collected workbooks to identify anomalies, which were also used as a basis for follow-up with the grantee.
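
The sketch below illustrates, in hypothetical terms, the kind of quarter-over-quarter consistency check described above; the dictionary structure, tolerance, and figures are assumptions for illustration, not the evaluation team’s actual procedure.

def flag_cost_anomalies(current: dict, previous: dict, tolerance: float = 0.5) -> list:
    """Return follow-up items where a cost line is new, dropped, or changed by more
    than `tolerance` (expressed as a share of the prior quarter's value)."""
    flags = []
    for item, prev_cost in previous.items():
        if item not in current:
            flags.append(f"{item}: reported last quarter but missing this quarter")
        elif prev_cost > 0 and abs(current[item] - prev_cost) / prev_cost > tolerance:
            flags.append(f"{item}: changed from {prev_cost:,.0f} to {current[item]:,.0f}")
    for item in current:
        if item not in previous:
            flags.append(f"{item}: new cost line not seen in prior workbooks")
    return flags

# Example: a facilities cost that doubles between quarters is flagged for follow-up.
print(flag_cost_anomalies({"Staff": 120000, "Facilities": 40000},
                          {"Staff": 118000, "Facilities": 20000}))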

Staff time-use survey. Because labor will be a major component of the pilot interventions, service cost estimates (e.g., the cost of providing case management) must account for how pilot project staff (particularly frontline staff) spend their time. The evaluation team therefore administered a staff time-use survey (Attachment V.1) to frontline staff at each pilot project. The survey asked staff to record how they divided their time between pilot and other activities each day over the course of one week.

The time-use survey was administered three times during pilot project implementation, once per year for three years. Up to 26 frontline staff at each pilot project completed the survey during each round of time-use survey data collection. When a pilot project had fewer than 26 frontline staff, the survey was administered to all frontline staff during each of the three rounds of time-use survey data collection. When a pilot project had more than 26 frontline staff, a different sample of 26 frontline staff was selected to complete the time-use survey during each round of data collection. The evaluation team used simple random sampling to select these staff. Selected staff received an email with information about the survey, and reminder emails if they did not complete it by the designated due date. Examples of the Time-Use Survey initial and reminder emails can be found in Attachment W.1 and W.2 respectively.
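
The sampling rule described above can be summarized in a short sketch: take a census of frontline staff when a pilot has 26 or fewer, and otherwise draw a simple random sample of 26 for that round. The roster, seed, and function name are illustrative.

import random

def select_time_use_sample(frontline_roster: list, sample_size: int = 26, seed: int = 1) -> list:
    """Return the staff selected for one round of the time-use survey."""
    if len(frontline_roster) <= sample_size:
        return list(frontline_roster)                     # census: survey all frontline staff
    rng = random.Random(seed)                             # a recorded seed makes the draw reproducible
    return rng.sample(frontline_roster, sample_size)      # simple random sample without replacement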

Table 1 below presents an overview of the evaluation approach.

Table 1: Overview of evaluation approach

Objective 1. Document how each of the participating sites operates the pilot by describing project design, operations, and outcomes; by describing the exact nature of each treatment delivered; and by identifying specific lessons to improve policy and practice and promote replication of successful treatments in other locations.

Study component: Implementation

Data sources:
  • Site visits: (1) planning and early implementation, (2) operations, and (3) full implementation and close-out
  • In-depth interviews with grantee/state-level staff, SNAP eligibility staff, and E&T provider staff
  • Focus groups of participants
  • Observations of activities
  • Document review

Key outcomes:
  • Program features, processes, structure, and context
  • Implementation activities
  • Staff experiences and challenges
  • Program measurement and performance

Objective 2. Document short- and long-term causal impacts on participants’ activities, outputs, and outcomes, overall and as they vary with participants’ characteristics and program contexts.

Study component: Impact

Data sources:
  • State administrative records: earnings, UI, SSI, SNAP, TANF, WIC, etc.
  • Registration Document
  • Follow-up surveys at 12 months and 36 months

Key outcomes:
  • Primary outcomes: employment (job search behavior, quarterly employment status, job duration, number of jobs, occupation/industry, type of employment, advancement, etc.); earnings (wages, hours, fringe benefits, earnings from off-the-books jobs, and transportation and child care costs); receipt of public assistance (SNAP, TANF, WIC, SSI, UI, Medicaid, EITC, etc.)
  • Secondary outcomes: household food security, physical and mental health, self-esteem and self-efficacy, and housing status

Objective 3. Examine the participation patterns—in offered employment and training services and in SNAP—in each pilot treatment and determine whether participation varied within and across the pilots.

Study component: Participation

Data sources:
  • Administrative data on SNAP participation
  • Registration Document
  • Follow-up surveys at 12 months and 36 months

Key outcomes:
  • SNAP participation rate
  • Number of SNAP applications
  • Support services
  • Eligibility for and referral to activities
  • Start and end dates of service participation and completion status
  • Receipt of support services, certifications, and post-employment services
  • Perceptions of SNAP E&T

Objective 4. Provide a full social accounting, from the perspectives of governments, other program partners (such as educational institutions and employers), and participants, of the consequences of undertaking the policy alternatives implicit in each pilot project, including both needed inputs and their costs and resulting outputs and benefits.

Study component: Cost-benefit

Data sources:
  • Cost data
  • Time-use data
  • Site visits
  • MIS data
  • Administrative data
  • Baseline, 12-month, and 36-month surveys

Key outcomes:
  • Costs: total costs; per-component and per-participant costs
  • Benefits: increased earnings and tax payments; reduced public assistance receipt; value of output produced during the program; unmeasured benefits
  • Cost-benefit summary: net benefits; benefit-cost ratio

Plans for analysis

a. Implementation analysis

The implementation and process analysis will draw on data collected from the site visits, MIS files, and all other interactions with sites. Site visits occurred each year from June 2016 to November 2018, and each visit had a different focus. Table 2 displays the dates of each visit and the focus of its data collection.

Table 2: Site visit schedule and focus

June-September 2016: Collecting data on planning and early implementation

May-July 2017: Collecting data on operations

August-November 2018: Collecting data on full implementation and closeout


Following each site visit, the evaluation team uses a structured write-up guide to synthesize collected information by implementation dimension, highlight themes, provide examples and illustrative quotes, and identify discrepancies and areas of agreement among data sources. This information will be uploaded into a database developed for the study, structured around the study questions and protocols. Descriptive tables will also be used to keep track of components of each site and ensure consistency in knowledge across all staff and sites.

Prior to submitting site reports to FNS in late 2016, 2017, and 2018, the data are analyzed by creating tables that identify common themes and outliers across respondents and sites for certain topics or research questions (Yin 1994). These tables enable large volumes of data to be reduced to a manageable number of topics, themes, and categories of interest relevant to the study’s research questions (Coffey and Atkinson 1996). Theme tables will also be used to identify similarities and differences across the 10 pilots.

Using theme tables, site notes, and the standardized descriptive tables, the evaluation team will describe the environment in which the pilot participants (both treatment and control group members) operate, as well as the implementation process and outcomes for each pilot. The assessment, to be performed from January to June 2019, will identify how the pilot was planned; whether it was implemented as intended; what challenges the sites experienced in planning, implementing, and operating the pilots, and the associated solutions; and how the pilot components and services differ from those available to the control groups and how these differences might affect impacts. In addition to pilot program-level analysis, the evaluation team will conduct cross-site analysis and subgroup analysis within and between pilots.

b. Impact analysis

Create study database and analysis files. The evaluation team will prepare restricted and public use SAS data files with variable names keyed to the question numbers of each instrument. Data files and documentation will be provided in September 2019 with the interim reports and again in September 2021 with the final reports. The evaluation team will clean the data files by checking for consistency, missing values, outliers, and other problem values. Next, the evaluation team will create and add constructed variables and sampling weights. Complete documentation—including the file structure, codebook, variable definitions and formulation, descriptions of editing and imputation procedures, and SAS code—will accompany each set of data files, to facilitate full replication of all analyses presented in the evaluation report.

Conduct Impact Study Analyses. At the heart of the evaluation is the estimation of the impact of the pilot projects on participants’ outcomes. The primary outcomes in these analyses will be measures of earnings, employment, and receipt of SNAP, TANF, and Medicaid. Key secondary outcomes will include household food security, health status, and self-esteem. All impact analyses discussed below will be conducted in preparation of the interim evaluation report (conducted from January 2019 to May 2019) and the final evaluation report (conducted from January 2021 through May 2021).

As impacts on some outcomes may emerge or fade over time, the evaluation team will estimate impacts on outcomes measured regularly throughout the period between random assignment and each follow up survey. For employment and earnings, the evaluation team will estimate the impact for each quarter since random assignment, whereas for public assistance receipt, the evaluation team will estimate an impact for each month since random assignment. Impacts on secondary outcomes will be measured at each follow up.

Before estimating the statistical models for the impact analysis, the evaluation team will complete several contextual analyses from June 2018 to January 2019. First, the evaluation team will assess the equivalence of members of the treatment and control groups to verify that those assigned to the groups have similar average baseline characteristics using data from the Registration Document. Second, to provide guidance on how the findings might generalize to a broader policy setting, the evaluation team will compare the study sample to target populations within each site, as well as across sites. This analysis will be conducted using data on sample members from the Registration Document as well as geographic and local labor market information in the Area Resource File (ARF) and from the Bureau of Labor Statistics that will be linked to sample members by zip code or county. Third, the evaluation team will examine data on service receipt to understand differences in outcomes between treatment and control groups. In particular, the evaluation team will describe intensity, nature, and quality of services that participants receive in each site using information collected in the follow up surveys and from State E&T MIS systems.
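
As an illustration of the baseline equivalence check described above, the sketch below compares treatment and control means for a few baseline covariates, assuming Registration Document data in a pandas DataFrame with a 0/1 treatment indicator; the variable names are hypothetical and the actual statistical tests may differ.

import pandas as pd
from scipy import stats

def baseline_equivalence(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Compare treatment and control means for each baseline covariate."""
    rows = []
    for var in covariates:
        treat = df.loc[df["treatment"] == 1, var].dropna()
        control = df.loc[df["treatment"] == 0, var].dropna()
        _, p_value = stats.ttest_ind(treat, control, equal_var=False)
        rows.append({"covariate": var,
                     "treatment_mean": treat.mean(),
                     "control_mean": control.mean(),
                     "p_value": p_value})
    return pd.DataFrame(rows)

# e.g., baseline_equivalence(sample, ["age", "prior_quarter_earnings", "months_on_snap"])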

After these contextual analyses are completed around January 2019, the impact estimation approach will compare treatment group outcomes with a counterfactual estimate of what those outcomes would have been in the absence of the project. The method used to estimate this counterfactual will depend on the specific evaluation design developed for each project. See Section B.2.1 for a description of the specific econometric models to be estimated.
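
For orientation only, the sketch below shows a generic regression-adjusted treatment-control comparison for a single outcome. It is not the evaluation’s specification: the specific econometric models for each pilot are defined in Section B.2.1, and the variable names and covariates here are hypothetical.

import statsmodels.formula.api as smf

def estimate_impact(df, outcome: str):
    """OLS of one outcome on a treatment indicator plus baseline covariates;
    the coefficient on `treatment` is the regression-adjusted impact estimate."""
    formula = f"{outcome} ~ treatment + age + female + prior_quarter_earnings"
    model = smf.ols(formula, data=df).fit(cov_type="HC1")  # heteroskedasticity-robust SEs
    return model.params["treatment"], model.bse["treatment"]

# e.g., estimate_impact(analysis_file, "earnings_q4") returns the estimated impact
# on quarter-4 earnings and its standard error.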

A critical part of the analysis will be to assess what works and for whom. These analyses can be used to assess the extent to which intervention effects vary across policy-relevant subpopulations. Results from subgroup analyses can help inform decisions about how best to target specific interventions, and possibly to suggest ways to improve the design or implementation of the tested interventions. From January to May 2019, the evaluation team will estimate impacts among subgroups of participants, defined by the following baseline characteristics:

Family composition (e.g., whether single or married, whether a parent, presence of children in the household, and presence of more than one adult in the household)

Labor force attachment (e.g., recent employment experiences)

Baseline earnings (e.g., whether the person had zero or positive earnings before the pilot)

History of SNAP receipt and duration of current SNAP receipt (e.g., whether the person has participated before the current spell)

Demographic characteristics (e.g., age, gender, race/ethnicity, education)

Extent of barriers to employment (e.g., language, lack of transportation)

Income (e.g., less than 100 percent of poverty level)

Examining a large number of outcomes or subgroups increases the risk of finding statistically significant impacts that are due to chance rather than the true effect of the program. The evaluation team will minimize this multiple comparison concern by identifying key subgroups and a small set of primary outcomes within each outcome domain before beginning the analysis; carefully assessing whether statistically significant impact estimates for the primary analyses are isolated or part of a pattern within their outcome domains; and assessing the strength of impact patterns within the domains by evaluating whether significant findings hold up after statistical adjustments for multiple comparisons within and between domains.
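
One common form of the multiple-comparison adjustment mentioned above is shown below, using Benjamini-Hochberg false discovery rate control on a set of hypothetical p-values within a single outcome domain; the evaluation’s actual adjustment procedure may differ.

from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.020, 0.041, 0.180, 0.520]   # hypothetical p-values for one outcome domain
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p_raw, p_adj, significant in zip(p_values, p_adjusted, reject):
    print(f"raw p={p_raw:.3f}  adjusted p={p_adj:.3f}  significant after adjustment: {significant}")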

Finally, from January to May 2019, the evaluation team will use data from the implementation analysis to statistically test whether key measurable program features—such as types of intervention services, program organization and partnerships, the target populations, and the extent to which the interventions were implemented as planned—are associated with cross-site variations in impacts.

c. Participation Analysis

The participation analysis will describe the employment and training services received by SNAP participants for each tested intervention, the variation in service receipt over time within and across pilot sites, and the contrast in the services received by the treatment and control group members. It will also examine the extent to which the pilot E&T initiatives encourage or discourage entry into the SNAP program. These analyses will be conducted concurrently with the impact analyses, in preparation for the interim evaluation report (conducted from January through May 2019) and the final evaluation report (conducted from January to May 2021).

Understanding the E&T services received by the treatment and control group members in each pilot will provide important contextual information for the impact analysis. Beneficial program impacts on participants’ longer-term outcomes will be realized only if the treatment groups receive meaningful and high quality services distinguishable from those received by the control group. Furthermore, detailed information on service receipt can help to identify potential reasons why the impact findings vary across subgroups of participants and programs, which can help inform future program improvement and targeting.

For the service receipt analysis in the interim report, the evaluation team will use State E&T MIS data and follow-up survey data during the pilot periods from 2016 to 2018 and during the analysis period of January to May 2019 to examine the extent to which people targeted for the treatments participate in the services offered, whether characteristics distinguish participants from nonparticipants, and whether participation differs across sites, interventions, time, and subgroups of participants. The evaluation team will also describe the nature, amount, and types of services received by SNAP participants in each SNAP E&T activity and differences in services between the treatment and control groups. Analyses will be conducted by site and by groups of sites with similar program features, with an emphasis on identifying factors associated with cross-site variation in service receipt.

The evaluation team will also examine the extent to which the pilot E&T initiatives encourage or discourage entry into the SNAP program. In particular, are potential enrollment increases due to attractive program features offset by potential decreases due to program mandates and other disincentives? A rigorous analysis of these entry effects is of critical policy importance because these effects could swamp the estimated program effects on SNAP participants’ long-term outcomes. This could occur if sufficient numbers of potential applicants know about the interventions and respond strongly to them, as was the case for welfare reform.

The approach for estimating entry effects will be tailored to the site pilot project designs and, indeed, some designs may preclude the analysis of entry effects. For sites that randomly assign geographic areas such as counties to treatment and control statuses, for example, the evaluation team will compare county-level participation rates across treatment and control counties during the analysis period from January to May 2019. In some cases it may be necessary to use non-experimental methods to analyze entry effects, such as comparing SNAP participation rates in areas with and without the pilot, before and after the pilot, and between the SNAP E&T-eligible and ineligible populations.
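
A minimal sketch of the non-experimental comparison mentioned above is shown below: a simple difference-in-differences in area-level SNAP participation rates, before and after pilot launch, for pilot versus comparison areas. The figures are hypothetical, and the actual entry-effects models will be tailored to each pilot's design.

def diff_in_diff(pilot_pre, pilot_post, comparison_pre, comparison_post):
    """Each argument is an average SNAP participation rate for the relevant area-period cell."""
    change_in_pilot_areas = pilot_post - pilot_pre
    change_in_comparison_areas = comparison_post - comparison_pre
    return change_in_pilot_areas - change_in_comparison_areas  # estimated entry effect

# Example: a one-point decline in pilot areas against a flat trend elsewhere
print(round(diff_in_diff(0.62, 0.61, 0.60, 0.60), 3))  # -0.01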

d. Cost-benefit analysis

Future decisions about whether to replicate and expand the SNAP E&T pilots will hinge in part on whether their benefits are large enough to justify their costs. The evaluation team will conduct a cost-benefit analysis at the end of each follow up period that uses a systematic accounting framework (Ohls et al. 1996) to determine (a) overall, per-participant, and per-component costs of providing treatment services relative to those of providing control services; (b) whether the benefits of each pilot exceed its costs; and (c) the extent to which planning, start-up, and early implementation costs are offset by the stream of current and future benefits. All cost and cost-benefit analyses discussed below will be reported in the interim evaluation report (conducted from January to May 2019) and the final evaluation report (conducted from January to May 2021).

The evaluation team will use the ingredient, or resource cost, method (Levin and McEwan 2001) to compute pilot costs. This approach entails estimating pilot costs through itemizing and collecting data on the amounts and costs of the resources (or ingredients) necessary to provide services. Using cost data provided by pilots and estimates of the additional services received by each participant (estimated from the SNAP E&T MIS), the evaluation team will estimate the total pilot costs, per-component costs (for example, the cost of each type of service), and per-participant costs. For all three types of costs, distributions of costs will be compared within and across grantees.
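
The arithmetic behind the ingredient (resource cost) approach can be illustrated with the short sketch below: sum the cost of each resource used by a service component, then divide by the number of participants served. The cost categories and dollar figures are hypothetical.

component_ingredients = {
    "case management": {"staff": 300000, "facilities": 25000, "supplies": 5000},
    "work-based learning": {"staff": 150000, "participant wages": 200000, "supplies": 10000},
}
participants_served = {"case management": 1200, "work-based learning": 400}

per_component_cost = {c: sum(items.values()) for c, items in component_ingredients.items()}
total_cost = sum(per_component_cost.values())
per_participant_cost = {c: per_component_cost[c] / participants_served[c] for c in per_component_cost}

print(total_cost)            # 690000
print(per_participant_cost)  # {'case management': 275.0, 'work-based learning': 900.0}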

The evaluation team will measure benefits from the estimates of the impacts on earnings and receipt of public assistance—both of which are already in dollar amounts. Fringe benefits will be valued using estimates of the cost of fringe benefits as a percentage of earnings obtained from the U.S. Department of Labor National Compensation Survey. The administrative costs of the receipt of SNAP and other public assistance will be obtained from published sources.

The evaluation team will summarize costs and benefits of each pilot for each stakeholder in terms of the net benefits (the difference between the present value of benefits and costs), and the benefit-cost ratio (the ratio of the present value of benefits to the cost), often also referred to as the return on investment. Each measure will be estimated both including and omitting start-up costs. The evaluation team will assess the sensitivity of the estimates to key assumptions and variations in determinants of costs.
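
The summary measures described above can be expressed in a short sketch: discount each follow-up year's benefits to present value, then compute net benefits and the benefit-cost ratio with and without start-up costs. The discount rate and per-participant dollar figures below are hypothetical.

def present_value(annual_flows, rate=0.03):
    """Discount a list of annual dollar flows (year 1, year 2, ...) to present value."""
    return sum(flow / (1 + rate) ** year for year, flow in enumerate(annual_flows, start=1))

annual_benefits = [400, 900, 1100]   # per-participant benefits by follow-up year (hypothetical)
ongoing_cost = 1800                  # per-participant operating cost (hypothetical)
startup_cost = 300                   # per-participant share of start-up costs (hypothetical)

pv_benefits = present_value(annual_benefits)
for cost in (ongoing_cost, ongoing_cost + startup_cost):   # omitting and including start-up costs
    net_benefits = pv_benefits - cost
    benefit_cost_ratio = pv_benefits / cost
    print(f"cost={cost}: net benefits={net_benefits:.0f}, benefit-cost ratio={benefit_cost_ratio:.2f}")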

e. Special Analyses

The evaluation team will also conduct a series of up to ten special analyses to supplement findings from the interim evaluation report. These additional analyses on specific topics will enable stakeholders to learn more about the impact of program components and how impacts differ across policy-relevant subgroups.

All special analyses will be conducted between October 2019 and August 2021. The special analyses will use the evaluation data being collected to answer more detailed questions about impacts for specific program services, groups of participants, and types of pilots. Up to ten analyses are planned, with the research questions for two analyses to be determined later based on FNS’s evaluation needs. The eight pre-determined analyses include: 1) estimating differences in participant impacts, program characteristics, and participation in States with mandatory and voluntary E&T programs, in rural versus urban areas, among recently unemployed and long-term unemployed participants, and for ABAWDs versus other E&T participants; 2) understanding how key implementation events or changes may have affected pilot impacts on participant outcomes; 3) examining how participant engagement and participation varied with key implementation events or changes over time; 4) estimating the impact of supportive services on participants’ outcomes; 5) describing community-level factors (e.g., workforce characteristics, employer infrastructure, transportation) associated with successful program implementation; 6) estimating the effects of pilot service dosage on participant outcomes; 7) estimating the impacts of work-based learning on participant outcomes; and 8) re-estimating the impact analysis included in the interim evaluation report using UI earnings records and SNAP administrative data through December 2018 (due to lags in data availability, only data through March 2018 will be included in the interim evaluation analysis). Results of the special analyses will be disseminated in the form of memos and reports.

Reports to Congress. The study will disseminate findings in a series of reports (interim and final) and briefings that address all research questions of interest in each pilot as well as cross-cutting questions of broader concern across pilots. However, Congress has invested significant resources in this study and needs to be kept abreast of its progress and findings. Accordingly, in addition to the interim and final reports mentioned above, FNS will submit an Annual Report to Congress in December of each year of the base contract (6 annual reports), and, if the option to fund a 5-year follow-up is exercised, 2 additional annual reports through completion of the 5-year follow-up analyses and reporting. Each annual report will provide summaries of progress on the implementation status of pilots and activities of the evaluations during the previous year and planned activities for the next year. In addition to these summaries of progress, depending on availability, these reports will also summarize early results of the process, impact, participation/entry-effects, and benefit-cost evaluations of each pilot project. Table 3 shows, for each report, the anticipated scope and content beyond the reporting of pilot evaluation progress. In terms of findings, Reports #2 – 4, respectively, will describe findings from the three rounds of site visit interviews with program and provider staff (inclusive of focus groups, in-depth participant interviews, and employer focus groups) and the analyses of program service receipt and entry effects. On a staggered basis reflecting the different start-up dates for pilots’ operations, Reports #4 - #7 will include findings on short-term (12 months after random assignment) and longer-term (36 months after random assignment) impacts and benefit-costs. Reports #8 - #9, if exercised, will include preliminary findings on the 5-year follow-up.

Table 3: Annual congressional reports: scope and content

Report 1 (12/2015): Summaries of pilot characteristics—location, target populations, services, sample sizes; research designs; planning and technical assistance; and pilot launch dates

Report 2 (12/2016): Summaries of random assignment/evaluation processes trainings; summary of pilot start-up and fidelity to service and evaluation processes and procedures; early (first year) implementation experiences and program participation

Report 3 (12/2017): Summary of monitoring and TA activities; steady-state (2nd year) implementation experiences and program participation

Report 4 (12/2018): Close-out (final year) implementation experiences; program participation

Report 5 (12/2019): Short-term (12-month) impact findings and benefit-costs

Report 6 (12/2020): Findings from special analyses

Report 7 (12/2021): Longer-term (36-month) impact findings and benefit-costs

Report 8 (12/2022): 5-year impact findings and benefit-costs

Report 9 (12/2023): 5-year impact findings and benefit-costs


Approximately fourteen months after the date a participant is randomly assigned, the evaluation team will have administrative data on his or her primary outcomes of earnings, employment, and public assistance receipt for the 12 months following random assignment. (The timing of administrative data availability and survey respondent locating account for the lag.) The evaluation team will have data on these outcomes for the entire research samples (treatment(s) and control(s)) in the ten pilots. For a random sample of the pilot participants, the evaluation team will also have survey data that provide richer detail on these primary outcomes, as well as on service receipt and secondary outcomes, such as household food security, physical and mental health, well-being, and housing status. Thus, the evaluation team will have sufficient data to estimate impacts at 14 months following random assignment across a variety of outcome measures.

Studies of the impacts of employment and training programs typically find negative or zero impacts on employment outcomes in the short run—when program participants are receiving education, training, and related services (see, for example, Card et al. 2015). The literature also suggests that impacts tend to become more positive only 2 to 3 years after participants finish the program, obtain stable employment, and obtain better jobs than their counterparts who did not receive program services. When the impacts become positive, however, depends on the type of intervention—impacts typically become positive sooner for short interventions, such as job search assistance, and later for longer interventions, such as those offering education and training.

Mathematica and FNS set the first follow-up survey to occur at 12 months after random assignment to balance the goals of minimizing survey recall error on program-related experiences and being able to obtain a short-term look at impacts on employment-related outcomes. In designing the study, we recognized that examining impacts on employment at 12 months would understate the full, long-term earnings effects of the program, because many treatment group members will have only recently completed their services and some will still be receiving services (for example, those who will enroll in more intensive training programs).

Thus, when preparing the interim reports and making the findings available to Congress in the annual Congressional Reports, impact findings at 12 months will need to be interpreted carefully. Each of the pilot programs is offering a mix of services to participants. The 12-month mark may be less informative for pilots with a larger percentage of participants receiving education or training services, compared to pilots with a smaller percentage receiving them. Furthermore, there will likely be variation in service receipt and intensity within programs based on individuals’ service paths.

A critical part of the interim report will be to examine the services that both treatment and control group members receive through SNAP E&T and elsewhere. This analysis will use survey data on service receipt that will be collected consistently across the research groups, as well as pilot MIS data. It will allow the evaluation team to obtain information about the intensity of services received by the sample (for example, how long members spent in a training program) and whether the services were completed. Information on the service differential between the research groups, as well as on service duration, will allow the evaluation team to examine the extent to which short-term impacts on income and employment over the first 12 months are likely to reflect longer-term impacts.
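
The sketch below illustrates, under hypothetical file and column names (followup_survey.csv, research_group, received_training, weeks_in_training), how a service differential and a simple intensity measure could be tabulated from pooled survey data; it is not the evaluation team's actual analysis code.

    # Illustrative sketch only. Tabulates the treatment-control difference in
    # reported receipt of education or training (the "service differential")
    # and average weeks in training among recipients, as a rough intensity
    # measure. The file name and column names are hypothetical placeholders.
    import pandas as pd

    survey = pd.read_csv("followup_survey.csv")

    receipt_rates = survey.groupby("research_group")["received_training"].mean()
    differential = receipt_rates["treatment"] - receipt_rates["control"]

    weeks_among_recipients = (
        survey[survey["received_training"] == 1]
        .groupby("research_group")["weeks_in_training"]
        .mean()
    )

    print(f"Service differential: {differential:.1%}")
    print(weeks_among_recipients)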

Use of Incentives

Research has long shown that incentives can increase response rates to surveys (thus minimizing non-response bias) without compromising the quality of the data (Singer and Kulka 2002; Singer et al. 1999). Further, sufficient incentives can help obtain a high cooperation rate and minimize the need for field interviewers to locate sample members to complete the survey. Incentives for this information collection are planned for the SNAP E&T Pilots participant focus groups and participant surveys, all of which are voluntary for respondents. Incentives are not planned for data collection activities with grantee staff because they responded to and were awarded the grant and are therefore expected to participate fully in this data collection.

A random sample of the participants enrolled in the pilot programs (approximately half) was selected for the household participant surveys. The incentives are an essential component of the multiple approaches used to minimize non-response bias described in Section B.3 of this information collection request, and they are especially critical because of the longitudinal design, in which household respondents will be contacted for up to 36 months to complete two follow-up surveys. Participants may be asked to engage in multiple data collection events during specified windows of time, and during a period in their lives when they face competing demands from young children and other family and work obligations. These respondents are exerting unusual effort; therefore, the potential for response bias among subsets of participants must be addressed proactively to ensure high-quality data.

The survey incentives proposed for the SNAP E&T Pilots Evaluation are based on the characteristics of the study population and experience with conducting telephone surveys with similar low-income populations:

  • The USDA-sponsored study of the effect of the Supplemental Nutrition Assistance Program on Food Security (OMB Control Number 0584-0563, Discontinued September 19, 2011) offered a modest $2 pre-pay incentive and a $20 post-pay incentive upon completion of the telephone interview and achieved a response rate of 56 percent at baseline and 67 percent at a six-month follow-up.

  • Site-specific baseline survey response rates in the USDA-sponsored 2012 SEBTC study (OMB Control Number 0584-0559, Discontinued March 31, 2014) ranged from 39 percent to 79 percent across 14 sites using a $25 incentive. The average unweighted response rate was 67 percent; the rate was 53 percent in passive consent sites and 75 percent in active consent sites (Briefel et al. 2013). Although the target response rate was not achieved, increasing the incentive from $10 in the 2011 pilot year to $25 in the 2012 full demonstration year improved the response rate by 12 percent unweighted and 15 percent weighted. The increased incentive also helped address the respondent fatigue that was evident during the 2011 pilot year.

Both of these recent studies, conducted with populations similar to those served by the SNAP E&T Pilots, indicate that a $22 to $25 incentive alone may not be sufficient to reach the higher end of the target response rate for this study (80 percent). We therefore propose a $30 incentive for the 12-month survey and a $40 incentive for the 36-month survey. Increasing the incentive to $40 for the 36-month survey may help keep respondents engaged, reduce fatigue with the burden of the data collection activities required over time, and minimize response bias in the study.

Mercer et al. (2015) conducted a meta-analysis of the dose-response relationship between incentives and response rates and found that higher incentives were associated with higher response rates in household telephone surveys offering post-pay incentives. In an earlier meta-analysis, Singer et al. (1999) found that incentives in face-to-face and telephone surveys were effective at increasing response rates, with each additional dollar of incentive yielding approximately a one-third of a percentage point increase in the response rate, on average. Further, sufficient incentives can help obtain a high cooperation rate for both the baseline and follow-up surveys, so that less field interviewer effort will be needed at follow-up to locate sample members and complete the survey.
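
As a rough, back-of-the-envelope illustration only, the calculation below applies the Singer et al. (1999) average of roughly one-third of a percentage point per additional incentive dollar and assumes a linear effect; it is not a projection of this study's actual response rates.

    # Back-of-the-envelope illustration only. Applies the Singer et al. (1999)
    # average of roughly one-third of a percentage point of response rate per
    # additional incentive dollar, assuming a linear effect. The dollar amounts
    # mirror those discussed in the text; results are not study projections.
    PP_PER_DOLLAR = 1.0 / 3.0  # percentage points per additional dollar

    def expected_gain(old_amount, new_amount):
        """Approximate percentage-point gain in response rate."""
        return (new_amount - old_amount) * PP_PER_DOLLAR

    print(expected_gain(25, 30))  # $25 to $30: roughly 1.7 percentage points
    print(expected_gain(30, 40))  # $30 to $40: roughly 3.3 percentage points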

The discussion above summarizes evidence on the effectiveness of incentives in reducing non-response bias and the response rates associated with offering lower incentive amounts to highly similar target populations. In addition, offering incentives, and the amounts to be offered, is justified for several reasons that address key Office of Management and Budget (OMB) considerations (Office of Management and Budget 2006):

  • Improved data quality. Incentives can increase sample representativeness. Because incentives may be more salient to some sample members than to others, respondents who would otherwise not consider participating in the surveys may do so because of the incentive offer (Groves et al. 2000).

  • Improved coverage of specialized respondents. Some of the populations targeted by the pilot programs, including homeless individuals and ex-offenders, are considered hard to reach (Bonevski et al. 2014). In addition, households in some of the pilot areas are specialized respondents because they are limited in number and difficult to recruit, and their lack of participation would jeopardize the impact study. Incentives may encourage greater participation among these groups.

  • Reduced respondent burden. As described above, the incentive amounts planned for the SNAP E&T Pilots Evaluation are justified because they are commensurate with the costs of participation, which can include cellular telephone usage or travel to a location with telephone service, particularly for the homeless population served by some of the pilot programs.

  • Complex study design. The participant surveys collected for the impact study are longitudinal. Participants will be asked to complete a registration document and two surveys over a period of 36 months. Incentives in amounts similar to those planned for this evaluation have been shown to increase response rates, decrease refusals and noncontacts, and increase data quality compared to a no-incentive control group in a longitudinal study (Singer and Ye 2013).

  • Past experience. The studies described above suggest that incentives for surveys fielded to similar low-income study populations may be effective.

  • Equity. The incentive amounts will be offered equally to all potential survey participants; they will not be targeted to specific subgroups or used to convert refusals. Moreover, if incentives were offered only to the most disadvantaged individuals, such as the homeless or ex-offenders, the differing motivations to participate across projects would limit the ability to compare results across target populations and sites.

In summary, the planned incentives for the longitudinal household surveys are designed to promote cooperation and high data quality and to reduce participant burden and the costs participants incur in completing the surveys, which are similar in length to, and will be conducted with populations similar to, those in other OMB-approved information collections.

The $50 incentives for the focus groups and the $50 incentives for the case study interviews are also consistent with many of the key OMB considerations described above, as well as with other OMB-approved information collections. For example, $50 incentives are currently being offered to community members, including parents, participating in one-hour telephone interviews for the Evaluation of the Pilot Project for Canned, Frozen, or Dried Fruits and Vegetables in the Fresh Fruit and Vegetable Program for USDA/FNS (OMB Control Number 0584-0598, Expiration Date September 30, 2017). The study to assess the effect of the Supplemental Nutrition Assistance Program on Food Security (OMB Control Number 0584-0563, Discontinued September 19, 2011) offered a $30 incentive to SNAP recipients for completing a 90-minute in-depth interview.

Although it will vary, some focus group participants may need to travel long distances to the focus group facilities and to the provider offices where the case study interviews will take place. Participants may also incur child care costs for the time spent in the discussions or interviews and on traveling. The planned incentive amount is consistent with these costs of participation for some respondents.



REFERENCES

Bonevski, B., M. Randell, C. Paul, K. Chapman, L. Twyman, J. Bryant, I. Brozek, and C. Hughes. “Reaching the Hard-to-Reach: A Systematic Review of Strategies for Improving Health and Medical Research with Socially Disadvantaged Groups.” BMC Medical Research Methodology, vol. 14, no. 42, 2014.

Card, David, Jochen Kluve, and Andrea Weber. “What Works? A Meta-Analysis of Recent Active Labor Market Program Evaluations.” NBER Working Paper No. 21431. Cambridge, MA: National Bureau of Economic Research, 2015.

Coffey, A., B. Holbrook, and P. Atkinson. “Qualitative Data Analysis: Technologies and Representations.” Sociological Research Online, vol. 1, no. 1, 1996. Available at http://www.socresonline.org.uk/1/1/4.html. Accessed August 20, 2010.

Groves, R.M., E. Singer, and A. Corning. “Leverage-Saliency Theory of Survey Participation: Description and an Illustration.” Public Opinion Quarterly, vol. 64, 2000, pp. 299–308.

Levin, Henry M., and Patrick J. McEwan. Cost-Effectiveness Analysis: Methods and Applications. Second edition. Thousand Oaks, CA: Sage Publications, 2001.

Markesich, J., and M.D. Kovac. “The Effects of Differential Incentives on Completion Rates: A Telephone Survey Experiment with Low-Income Respondents.” Presented at the annual conference of the American Association for Public Opinion Research, Nashville, TN, May 16, 2003.

Mercer, A., A. Caporaso, D. Cantor, and R. Townsend. “How Much Gets You How Much? Monetary Incentives and Response Rates in Household Surveys.” Public Opinion Quarterly, vol. 79, no. 1, 2015, pp. 105–129.

Ohls, James C., Dexter Chu, and Michael Ponza. “Elderly Nutrition Program Evaluation Final Report, Volume III: Methodology and Appendixes.” Report submitted to the U.S. Department of Health and Human Services, Office of the Secretary, Administration on Aging, and Office of Assistant Secretary for Planning and Evaluation. Princeton, NJ: Mathematica Policy Research, July 1996.

Office of Management and Budget. Questions and Answers When Designing Surveys for Information Collections. Guidance on Agency Survey and Statistical Information Collections. Washington, DC: Office of Management and Budget, January 20, 2006. Available at http://www.whitehouse.gov/sites/default/files/omb/assets/omb/inforeg/pmc_survey_guidance_2006.pdf.

Singer, E., R.M. Groves, and A.D. Corning. “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation.” Public Opinion Quarterly, vol. 63, 1999, pp. 251–260.

Singer, E., and R.A. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues. Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs, edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro. Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press, 2002, pp. 105–128.

Singer, E., and C. Ye. “The Use and Effects of Incentives in Surveys.” The Annals of the American Academy of Political and Social Science, vol. 645, 2013, pp. 112–141.

Yin, R. Case Study Research: Design and Methods. 2nd edition. Beverly Hills, CA: Sage Publishing, 1994.


1 We did not target employers in general that might hire a pilot participant; only employers directly involved in pilot activities (for example, by offering on-the-job training, apprenticeships, internships, or subsidized employment) were contacted for the focus groups.

