Evaluation of SNAP Employment and Training Pilots

OMB Supporting Statement 0584-NEW

Part A: Justification

January 13, 2016





Submitted to:

Office of Management and Budget






Submitted by:

Project Officer: Wesley Dean

Food and Nutrition Service

United States Department of Agriculture

3101 Park Center Drive

Alexandria, VA 22302-1500





CONTENTS

PART A: JUSTIFICATION

A.1. Explanation of circumstances that make collection of data necessary

A.2. How the information will be used, by whom, and for what purpose

1. Use of the information

2. Study objectives

3. Data Collection Activities

A.3. Uses of improved information technology to reduce burden

A.4. Efforts to identify and avoid duplication

A.5. Efforts to minimize burden on small businesses or other entities

A.6. Consequences of less frequent data collection

A.7. Special circumstances requiring collection of information in a manner inconsistent with Section 1320.5(d)(2) of the Code of Federal Regulations

A.8. Federal Register comments and efforts to consult with persons outside the agency

1. Federal Register notice and comments

2. Consultations outside the agency

A.9. Payments to respondents

A.10. Assurance of privacy

1. Privacy

2. Institutional Review Board (IRB)

A.11. Questions of a sensitive nature

A.12. Estimates of respondent burden

A.13. Estimates of other annual costs to respondents

A.14. Estimates of annualized government costs

A.15. Changes in hour burden

A.16. Time schedule, publication, and analysis plans

1. Study schedule

2. Publication of study results

3. Plans for analysis

A.17. Display of expiration date for OMB approval

A.18. Exceptions to certification statement


TABLES

A.2.a SNAP E&T Pilots study data collection instruments for OMB approval

A.12 Estimates of respondent burden

A.16.3a Site visit schedule and focus



EXHIBITS

A.16.a Project schedule

A.16.b Annual congressional reports: scope and content



ATTACHMENTS

A Agricultural Act of 2014 (The 2014 Farm Bill)

B Research Questions, Data Sources, and Key Outcomes


C.1 Registration Document – English


C.2 Registration Document – Spanish


C.3 Registration Document – Screenshots

D.1 Study Consent Document – English


D.2 Study Consent Document – Spanish


D.3 Study Consent Document Mandatory – English


D.4 Study Consent Document Mandatory – Spanish


E.1 Welcome Packet Letter – English


E.2 Welcome Packet Letter – Spanish

F.1 Study Brochure – English


F.2 Study Brochure – Spanish

G.1 Seasonal Postcard – English


G.2 Seasonal Postcard – Spanish


H Master Site Visit Protocol


I.1 Interview Guide for Client Case Study


I.2 Interview Guide for providers


I.3 Observation Guide Case Study


J.1 Focus Group Moderator Guide for clients – English


J.2 Focus Group Moderator Guide for clients – Spanish


J.3 Focus Group Moderator Guide for employers


K.1 Client Focus Group Recruitment Guide – English


K.2 Client Focus Group Recruitment Guide – Spanish


L.1 Focus Group Confirmation Letter: Client – English


L.2 Focus Group Confirmation Letter: Client – Spanish


L.3 Focus Group Recruitment Email – Employer

L.4 Focus Group Confirmation Letter – Employer


M.1 Participant Information Survey: Client Focus Group – English


M.2 Participant Information Survey: Client Focus Group – Spanish


M.3 Participant Information Survey – Employer Focus Group


N Pretest Results Memorandum


O.1 SNAP E&T Pilots 12-Month Follow-Up Survey – English


O.2 SNAP E&T Pilots 12-Month Follow-Up Survey – Spanish


O.3 SNAP E&T Pilots 12-Month Follow-Up Survey – Screenshot


O.4 SNAP E&T Pilots 36-Month Follow-Up Survey – English


O.5 SNAP E&T Pilots 36-Month Follow-Up Survey – Spanish


O.6 SNAP E&T Pilots 36-Month Follow-Up Survey – Screenshot


P.1 Survey Advance Letter – English


P.2 Survey Advance Letter – Spanish






Q.1 Survey Reminder Letter – English


Q.2 Survey Reminder Letter – Spanish


R.1 Survey Reminder Postcard – English


R.2 Survey Reminder Postcard – Spanish


S.1 Survey Refusal Letter – English


S.2 Survey Refusal Letter – Spanish


T Administrative Data Elements


U Pilot Costs Workbook


V.1 Staff Time-Use Survey


V.2 Staff Time-Use Survey Screenshots


W.1 Time-Use Survey Initial Email


W.2 Time-Use Survey Reminder Email


X Sample respondent burden table


Y NASS Reviewer Comments and Responses to Comments


Z SNAP E&T Pilots Memorandum of Understanding


AA Document List for Document Review Process


BB Confidentiality Pledge

CC Federal Register Comments





A.1. Explanation of circumstances that make collection of data necessary

Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection. Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.

In the Agricultural Act of 2014 (Public Law 113-79, Section 4022), Congress added a new section authorizing the implementation and evaluation of pilot projects to “reduce dependency and increase work requirements and work effort under the Supplemental Nutrition Assistance Program.” Specifically, pilot programs or projects (also referred to as Grantees) are tasked with reducing dependence on the Supplemental Nutrition Assistance Program (SNAP) and other public assistance programs by increasing the number of SNAP participants who obtain unsubsidized employment and increasing the income of employed participants.

This section mandates an independent longitudinal evaluation of the pilot programs “to measure the impact of employment and training programs and services provided by each State agency under the pilot projects on the ability of adults in each pilot project target population to find and retain employment that leads to increased household income and reduced reliance on public assistance, as well as other measures of household well-being, compared to what would have occurred in the absence of the pilot project.” The data being collected under this submission are necessary to meet the congressionally mandated requirement for an independent evaluation of the demonstration projects being conducted by the U.S. Department of Agriculture (USDA), Food and Nutrition Service (FNS) under this authorizing legislation. A copy of the statute is included in Attachment A.

FNS received nearly 50 applications in response to a Request for Applications (RFA) released in August 2014 and screened them to identify the top third of the State proposals. FNS’ contractor then reviewed these applications to support FNS’ selection of the final 10 pilots. FNS sought 10 pilots that were diverse in geographic location, in whether the existing employment and training program is mandatory or voluntary, in the services offered, and in the groups of SNAP participants targeted. FNS also sought pilots that demonstrated plans for strong implementation of innovative SNAP employment and training programs that would introduce distinct and meaningful differences in the services received by treatment and control group members, coupled with faithfully implemented research designs with adequate sample sizes.

FNS’ contractor implemented a multi-step approach to narrow the pool to the 10 pilots with the strongest potential program and evaluation designs: (1) a multi-person team assessed each application and completed a review template used across all applications; (2) the team drafted a memorandum summarizing the strengths and weaknesses of each applicant’s written proposal and of the applicant’s written responses to questions that FNS submitted based on its initial review; and (3) the team conducted up to several additional rounds of questions to applicants based on detailed reviews of the applications. Finally, FNS’ contractor submitted a Technical Review memorandum ranking the top third of applicants as “high,” “medium,” or “low.” Using this as input, FNS awarded grants to pilots in California, Delaware, Georgia, Illinois, Kansas, Kentucky, Mississippi, Virginia, Vermont, and Washington State.



A.2. How the information will be used, by whom, and for what purpose

Indicate how, by whom, how frequently, and for what purpose the information is to be used. Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.

1. Use of the information

The study is a new information collection. FNS/USDA will use the information gathered in the data collection activities discussed here to describe the pilot projects and to determine if the employment and training programs and services provided by each pilot program led to unsubsidized work, increased earnings, and reduced reliance on public assistance programs, including SNAP.

The data collection described in this document is essential for meeting the congressional mandate for an independent longitudinal evaluation of the pilot projects to foster work and self-sufficiency. This information collection will result in six annual reports to Congress from 2015 to 2020, as well as interim and final evaluation reports for each of the 10 pilots, which FNS’ contractor will submit to FNS in April 2019 and April 2021, respectively. There is currently no other effort that can address the research objectives of the proposed study.

2. Study objectives

In the Agricultural Act of 2014, Congress called for the independent longitudinal evaluation of ten pilot projects aimed at fostering employment and reducing reliance on public assistance through the provision of employment and training opportunities. This evaluation, henceforth denoted as SNAP E&T Pilots, will measure and compare impacts, client participation, implementation, and costs across ten pilot projects. Attachment B provides an overview of the evaluation approach, including the research questions, data sources, and key outcomes for each of the study’s objectives.

The evaluation of each pilot includes four primary research components addressing the following research objectives:

  1. an implementation analysis that will document the context and operations of each pilot as well as help us interpret and understand impacts within and across pilots,

  2. a random-assignment impact evaluation that will identify what works, and what works for whom, with respect to employment/earnings, public assistance receipt, and other outcomes such as food security, health, and housing,

  3. a participation analysis that will examine the characteristics and service paths of pilot participants and the control group and assess whether the presence of the pilots and their offer of services or requirements to participate affect whether people apply to SNAP (entry effects), and

  4. a cost-benefit analysis that will estimate the return to each dollar invested.

In order to meet these study objectives (using the list of data collection instruments in Table A.2.a), FNS and its contractor will (1) employ a rigorous evaluation design that includes random assignment (RA) and (2) use a variety of data collection techniques, as outlined in Attachment B, to examine the different study components (i.e., implementation, impact, participation, and cost-benefits).

Data collection will begin with the collection of the registration document (Attachments C.1 and C.2) from study participants at each of the 10 pilot programs. During site visits, we will conduct in-depth interviews with staff and focus groups with program participants. We will also observe operational activities and review relevant documents. To minimize burden and cost, we will rely on various types of administrative data to address the objectives related to the participation, impact, and cost-benefit analyses. In addition, we will conduct follow-up telephone surveys of a random sample of study participants at 12 and 36 months (Attachments O.1/O.2 and O.4/O.5, respectively). In the following section, we describe participant completion of the consent document (Attachments D.1–D.4), the registration document, the site visits, the participant follow-up surveys, and administrative data collection (Attachment T).

Table A.2.a. SNAP E&T Pilots study data collection instruments for OMB approval

Attachment: Description

Instruments and guides

C.1 & C.2: Registration Document
H: Master Site Visit Protocol
I.1: Interview Guide for Client Case Study
I.2: Interview Guide for Providers
I.3: Observation Guide Case Study
J.1 & J.2: Client Focus Group Moderator Guide
M.1 & M.2: Client Focus Group Participant Information Survey
J.3: Employer Focus Group Moderator Guide
M.3: Employer Participant Information Survey
O.1 & O.2: 12-month Follow-Up Survey
O.4 & O.5: 36-month Follow-Up Survey
U: Pilot Cost Data Collection Workbook
V.1: Staff Time-Use Survey

Other study materials

D.1–D.4: Study Consent Document
E.1 & E.2: Welcome Packet Letter
F.1 & F.2: Study Brochure
G.1 & G.2: Seasonal Postcard
K.1 & K.2: Focus Group Recruitment Guide – Client
L.1, L.2, L.4: Focus Group Confirmation Letter
L.3: Focus Group Recruitment Email – Employer
P.1 & P.2: Survey Advance Letter
Q.1 & Q.2: Survey Reminder Letter
R.1 & R.2: Survey Reminder Postcard
S.1 & S.2: Survey Refusal Letter
W.1: Staff Time-Use Survey Initial Email
W.2: Staff Time-Use Survey Reminder Email


3. Data Collection Activities

a. Enrollment and Consent

All participants who are eligible for the pilot and consent to participate will be included in the study; no additional screening is required. As participants enroll, they will be asked to consent to (1) participate in the SNAP E&T activities and (2) participate in evaluation activities, including follow-up surveys. On behalf of FNS, the contractor will train pilot site staff in administering the electronic study consent document, which will be available in English and Spanish (Attachments D.1 and D.2), and in addressing questions that may arise. (Because of the different Institutional Review Board (IRB) requirements and needs of each pilot site, the consent documents will need to be tailored for each site, but the content will include, at a minimum, that found in Attachments D.1 and D.2.) For some sites, participation in regular core services is mandatory for individuals who must meet a work requirement to remain eligible for SNAP; a different consent document with a statement about that provision will be provided to participants in those sites (Attachments D.3 and D.4).

Participants who consent will receive a hardcopy version of the consent form to keep. After they consent, some baseline information will be collected from them before they are randomly assigned to the treatment group (receiving the expanded services offered under the pilot) or the control group (receiving the existing core services or no services).

b. Baseline information

Baseline information will be collected from study participants at the time of enrollment by intake/case workers. Information about study participants is needed at the point of random assignment (RA) for both logistical and analytical purposes. Identifying information (name, date of birth, Social Security number) is necessary to conduct RA, track the research sample members throughout the study, and obtain their administrative data. Detailed contact information (name, address, telephone numbers, social media address) for the sample member and for one or more of his or her friends or relatives is essential to locating study participants for follow-up interviews. Collecting data on baseline characteristics also allows us to confirm that the treatment and control groups have similar characteristics, define key subgroups, improve the precision of impact estimates, and adjust for nonresponse bias. Participants will be notified that the information they provide will be kept private and that published results will not be linked back to any individual.

The Registration Document, in both English and Spanish, will be used to collect these data (Attachments C.1 and C.2). It is based on registration instruments used for other evaluations of employment and training services, such as the Workforce Investment Act Gold-Standard Evaluation (WIA GSE), and will be programmed into the E&T Pilot Information System (EPIS) so that program staff can administer the instrument using a computer or tablet device. EPIS will include automatic checks that ensure all data required for RA have been entered, flag out-of-range responses, and alert program intake staff so that incorrect or missing information can be corrected. Screenshots of the registration document, as it will appear in EPIS, can be found in Attachment C.3.
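To illustrate the kinds of automated checks described above, the brief sketch below shows required-field and range validation in Python. The field names, required fields, and valid ranges are illustrative assumptions; EPIS’s actual internal implementation is not specified in this document.

# Minimal sketch of EPIS-style registration checks, assuming hypothetical
# field names and ranges; not the actual EPIS specification.

REQUIRED_FOR_RA = ["first_name", "last_name", "date_of_birth", "ssn"]
VALID_RANGES = {"age": (16, 99), "household_size": (1, 20)}

def check_registration(record):
    """Return a list of problems for intake staff to correct before RA."""
    problems = []
    for field in REQUIRED_FOR_RA:
        if not record.get(field):
            problems.append(f"Missing required field: {field}")
    for field, (low, high) in VALID_RANGES.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            problems.append(f"Out-of-range value for {field}: {value}")
    return problems

# Example: these problems would be surfaced on screen for correction.
for issue in check_registration({"first_name": "Ana", "age": 150}):
    print(issue)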

Shortly after consent is obtained, participants will receive a welcome packet by mail that includes a letter (Attachments E.1 and E.2) welcoming them to the study and reminding them of the future surveys, a study brochure (Attachments F.1. and F.2) and a small gift (a magnet, approximate value of $1), with the project name. Including a refrigerator magnet in the welcome packet will provide an ongoing tangible reminder about the study. Midway between enrollment and the first follow-up survey, we will mail a seasonal postcard (Attachments G.1 and G.2) reminding participants they will be contacted in a few months for the first telephone interview.

c. Site visits

We will conduct three rounds of site visits to each pilot project; the first will focus on collecting data about planning and early implementation (April-May 2016), the second on operations (April-May 2017), and the third on full implementation and closeout (April-May 2018). We will attempt to visit all SNAP offices and local providers in each of the 10 pilots during the three rounds of site visits. However, if a pilot project operates across many offices and involves many providers (more than 10), we will purposively select for the visit up to 10 offices or providers representing a mix of characteristics (e.g., services offered, geography, and technical assistance required). To understand program evolution, we will visit the same offices and providers across the three rounds of visits. Each visit will span three days and incorporate the data collection activities described below.

Pilot program staff and partner interviews. The cornerstone of each site visit will be in-depth interviews with various types of staff. Respondents will include staff at the State SNAP agency, local SNAP offices, E&T service providers, and relevant partner agencies. Our contractor will conduct up to 300 interviews across the 10 pilot programs in each of the three rounds of site visits, for a total of approximately 900 interviews. Interviews across the three visits will generally include the same key staff, but local office staff and provider staff may vary between visits. Interviews conducted during the first site visit will focus on activities that occurred during the planning period, including topics such as the vision or logic model for the project, planned project design, implementation plan, community context, and the planning process itself. Interviews conducted in the second round of site visits will focus on the operation of the pilot. Interviews will probe leadership and partner roles, staffing structures, recruitment and engagement strategies, specific services offered and received, deviations from plans, and respondents’ perceptions of challenges and successes, among other topics. The third round of site visits will focus on capturing changes that have occurred since the prior interviews and identifying lessons learned over the course of the pilot. The master site visit protocol for all visits, including discussion topics and questions by respondent type, is included in Attachment H.

Each grantee staff interview will last no more than 1 hour. Similar questions will be asked of all respondents so that no single person’s opinions or responses are treated as definitive, and to ensure that we understand not only how service delivery is meant to work, but also how it actually works.

Document review. In preparation for the site visits, we will review and analyze documents produced by staff at each pilot project, such as annual plans, outreach materials, tools for tracking participation and progress, internal management reports, and pilot marketing and participant communication materials that are provided by FNS staff or by pilot staff during the course of the technical assistance conversations. A list of documents that we expect to review from pilot projects can be found in Attachment AA.

d. Case studies and focus groups

During the second and third rounds of site visits, we will conduct some combination of client focus groups, employer focus groups, and case studies with SNAP E&T Pilot clients and providers. The decision about which types of additional research will be conducted in these rounds will be based on the needs of the site and what data would be most beneficial to the study.

Case studies. We plan to conduct case study interviews with no more than 40 clients and 120 local SNAP office staff and providers (3 staff per client, as appropriate). In-depth interviews with clients will last no more than 90 minutes, and interviews with providers (provider staff could be affiliated with the government or the private sector, depending on the grantee and pilot) will last up to 60 minutes. These case studies also will include structured observations of local SNAP offices and E&T provider operations. The interviews and observations will help us understand participant pathways, eligibility and service referral determinations, service delivery systems, and the intensity of services provided under each pilot project. To the extent possible, researchers will use the observation guides while observing eligibility and case management meetings between participants and staff, participant assessments and orientations, and provision of key service components. Trained staff will then use the interview guides to interview providers and clients about this observed process. The interview guides for case studies of clients and providers and the observation guide are included in Attachments I.1, I.2, and I.3, respectively.

For the case study interviews, we will first select states with programs about which we want to learn more and then target providers that are providing services of interest. Because the number of case studies is small overall, the evaluation team will focus these interviews on pilots and areas that are most likely to help explain the impact and implementation results. We will work with area providers to identify clients (pilot participants) who are scheduled to be at a provider location for case management or training. Staff will work with the provider to reach out to clients and ask if they would be willing to participate in an interview before or after their appointment. The small number of client interviews will be conducted in English, as the study staff trained to conduct these interviews are not bilingual. Participants interviewed for the case study will receive $50 in cash to cover the costs of participation.

Focus groups. We will conduct no more than 20 client focus groups with 10 to 12 SNAP E&T Pilot participants each (240 participants) and 20 employer focus groups with 10 to 12 participants each (240 participants). Focus groups will last no longer than 90 minutes.

Client focus groups. The focus groups with pilot participants will allow FNS to better understand participants’ decision-making processes as they relate to selecting some services and not others. The English and Spanish versions of the client moderator guides are contained in Attachments J.1 and J.2. For each focus group, staff will recruit 20 to 25 SNAP E&T Pilot program participants with the expectation that 10 to 12 will attend. Staff experienced in recruiting respondents will contact client focus group participants by telephone to explain the study’s purpose, topics to be discussed, incentives, and logistics. The make-up of each client focus group will vary by pilot project, because each pilot focuses on a slightly different target population. Some sites will focus on able-bodied adults without dependents (ABAWDs), while others may focus on a hard-to-serve population, such as the homeless. When we have a better sense of the characteristics of the pilot population (after recruitment), we will determine which populations to target and in which states. However, within the target population of the focus group, we will attempt to recruit a varied mix of clients, consistent with the demographic characteristics of the population (e.g., age, race, and gender). The English and Spanish focus group recruitment guides and recruitment criteria can be found in Attachments K.1 and K.2. Staff will mail a focus group confirmation letter (Attachments L.1 and L.2) to clients who agree to participate, reminding them of the upcoming focus group.

Employer focus groups. The employer focus groups will provide valuable information about why employers become involved with these types of programs, what skills they value most, and how the programs could better match clients and employers. The employer focus group moderator guide is contained in Attachment J.3. The employers participating in the pilot as sites for work-based learning will be targeted for the focus groups. The team will need to determine which pilots are actively using employers for work-based learning and have samples large enough for recruiting and conducting a focus group in an area. Focus groups may not be conducted in all pilots, as not all pilots focus on work-based learning training involving employers. To the extent there is variation in the types of employers involved in a pilot, the team will attempt to recruit employers from various industries and of various sizes (small, medium, and large businesses). The contractor will work with Grantee and provider staff to identify employers for the focus groups. State staff will reach out to the employers to gauge interest and make introductions. The contractor will follow up via email or telephone as needed. An example recruitment email is included in Attachment L.3, and a confirmation letter reiterating the information about the study and focus group will be mailed or emailed, depending on the employer’s preference (Attachment L.4).

Focus group recruitment, incentives, and consent. To recruit SNAP E&T Pilot focus group participants, we will first select states with programs about which we want to learn more. Using a convenience sample, we will use EPIS and MIS data to identify participants with a variety of characteristics.

Focus groups for clients and employers will be held at convenient times (including evening hours, if appropriate) and locations. The contractor will determine which times and types of locations are appropriate on a case-by-case basis, with guidance from the Grantee and providers in the area. In general, pilot areas within the state with larger E&T populations will be selected, so that samples are large enough for recruitment. The contractor will seek to collect participants’ and employers’ contact information from each pilot program and will inquire about convenient locations to host the focus groups and case study interviews.

At the beginning of each focus group, staff will seek verbal consent from all participants. After reading the consent section of the focus group guide, staff will give those who do not wish to participate an opportunity to leave. At the end of each focus group, participants will be asked to complete a Participant Information Survey (PIS). The purpose of the PIS is to capture basic characteristics of the SNAP E&T Pilot focus group participants who ultimately attended the discussion. The English and Spanish versions of the client Participant Information Survey can be found in Attachments M.1 and M.2. The employer PIS can be found in Attachment M.3. We will offer each focus group participant a $50 MAX Discover® prepaid card to cover the costs of participation.


e. Participant follow-up surveys

Survey data from participants will be used to inform the impact, participation, and cost-benefit analyses. We will collect information at baseline (see A3b) and during two follow-up telephone surveys, described below.

Follow-up surveys. Longitudinal follow-up surveys will be conducted with a randomly selected subsample of study participants 12 months after RA (N=25,000) and again 36 months after RA (N=18,240). We anticipate a response rate between 65 and 80 percent for each follow-up survey. The surveys will collect data on service receipt and outcomes from both treatment and control group members. They will build on surveys that have been administered successfully to similar low-income populations, such as those used in the WIA GSE and Rural Welfare-to-Work evaluations. We will ask the same questions of all respondents. Later follow-up surveys will be consistent with earlier rounds, with only minor changes to the survey instrument where the respondent is asked to confirm rather than recollect information. The surveys will be kept to a maximum average length of 32 minutes.

All surveys were translated into Spanish by a certified bilingual translator using the Referred Forward Translation approach, in which a translator with extensive experience in survey development translates the questionnaire and a second translator reviews that work and recommends changes in phrasing, wording, or dialectical variations. The two then meet to discuss the recommendations and determine the preferred questionnaire wording. The surveys will be administered via computer-assisted telephone interview (CATI) by trained telephone interviewers, with field/in-person follow-up.

Because most data collection instruments were drawn from previously administered surveys that have been tested and used successfully, the pretest focused mainly on survey length and the flow of questions. The pretest was conducted with nine current SNAP E&T participants from two states. Results of the pretest are provided in Attachment N. Upon approval of the proposed changes, the surveys were revised in response to the pretest results. Final versions of the English and Spanish 12-month follow-up surveys can be found in Attachments O.1 and O.2. The English and Spanish versions of the 36-month survey can be found in Attachments O.4 and O.5. Screenshots of the 12-month and 36-month surveys can be found in Attachments O.3 and O.6, respectively.

Conducting the participant survey. We will send a survey advance letter (Attachments P.1 and P.2) to SNAP E&T Pilot participants before each follow-up survey to inform them that the survey field period is beginning. To encourage participation, we will offer sample members a $30 MAX Discover® prepaid card for completing the 12-month survey; at the second follow-up, we will increase the incentive to $40. Justification for the incentive is provided in Section A.9. After we begin outbound calling, we will attempt to reach respondents with survey reminder letters (Attachments Q.1 and Q.2) and survey reminder postcards (Attachments R.1 and R.2) if multiple call attempts prove unsuccessful. We will also send a survey refusal letter (Attachments S.1 and S.2) to sample members who initially decline to complete the interview, emphasizing the importance of the survey and asking them to reconsider. Trained staff will conduct in-person field locating to try to reach participants who do not respond by telephone.

f. Administrative data

To minimize duplication of data collection efforts and decrease staff burden, we will use administrative data in the participation, impact, and cost-benefit analyses (see the list of administrative data elements in Attachment T). The types of data to be used are contained within wage, public assistance, and service receipt records. These three types of data are described below, followed by a description of our process for obtaining them.

Wage records. By law, employers subject to the unemployment insurance (UI) tax must report to the State UI agency the employment and earnings of each employee each quarter. We will request data from each State once per year for up to five years after random assignment.

Public assistance records. All States have integrated systems for SNAP and TANF. Many State systems also include Medicaid, though policy changes since the Affordable Care Act have made this less likely. We will use this system integration to our advantage by making combined requests for data on all programs linked in integrated systems. We will request a small set of variables for all pilot participants, including monthly SNAP participation status indicators, SNAP benefit amount, a TANF participation status indicator for the pilot participant, the benefit amount for the pilot participant’s TANF unit, and an indicator for whether the pilot participant is covered by Medicaid. We will also obtain information on income and sources of income for the SNAP unit. These data will be used in the impact analysis and will be obtained from Grantees quarterly, from the start of random assignment for up to five years.

We will also request Grantees’ State administrative data on SNAP participation or SNAP application rates to support the planned entry-effects analysis, which examines the impact of new work requirements on levels of SNAP participation in the areas where pilots operate. These data will be aggregated by geographic area, such as by county, within the state. Each grantee will provide them twice: once before the interim report in September 2018 and again after the pilots end in January 2020.

Data on service receipt. Pilot project staff will collect and provide data on services received through the pilot projects. These data are needed for monitoring site performance, describing the services received by treatment and control group members, documenting entry and exit dates for specific E&T activities, and providing data needed for the cost-benefit analysis.

Obtaining data from State agencies. In conjunction with our contractor, FNS will first work with the State agency director/manager (or designee) to identify the agency or agencies, staff, and data types relevant to the pilot project. We will then develop a comprehensive memorandum of understanding (MOU) for each State specifying the data sources and variables to be shared for the evaluation, procedures to ensure data security, and plans for developing and transferring a public use data file at the end of the evaluation. The timing and mode of transmission for data from each source will be determined in consultation between FNS’ contractor and the pilot site. An MOU template can be found in Attachment Z.

To the extent that the States capture the necessary administrative data in their SNAP MIS, we will acquire it as part of our collection of SNAP administrative data. For sites that already capture the required data at the desired level of detail in their E&T MIS, or can do so by adding items to their systems, we will arrange for regular extracts from the existing MIS or data collection system.

We will create a secure file transfer protocol (FTP) site to which project sites can transmit files on a predefined, agreed-upon schedule clearly stated in the MOU. All transmissions will be checked by Mathematica programmers, and error reports will be returned to the site, alerting staff to missing or out-of-range items and identifying cases for which no data have been recorded after a specified interval.
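As an illustration only, the sketch below shows the kind of checks such an error report could be built from. The column names, the benefit range, and the 90-day interval are hypothetical assumptions, not the specifications that will appear in each MOU.

# Illustrative sketch of transmission checks on a site's data extract:
# flag missing or out-of-range items and cases with no recent data.
# Column names, the benefit range, and MAX_GAP_DAYS are assumptions.
import csv
from datetime import date, datetime

MAX_GAP_DAYS = 90  # assumed interval after which a silent case is flagged

def build_error_report(extract_path, today=None):
    """Check one site's CSV extract and list items needing correction."""
    today = today or date.today()
    errors = []
    with open(extract_path, newline="") as f:
        for row in csv.DictReader(f):
            case_id = row.get("case_id") or "UNKNOWN"
            benefit = row.get("snap_benefit_amount", "")
            if benefit == "":
                errors.append(f"{case_id}: missing snap_benefit_amount")
            else:
                try:
                    amount = float(benefit)
                except ValueError:
                    errors.append(f"{case_id}: non-numeric benefit '{benefit}'")
                else:
                    if not 0 <= amount <= 2000:  # assumed plausible range
                        errors.append(f"{case_id}: out-of-range benefit {amount}")
            last = row.get("last_record_date")
            if last:
                last_date = datetime.strptime(last, "%Y-%m-%d").date()
                if (today - last_date).days > MAX_GAP_DAYS:
                    errors.append(f"{case_id}: no data recorded since {last}")
    return errors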

g. Cost data

The evaluation team will collect cost data from each pilot project to generate estimates of start-up and ongoing pilot project implementation costs and estimates of overall, per-component, and per-participant costs. In addition, data on the costs of providing services to treatment group participants and, to the extent possible, the costs of providing services to control group participants will also be collected. Cost data will be collected by experienced research personnel who fully understand the conceptual framework for the analysis, are adept at building rapport with site staff, and can describe complex cost issues clearly.

Pilot costs workbook. The evaluation team will use the pilot costs workbook (Attachment U) to collect data on pilot costs. The master workbook is designed to ensure systematic data collection across pilot projects but will be customized to account for pilot project differences and changes to pilot implementation over time. The pilot projects will likely incur similar types of costs (e.g., staff, facilities, services, and supplies and equipment), but the nature of these costs, particularly service costs, will vary by pilot project, and some pilot projects might incur costs that others do not, based on their specific project features. When a tab or field from the master pilot costs workbook does not apply to a particular pilot project, it will be removed from that project’s cost workbook. Workbooks will be further customized to account for pilot project differences as necessary; for instance, the “E. Services” tab will request data only on those services provided by each specific pilot project. To avoid duplicating effort and burdening sites, the evaluation team will discuss with FNS and the pilot projects the availability of existing data sources that might provide the information needed to complete the workbooks.

The pilot costs workbook will be administered to each pilot on a quarterly basis throughout pilot project implementation. The first round of cost data collection will occur immediately after the pilot planning period (November 2015) and will collect data on start-up costs. Data on ongoing pilot implementation costs will be collected quarterly thereafter (February 2016, May 2016, August 2016, November 2016, February 2017, May 2017, August 2017, November 2017, February 2018, May 2018), a schedule that is comparable to the grant reporting schedule and that will prevent unduly long recall periods. After the first round of data collection, and to further reduce respondent burden, the evaluation team will pre-fill those workbook fields that are not expected to change over time (e.g., staff names and facilities).

All pilots will receive the same master workbook, but not all elements of the workbook will apply to all 10 pilots. It is also likely that the grantee agency will be able to report data on some, but not all, pilot costs, and that pilot project service providers will also need to report on their pilot-related costs.

During the planning period, the pilots will be asked to identify a pilot cost liaison, the person in each pilot project responsible for tracking and reporting costs. With the evaluation team’s guidance and support, this cost liaison will complete the data collection workbooks and will assist with collecting data from service providers as necessary. Cost liaisons will submit cost data via the secure FTP site used to collect administrative data, in case personally identifiable information (PII) is included.

For quality assurance, each member of the pilot evaluation team, especially the cost analyst, will use his or her knowledge of each pilot and its staff and services to review the workbooks and ensure expected costs are captured. If cost data are reported that were not anticipated or seem out of range, we will follow up with the grantee. As data collection continues, we will check new workbooks against previously collected workbooks to identify anomalies and use these as a basis for follow-up with the grantee.
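A minimal sketch of this quarter-over-quarter check appears below; the cost categories and the 50 percent change threshold are illustrative assumptions rather than the evaluation team’s actual review rules.

# Sketch of a quarter-over-quarter workbook check: flag cost categories
# that disappear or change sharply relative to the prior workbook.
# Category names and the 50% threshold are illustrative assumptions.
def flag_cost_anomalies(previous, current, threshold=0.5):
    """Compare two quarterly cost workbooks (category -> dollars)."""
    flags = []
    for category, prev_cost in previous.items():
        cur_cost = current.get(category)
        if cur_cost is None:
            flags.append(f"{category}: reported last quarter, missing now")
        elif prev_cost > 0 and abs(cur_cost - prev_cost) / prev_cost > threshold:
            flags.append(f"{category}: changed from {prev_cost} to {cur_cost}")
    return flags

# Example: facilities costs are missing and staff costs more than doubled.
q1 = {"staff": 120000.0, "facilities": 30000.0}
q2 = {"staff": 250000.0}
for flag in flag_cost_anomalies(q1, q2):
    print(flag)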

Staff time-use survey. Because labor will be a major component of the pilot interventions, service cost estimates (e.g., the cost of providing case management) must account for how pilot project staff (particularly frontline staff) spend their time. The evaluation team will therefore administer a staff time-use survey (Attachment V.1) to frontline staff at each pilot project. The survey will ask staff to record how they divided their time between pilot and other activities each day over the course of one week.

The time-use survey will be administered three times during pilot project implementation, once per year for three years. Up to 26 frontline staff at each pilot project will complete the survey during each round of time-use survey data collection. When a pilot project has 26 or fewer frontline staff, the survey will be administered to all frontline staff during each of the three rounds. When a pilot project has more than 26 frontline staff, we will use simple random sampling to select a different sample of 26 frontline staff to complete the time-use survey during each round of data collection. Selected staff will receive an email with information about the survey and reminder emails if they do not complete it by the designated due date. Examples of the time-use survey initial and reminder emails can be found in Attachments W.1 and W.2, respectively.
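The sampling rule just described can be summarized in a few lines. In this sketch the roster contents are placeholders, and drawing with a fresh random state each round yields the different sample per round noted above.

# Sketch of the simple random sampling rule for the staff time-use survey:
# survey everyone when the roster has 26 or fewer frontline staff, otherwise
# draw a simple random sample of 26. Roster names are placeholders.
import random

def select_time_use_sample(frontline_staff, sample_size=26, seed=None):
    if len(frontline_staff) <= sample_size:
        return list(frontline_staff)  # census of all frontline staff
    rng = random.Random(seed)  # a new seed each round gives a new sample
    return rng.sample(frontline_staff, sample_size)

roster = [f"staff_{i:03d}" for i in range(1, 41)]  # e.g., 40 frontline staff
print(select_time_use_sample(roster, seed=1))  # round 1 draw
print(select_time_use_sample(roster, seed=2))  # round 2 draw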

A.3. Uses of improved information technology to reduce burden

Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection. Also, describe any consideration of using information technology to reduce burden.

This study strives to comply with the E-Government Act of 2002 (Public Law 107-347, 44 U.S.C. Ch. 36). The consent document (Attachments D.1–D.4) and registration document (Attachments C.1 and C.2) will be collected from SNAP E&T Pilots participants electronically (unless a particular site has logistical circumstances that do not allow for electronic collection) via EPIS, a web-based random assignment system. Burden is also being reduced by using computer-assisted telephone interviewing (CATI) to administer the follow-up surveys (Attachments O.1–O.6) to SNAP E&T Pilot program participants. By including programmed skip patterns and consistency and data range checks, this technology reduces the data entry errors that often necessitate callbacks to respondents to clarify responses recorded by an interviewer using pencil and paper. The study will collect 100 percent of follow-up survey responses electronically using CATI.

To the extent possible, any administrative and cost data requested from programs will be collected using Excel workbooks (Attachment U) which will be sent to sites via email. This format will enable us to systematically collect data across pilot programs while limiting the burden associated with hardcopy completion.

The staff time-use survey, administered as part of cost data collection, will use Opinio, an internet-based survey software application. This format will allow respondents to enter data at their own pace and on their own schedules. No personal data will be maintained in the system. Screenshots of the time-use survey can be found in Attachment V.2.

The total number of responses is 144,983 and the total number of electronic responses is 135,514. The percentage of responses that are electronic is 93.5 percent.
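For reference, the electronic share quoted above follows directly from these counts:

\[
\frac{135{,}514}{144{,}983} \approx 0.935 = 93.5\ \text{percent}
\]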

A.4. Efforts to identify and avoid duplication

Describe efforts to identify duplication. Show specifically why any similar information already available cannot be used or modified for use for the purpose described in item 2 above.

FNS has made every effort to avoid duplication. FNS has reviewed USDA reporting requirements, State administrative agency reporting requirements, and special studies by other government and private agencies. To our knowledge, there is no similar information available or being collected for the current time frame that could be used to evaluate these congressionally mandated pilot programs.

The information in the Registration Document consists of demographic and economic characteristics of the pilot participant and needs to be collected just before he or she is randomly assigned. Although some of this information, such as whether the pilot participant is employed, is contained in many states’ management information systems, we need to ask the questions the same way across all ten grantees so that the data are consistently defined across grantees in the impact analyses. We also need to obtain these data just before random assignment, rather than using existing information from prior weeks or months, so that the data accurately describe pilot participants’ circumstances at baseline. Finally, for several grantees, the registration data will be collected at the service provider level rather than at the SNAP administrative level. Thus, although pilot participants’ Social Security Numbers (SSNs) are available in many grantees’ SNAP administrative data systems, we are requesting SSNs on the Registration Document to be able to link the registration data to the SNAP administrative data. Without them, we would not be able to link the two data sources successfully.

The information in the follow-up surveys is not available elsewhere. We are collecting data on quarterly earnings from UI wage records, for example, but those records do not contain the more detailed employment and earnings information, such as measures of job quality, job tenure, and more detailed wage information, that is required for the impact analysis.

A.5. Efforts to minimize burden on small businesses or other entities

If the collection of information impacts small businesses or other small entities, describe any methods used to minimize burden.

Information being requested or required has been held to the minimum required for the intended use. Although small entities, including State agencies serving as Grantees and for-profit and not-for-profit awardee partners, are involved in this data collection effort, they deliver the same program benefits and perform the same functions as any other State agency or business partner, and thus they maintain the same kinds of administrative information on file. We estimate that one small business or other small entity will serve as a partner to each pilot program (ten total). The same methods to minimize burden will be used with all such entities. The total number of small entities is 960, or 1.8 percent of the total number of respondents. To avoid burdening for-profit contractors and entities playing a minor role in the pilot program, the evaluation contractor will exclude from the data collection those that receive minimal funding or resources from the awardee.

A.6. Consequences of less frequent data collection

Describe the consequence to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles to reducing burden.

The proposed study will evaluate the impact, implementation, and costs of pilot programs that foster employment and reduce reliance on public assistance programs. This is an ongoing data collection, and participation is voluntary. Data for the study will be collected from 2016 to 2018 from State agency staff; private sector for-profit and not-for-profit partner organization staff; and SNAP E&T Pilot program participants. Without this information, FNS will not be able to produce the required annual Reports to Congress. Moreover, collecting data less frequently would jeopardize the impact evaluation, because the design requires an assessment of change over time, including both short-term and longer-term assessments. Tracking SNAP E&T participants via a longitudinal survey also will allow FNS to better understand how participants engage in services, acquire job skills, and obtain (and subsequently retain) employment.

A.7. Special circumstances requiring collection of information in a manner inconsistent with Section 1320.5(d)(2) of the Code of Federal Regulations

Explain any special circumstances that would cause an information collection to be conducted in a manner

  • requiring respondents to report information to the agency more often than quarterly;

  • requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;

  • requiring respondents to submit more than an original and two copies of any document;

  • requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records for more than three years;

  • in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;

  • requiring the use of a statistical data classification that has not been reviewed and approved by OMB;

  • that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or

  • requiring respondents to submit proprietary trade secrets, or other confidential information, unless the agency can demonstrate that it has instituted procedures to protect the information's confidentiality to the extent permitted by law.

There are no special circumstances that would cause FNS to conduct this information collection in a manner inconsistent with 5 CFR 1320.5.

A.8. Federal Register comments and efforts to consult with persons outside the agency

If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency's notice, soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and describe actions taken by the agency in response to these comments.

1. Federal Register notice and comments

A notice of the proposed information collection and an invitation for public comment was published in the Federal Register on May 20, 2015 (volume 80, number 97, pages 28931–28936). FNS received one comment via email on this proposed information collection, which can be seen in Attachment CC. No response was made to this comment.

2. Consultations outside the agency

In addition to soliciting comments from the public, FNS consulted with the following people for their expertise in matters such as data sources and availability, research design, sample design, level of burden, and clarity of instructions for this collection.

Leslie Smith

USDA National Agricultural Statistics Service (NASS)

Methodology Division

1400 Independence Ave., SW

Washington, DC 20250

(800) 727-9540

See Attachment Y for NASS reviewer comments and responses to these comments.


Bryan Wilson

State Policy Director

National Skills Coalition

1730 Rhode Island Avenue NW, Suite 172

Washington, DC 20036

(202) 223-8991

Brendan C. Kelly

Office of Planning, Research and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

370 L’Enfant Promenade SW

Washington, DC 20447

(202) 401-4535


Harvey Krahn

Department of Sociology

5-21 HM Tory Building

University of Alberta

Edmonton, Alberta

Canada T6G 2H4

(780) 492-5234

Philip Hong

Loyola University Chicago School of Social Work

820 N. Michigan Avenue, Lewis Towers 1238

Chicago, IL 60611

(312) 915-7005


Matthew Rabbitt

U.S. Department of Agriculture

Economic Research Service

1400 Independence Ave., SW Mail Stop 1800

Washington, DC 20250-0002

(202) 694-5139


Charlotte Tuttle

U.S. Department of Agriculture

Economic Research Service

1400 Independence Ave., SW Mail Stop 1800

Washington, DC 20250-0002

(202) 694-5139

A.9. Payments to respondents

Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.

Research has long shown that incentives can increase response rates to surveys (thus minimizing non-response bias) without compromising the quality of the data (Singer and Kulka 2002; Singer et al. 1999). Further, sufficient incentives can help obtain a high cooperation rate and minimize the need for field interviewers to locate sample members to complete the survey. Incentives for this information collection are planned for the SNAP E&T Pilots participant focus groups and participant surveys, all of which are voluntary for respondents. Incentives are not planned for data collection activities with Grantee staff because they applied for and were awarded the grant and therefore are expected to participate fully in this data collection.

A random sample of the participants enrolled in the pilot programs (approximately half) will be selected for the household participant surveys. The incentives are an essential component of the multiple approaches used to minimize non-response bias described in Section B.3 of this information collection request, and they are especially critical because of the longitudinal design, in which household respondents will be contacted for up to 36 months for two follow-up surveys. Participants may be asked to engage in multiple data collection events during specified windows of time and during a period in their lives when they face competing demands from young children and other family and work obligations. These respondents are exerting unusual effort, and therefore the potential for response bias among subsets of participants must be avoided proactively to ensure high quality data.

The first incentive, the magnet that is sent with the welcome letter, is non-monetary. The magnet is provided as a small gift and will provide an ongoing tangible reminder about the study. It will also include the study toll-free number that the sample members can call if they have questions, or if they want to update their contact information. The first monetary incentive provided to sample members was the $40 MAX Discover® prepaid card given to pretest respondents as described in the pretest results memorandum (Attachment N).

As mentioned, the longitudinal household surveys will be conducted with a randomly selected subsample of study participants. To achieve the higher-range response rate needed to obtain reliable impact estimates, our contractor, on behalf of FNS, will offer an incentive of $30 to all households for the 12-month follow-up survey and $40 for the 36-month follow-up survey. The survey incentives proposed for the SNAP E&T Pilots Evaluation are based on the characteristics of the study population and on experience conducting telephone surveys with similar low-income populations:

  • The USDA-sponsored Supplemental Nutrition Assistance Program on Food Security study (OMB Control Number 0584-0563, discontinued September 19, 2011) offered a modest $2 pre-pay incentive and a $20 post-pay incentive upon completion of the telephone interview and had response rates of 56 percent at baseline and 67 percent at a six-month follow-up.

  • Site-specific baseline survey response rates in the USDA-sponsored 2012 SEBTC study (OMB Control Number 0584-0559, discontinued March 31, 2014) ranged from 39 percent to 79 percent across 14 sites using a $25 incentive. The average unweighted response rate was 67 percent; the rate was 53 percent for passive consent sites and 75 percent for active consent sites (Briefel et al. 2013). Despite not achieving the target response rate, the increase in the incentive for the 2012 full demonstration year, from $10 to $25, proved effective, improving the response rate by 12 percent unweighted and 15 percent weighted over the 2011 pilot year. The increased incentive for the 2012 survey was also successful in addressing the respondent fatigue that was evident during the 2011 pilot year.

Both of these recent studies, conducted with populations similar to the SNAP E&T Pilots population, indicate that a $22-$25 incentive alone may not be sufficient to reach the higher end of the target response rate for this study (i.e., 80 percent). As such, we propose a $30 incentive for the 12-month survey and a $40 incentive for the 36-month survey. Increasing the incentive to $40 for the 36-month survey may help keep respondents engaged, reduce fatigue with the number of data collection activities required over time, and minimize response bias in the study.

Mercer et al. (2015) conducted a meta-analysis of the dose-response association between incentives and response and found a positive relationship between higher incentives and response rates for household telephone surveys offering post-pay incentives. Singer et al. (1999) found in an earlier meta-analysis that incentives in face-to-face and telephone surveys were effective at increasing response rates, with a one-dollar increase in incentive yielding approximately a one-third of a percentage point increase in response rate, on average. Further, sufficient incentives can help obtain a high cooperation rate for both the baseline and follow-up surveys, so that less field interviewer effort will be needed at follow-up to locate sample members to complete the survey.

The above discussion summarizes evidence for the effectiveness of incentives in reducing non-response bias and the response rates associated with offering lower incentive amounts to highly similar target populations. In addition, offering incentives, and the amounts to be offered, are justified for several reasons that address key Office of Management and Budget (OMB) considerations (Office of Management and Budget 2006):

  • Improved data quality. Incentives can increase sample representativeness. Because incentives may be more salient to some sample members than to others, respondents who would otherwise not consider participating in the surveys may do so because of the incentive offer (Groves et al. 2000).

  • Improved coverage of specialized respondents. Some of the populations targeted by the pilot programs, including homeless individuals and ex-offenders, are considered hard to reach (Bonevski et al. 2014). In addition, households in some of the pilot areas are specialized respondents because they are limited in number and difficult to recruit, and their lack of participation jeopardizes the impact study. Incentives may encourage greater participation among these groups.

  • Reduced respondent burden. As described above, the incentive amounts planned for the SNAP E&T Pilots Evaluation are justified because they are commensurate with the costs of participation, which can include cellular telephone usage or travel to a location with telephone service, particularly for the homeless population served by some of the pilot programs.

  • Complex study design. The participant surveys collected for the impact study are longitudinal. Participants will be asked to complete a registration document and two surveys over a period of 36 months. Incentives in amounts similar to those planned for this evaluation have been shown to increase response rates, decrease refusals and noncontacts, and increase data quality compared to a no-incentive control group in a longitudinal study (Singer and Ye 2013).

  • Past experience. The studies described above suggest incentives for surveys fielded to similar low-income study populations may be effective.

  • Equity. The incentive amounts will be offered equally to all potential survey participants. The incentives will not be targeted to specific subgroups, nor will they be used to convert refusals. Moreover, if incentives were offered only to the most disadvantaged individuals, such as the homeless or ex-offenders, the differing motivations to participate across projects would limit the ability to compare results across target populations and sites.

In summary, the planned incentives for the longitudinal household surveys are designed to promote cooperation and high data quality and to reduce participant burden and costs associated with the surveys, which are similar in length to, and will be conducted with populations similar to, those in other OMB-approved information collections.

The planned $50 incentives for the focus groups and $50 incentives for the case study interviews are also consistent with many of the key OMB considerations described above, as well as with other OMB-approved information collections. For example, $50 incentives are currently being offered to community members, including parents, participating in one-hour telephone interviews for the Evaluation of the Pilot Project for Canned, Frozen, or Dried Fruits and Vegetables in the Fresh Fruit and Vegetable Program for USDA/FNS (OMB Control Number 0584-0598, Expiration Date September 30, 2017). The study to assess the effect of the Supplemental Nutrition Assistance Program on Food Security (OMB Control Number 0584-0563, discontinued September 19, 2011) offered a $30 incentive to SNAP recipients for completing a 90-minute in-depth interview.

Although circumstances will vary, some focus group participants may need to travel long distances to focus group facilities or to the provider offices where the case study interviews will take place. Participants may also incur child care costs for the time spent in the discussions or interviews and in traveling. The planned incentive amount is consistent with these costs of participation for some respondents.



A.10. Assurance of privacy

Describe any assurance of privacy provided to respondents and the basis for the assurance in statute, regulation, or agency policy.

1. Privacy

In accordance with the Privacy Act of 1974, our contractor and study team will protect the identifiable information collected for the evaluation from disclosure, to the extent permitted by law, and will use it for research purposes only, except as otherwise required by law. To reduce the risk of disclosure, personally identifiable data will not be entered into the analysis file, and data records will contain a numeric identifier only. The terms and protections that will be provided to respondents are discussed in two systems of records notices (SORNs): 1) FNS-8 USDA FNS Studies and Reports, published in the Federal Register on April 25, 1991, Volume 56, page 19078; and 2) USDA/FNS-10 Persons Doing Business with the Food and Nutrition Service, published in the Federal Register on March 31, 2000, Volume 65, pages 17251-17252. Pilot program and partner staff and individual program participants will be notified that the information they provide will not be released in a form that identifies them, except as otherwise required by law. No identifying information will be attached to any reports or data supplied to USDA or any other researchers. The identities of the project directors from the States are known, because their information was included on applications to participate in the pilot program.

When reporting the results, data will be presented only in aggregate form, so that individuals and institutions will not be identified. A statement to this effect will be included with all requests for data. All members of the study team having access to the data will be trained on the importance of privacy and data security. All data will be kept in secured locations. Identifiers will be destroyed as soon as they are no longer required.

FNS staff will never handle or see any of the personal data collected. Mathematica Policy Research's systems do not tie into any of FNS's data management and analysis systems, nor were Mathematica Policy Research's data creation and processing systems created for this contract. FNS does not have any control over the contractor's systems.

The following safeguards will be employed by FNS’s contractor to protect privacy during the study:

  • Access to identifying information on sample members will be limited to those who have direct responsibility for providing and maintaining sample locating information. At the conclusion of the research, these data will be destroyed.

  • Identifying information will be maintained in separate forms and files that are linked only by sample identification numbers; these numbers alone cannot be linked back to any individual.

  • Access to the file linking sample identification numbers with respondents’ IDs and contact information will be limited to a small number of individuals who have a need to know this information.

  • Access to hard copy documents will be rigorously limited to project staff with a need to know. Documents will be stored in locked files and cabinets. Documents containing PII will be shredded when discarded.

  • Computer data files will be protected with passwords and access will be limited to specific users on the research team.

  • Employees must notify their supervisor, the project director, and the Mathematica security officer if secured and private information has been disclosed to an unauthorized person, used in an improper manner, or altered in an improper manner.

A copy of the Confidentiality Pledge, in which the contractor's employees provide assurances to uphold the above safeguards, is included in Attachment BB.

2. Institutional Review Board (IRB)

The contractor will obtain clearance from the New England Institutional Review Board (NEIRB), a recognized IRB for research involving human subjects. NEIRB is responsible for ensuring that the organization's research: 1) meets the highest ethical standards; and 2) receives fair, timely, and collegial review by an external panel. NEIRB is part of the contractor's Federal Wide Assurance (FWA) and is committed to complying with the requirements of the HHS Protection of Human Subjects regulations at 45 CFR part 46.

For review, the NEIRB requires a summary of the study and several forms describing the pilot sites, as well as all instruments and attachments from the OMB package. After the final instruments are sent to FNS, the contractor will submit the IRB application to the NEIRB. We expect to have NEIRB clearance by the end of October. We also expect that a few States will require local IRB reviews, and the contractor will obtain clearance from these as needed.

A.11. Questions of a sensitive nature

Provide additional justification for any questions of a sensitive nature, such as sexual behavior or attitudes, religious beliefs, and other matters that are commonly considered private. This justification should include the reasons why the agency considers the questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.

While the questions in the registration document (Attachments C.1 and C.2) and follow-up surveys (Attachments O.1, O.2, O.4, and O.5) are generally not of a sensitive nature, some individuals may be reluctant to provide information on race, family structure, income, food security, and/or mental health and well-being, as well as their Social Security numbers. Obtaining answers to such questions is, however, essential to the effective execution of this study. For example, food security is a secondary outcome measure and serves as the main indicator of household health and well-being. The food security measures included in the telephone interview are widely used as indicators of adequate food access in a number of major public national surveys, including the Current Population Survey. In addition, race and ethnicity, income, and sources of income are critical background characteristics, both because they define key subgroups of individuals and because they are important control variables in the assessment of program impacts. Finally, mental health and well-being measures apart from food security, such as questions on whether the respondent experiences depressive symptoms, will be used to define other secondary outcome measures.

FNS cannot obtain this information from existing sources, either because it is not collected as part of standard program administration (for example, food security and mental health measures) or because it is neither measured consistently across grantees nor measured at the point of random assignment (for example, race and ethnicity, income, and income sources). Not having this information would prevent FNS from estimating the impact of the pilots on the full set of outcome measures that the Agriculture Act of 2014 specified to be evaluated. It would also compromise the statistical modeling approach by omitting key variables from the statistical impact models.

Individuals may also be reluctant to provide SSNs. As stated in section A.4, however, SSNs are required to link the Baseline Information Registration data with SNAP administrative data in order to examine the impact of the pilots on public assistance receipt.

Overall, this information is essential to measure key outcomes of the pilots and is needed for covariates or to form subgroups in the analyses. Prior to random assignment, participants will consent to participate in the study, including the follow-up surveys. At the time a participant is called to complete a survey, he or she will be reminded that participation is voluntary, that any question may be declined, and that there will be no penalties for declining.

As described in section A.10, in accordance with the Privacy Act of 1974, our contractor and study team will protect the privacy of all information collected for the evaluation and will use it for research purposes only, except as otherwise required by law. This applies to all data including questions that respondents perceive to be of a sensitive nature.

Calculating accurate estimates of program costs requires collecting information on staff salaries. The importance of this information will be explained to study respondents.

A.12. Estimates of respondent burden

Provide estimates of the hour burden of the collection of information.

  • Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated. If this request for approval covers more than one form, provide separate hour burden estimates for each form and aggregate the hour burdens in Item 13 of OMB Form 83-I.

There are a total of 53,830 respondents and non-respondents. The affected public in this study comprises 190 State agency staff; 690 for-profit and not-for-profit business staff; and 52,861 individuals/households. These counts include those who choose not to participate (known as non-respondents). FNS anticipates 100 percent participation from its grantees (State agencies and businesses). There are also a total of 317,108 responses (300,869 responses from respondents plus 16,239 from non-respondents). Attachment X shows sample sizes, estimated burden, and estimated annualized cost of respondent burden for each part of the data collection and for the data collection as a whole. The annual total estimated burden across all data collection components is 49,972.09 hours (49,090.81 hours for respondents plus 881.28 hours for non-respondents). Time for reading data collection materials, such as letters, postcards, and emails, is included in the time estimates in the burden table. No respondents will be asked to keep records as part of this data collection; therefore, no burden hours have been estimated for recordkeeping.

  • Provide estimates of annualized cost to respondents for the hour burdens for collections of information, identifying and using appropriate wage rate categories.

Annualized cost of respondent burden is the product of each type of respondent's annual burden and average hourly wage rate. The total cost of respondent burden is $781,721.47. The total annualized cost of this information collection is calculated as the sum of the annualized costs by respondent category. For each respondent category, the annualized cost is the product of burden hours (including pretest and non-respondent burden) and an assumed wage rate for a corresponding occupation.

The hourly wage rate of $7.25 for individuals/participants is the federal minimum wage rate according to the Department of Labor Wage and Hour Division (http://www.dol.gov/whd/minimumwage.htm).

Remaining wage rates for the other affected publics were determined using the most recent available data, the May 2014 National Occupational Employment and Wage Estimates from the Bureau of Labor Statistics (http://www.bls.gov/oes/current/oes_nat.htm). Based on these data, the wage rate for State, local, or Tribal agency director/manager respondents ($54.08) is the average hourly earnings of government workers in management occupations (11-0000). The wage rate for State, local, or Tribal agency direct service staff respondents ($21.79) is the average hourly earnings of workers in community and social services occupations (21-0000).

For the private sector, the wage rate for for-profit business director/manager respondents ($54.08) is the average hourly earnings of workers in management occupations (11-0000). The wage rate for not-for-profit agency director/manager respondents ($32.56) is the average hourly earnings of social and community services managers (11-9151). The wage rate for not-for-profit agency employer training supervisor respondents and direct service staff ($21.03) is the average hourly earnings of community and social service specialists (21-1099).
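To make the burden arithmetic concrete, the sketch below recomputes two illustrative rows of Table A.12 using the formulas described above (burden hours = responses × average time per response, summed over respondents and non-respondents; annual cost = total burden hours × hourly wage). This is an illustration only; the code and the selection of rows are ours and are not part of the information collection.

```python
# Illustrative recomputation of two Table A.12 rows (sketch only, not part of the ICR).
# Each entry: (instrument, respondents, hrs per response, non-respondents, hrs per non-response).
rows = [
    ("Registration Document",               52_852, 0.20,     0, 0.00),
    ("Telephone survey (12-mon follow-up)", 18_240, 0.53, 6_760, 0.05),
]
WAGE = 7.25  # Federal minimum wage used for individuals/participants

for name, n_resp, hrs_resp, n_non, hrs_non in rows:
    # Frequency of response is 1 for both rows, so responses equal the counts shown.
    burden = n_resp * hrs_resp + n_non * hrs_non
    print(f"{name}: {burden:,.2f} burden hours, ${burden * WAGE:,.2f} annual cost")

# Registration Document: 10,570.40 burden hours, $76,635.40 annual cost
# Telephone survey (12-mon follow-up): 10,005.20 burden hours, $72,537.70 annual cost
```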

Table A.12. Estimates of respondent burden

| Respondent type | Instrument | Sample size | Respondents | Resp. freq. | Resp. responses | Resp. hrs./response | Resp. burden (hrs.) | Non-respondents | Non-resp. freq. | Non-resp. responses | Non-resp. hrs./response | Non-resp. burden (hrs.) | Grand total burden (hrs.) | Hourly wage** | Total annual cost |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Individuals/households | | | | | | | | | | | | | | | |
| Participant | Registration Document | 52,852 | 52,852 | 1 | 52,852 | 0.20 | 10,570.40 | 0 | 0 | 0 | 0 | 0.00 | 10,570.40 | $7.25 | $76,635.40 |
| Participant | Consent Document | 52,852 | 52,852 | 1 | 52,852 | 0.05 | 2,642.60 | 0 | 0 | 0 | 0 | 0.00 | 2,642.60 | $7.25 | $19,158.85 |
| Participant | Pretest | 18 | 9 | 1 | 9 | 0.66 | 5.94 | 9 | 1 | 9 | 0.05 | 0.45 | 6.39 | $7.25 | $46.33 |
| Participant | Welcome Packet and Study Brochure | 25,000 | 25,000 | 1 | 25,000 | 0.05 | 1,250.00 | 0 | 1 | 0 | 0.05 | 0.00 | 1,250.00 | $7.25 | $9,062.50 |
| Participant | Seasonal Postcard | 25,000 | 25,000 | 1 | 25,000 | 0.17 | 4,250.00 | 0 | 1 | 0 | 0.05 | 0.00 | 4,250.00 | $7.25 | $30,812.50 |
| Participant | Survey Advance Letter (12-mon follow-up) | 25,000 | 18,240 | 1 | 18,240 | 0.033 | 601.92 | 0 | 1 | 0 | 0.05 | 0.00 | 601.92 | $7.25 | $4,363.92 |
| Participant | Survey Reminder Letter (12-mon follow-up) | 12,500 | 12,500 | 1 | 12,500 | 0.033 | 412.50 | 0 | 1 | 0 | 0.05 | 0.00 | 412.50 | $7.25 | $2,990.63 |
| Participant | Survey Reminder Postcard (12-mon follow-up) | 12,500 | 12,500 | 1 | 12,500 | 0.0167 | 208.75 | 0 | 1 | 0 | 0.05 | 0.00 | 208.75 | $7.25 | $1,513.44 |
| Participant | Survey Refusal Letter (12-mon follow-up) | 3,750 | 3,750 | 1 | 3,750 | 0.033 | 123.75 | 0 | 1 | 0 | 0.05 | 0.00 | 123.75 | $7.25 | $897.19 |
| Participant | Telephone survey (12-mon follow-up) | 25,000 | 18,240 | 1 | 18,240 | 0.53 | 9,667.20 | 6,760 | 1 | 6,760 | 0.05 | 338.00 | 10,005.20 | $7.25 | $72,537.70 |
| Participant | Seasonal Postcard | 18,240 | 18,240 | 1 | 18,240 | 0.0167 | 304.61 | 0 | 1 | 0 | 0.05 | 0.00 | 304.61 | $7.25 | $2,208.41 |
| Participant | Survey Advance Letter (36-mon follow-up) | 18,240 | 18,240 | 1 | 18,240 | 0.033 | 601.92 | 0 | 1 | 0 | 0.05 | 0.00 | 601.92 | $7.25 | $4,363.92 |
| Participant | Survey Reminder Letter (36-mon follow-up) | 9,120 | 9,120 | 1 | 9,120 | 0.033 | 300.96 | 0 | 1 | 0 | 0.05 | 0.00 | 300.96 | $7.25 | $2,181.96 |
| Participant | Survey Reminder Postcard (36-mon follow-up) | 9,120 | 9,120 | 1 | 9,120 | 0.0167 | 152.30 | 0 | 1 | 0 | 0.05 | 0.00 | 152.30 | $7.25 | $1,104.20 |
| Participant | Survey Refusal Letter (36-mon follow-up) | 2,736 | 2,736 | 1 | 2,736 | 0.033 | 90.29 | 0 | 1 | 0 | 0.05 | 0.00 | 90.29 | $7.25 | $654.59 |
| Participant | Telephone survey (36-mon follow-up) | 18,240 | 11,090 | 1 | 11,090 | 0.53 | 5,877.70 | 7,150 | 1 | 7,150 | 0.05 | 357.50 | 6,235.20 | $7.25 | $45,205.20 |
| Participant | Focus Group Recruitment Guide | 1,200 | 240 | 1 | 240 | 0.17 | 40.08 | 960 | 1 | 960 | 0.08 | 80.00 | 120.08 | $7.25 | $870.58 |
| Participant | Focus Group Confirmation Letter | 240 | 240 | 1 | 240 | 0.03 | 7.92 | 0 | 1 | 0 | 0.08 | 0.00 | 7.92 | $7.25 | $57.42 |
| Participant | Focus Group & Information Survey | 1,200 | 240 | 1 | 240 | 1.67 | 400.00 | 960 | 1 | 960 | 0.08 | 80.00 | 480.00 | $7.25 | $3,480.00 |
| Participant | Case Study | 200 | 40 | 1 | 40 | 1.67 | 66.67 | 160 | 1 | 160 | 0.08 | 13.33 | 80.00 | $7.25 | $580.00 |
| Subtotal, unique individuals/households | | 52,870 | 52,861 | 5.49 | 290,249 | 0.13 | 37,576 | -* | - | 15,999 | - | 869 | 38,445 | - | $278,725 |
| State, local, and Tribal government | | | | | | | | | | | | | | | |
| State, local, or Tribal agency director/manager | In-person interview (round 1) and cost/benefit interviews | 170 | 170 | 1 | 170 | 1.00 | 170.00 | 0 | 0 | 0 | 0 | 0.00 | 170.00 | $54.08 | $9,193.60 |
| State, local, or Tribal agency director/manager | In-person interview (round 2) | 150 | 150 | 1 | 150 | 1.00 | 150.00 | 0 | 0 | 0 | 0 | 0.00 | 150.00 | $54.08 | $8,112.00 |
| State, local, or Tribal agency direct service staff | In-person interview (round 3) | 150 | 150 | 1 | 150 | 1.00 | 150.00 | 0 | 0 | 0 | 0 | 0.00 | 150.00 | $21.79 | $3,268.50 |
| State, local, or Tribal agency direct service staff | Case Study | 10 | 10 | 1 | 10 | 1.00 | 10.00 | 0 | 0 | 0 | 0 | 0.00 | 10.00 | $21.79 | $217.90 |
| State, local, or Tribal agency direct service staff | Provide documents for review | 10 | 10 | 4 | 40 | 0.25 | 10.00 | 0 | 0 | 0 | 0 | 0.00 | 10.00 | $21.79 | $217.90 |
| State, local, or Tribal agency direct service staff | Complete MOU | 10 | 10 | 1 | 10 | 1.00 | 10.00 | 0 | 0 | 0 | 0 | 0.00 | 10.00 | $21.79 | $217.90 |
| State, local, or Tribal agency director/manager | Provide wage data | 10 | 10 | 5 | 50 | 2.50 | 125.00 | 0 | 0 | 0 | 0 | 0.00 | 125.00 | $54.08 | $6,760.00 |
| State, local, or Tribal agency director/manager | Provide SNAP/Medicaid/TANF data | 10 | 10 | 20 | 200 | 2.50 | 500.00 | 0 | 0 | 0 | 0 | 0.00 | 500.00 | $54.08 | $27,040.00 |
| State, local, or Tribal agency director/manager | Provide entry effects data | 10 | 10 | 2 | 20 | 2.50 | 50.00 | 0 | 0 | 0 | 0 | 0.00 | 50.00 | $54.08 | $2,704.00 |
| State, local, or Tribal agency director/manager | Cost/benefit interviews (after visit 1) | 10 | 10 | 19 | 190 | 1.00 | 190.00 | 0 | 0 | 0 | 0 | 0.00 | 190.00 | $54.08 | $10,275.20 |
| State, local, or Tribal agency director/manager | Provide cost data | 10 | 10 | 11 | 110 | 2.00 | 220.00 | 0 | 0 | 0 | 0 | 0.00 | 220.00 | $54.08 | $11,897.60 |
| Subtotal, unique State, local, and Tribal government | | 190 | 190 | 5.79 | 1,100 | 1.44 | 1,585.00 | 0 | - | 0 | - | 0.00 | 1,585.00 | - | $79,905 |
| Business (for-profit and not-for-profit) | | | | | | | | | | | | | | | |
| Private sector for-profit business director/manager | In-person interview (round 1) | 75 | 75 | 1 | 75 | 1.00 | 75.00 | 0 | 0 | 0 | 0 | 0.00 | 75.00 | $54.08 | $4,056.00 |
| Private sector for-profit business director/manager | In-person interview (round 2) | 75 | 75 | 1 | 75 | 1.00 | 75.00 | 0 | 0 | 0 | 0 | 0.00 | 75.00 | $54.08 | $4,056.00 |
| Private sector for-profit business director/manager | In-person interview (round 3) | 75 | 75 | 1 | 75 | 1.00 | 75.00 | 0 | 0 | 0 | 0 | 0.00 | 75.00 | $54.08 | $4,056.00 |
| Private sector for-profit business director/manager | Case Study | 55 | 55 | 1 | 55 | 1.00 | 55.00 | 0 | 0 | 0 | 0 | 0.00 | 55.00 | $54.08 | $2,974.40 |
| Private sector for-profit business director/manager | Provide administrative data | 50 | 50 | 12 | 600 | 4.00 | 2,400.00 | 0 | 0 | 0 | 0 | 0.00 | 2,400.00 | $54.08 | $129,792.00 |
| Private sector for-profit business director/manager | Cost/benefit interviews | 100 | 100 | 20 | 2,000 | 0.50 | 1,000.00 | 0 | 0 | 0 | 0 | 0.00 | 1,000.00 | $54.08 | $54,080.00 |
| Private sector for-profit business director/manager | Provide cost data | 10 | 100 | 11 | 1,100 | 1.00 | 1,100.00 | 0 | 0 | 0 | 0 | 0.00 | 1,100.00 | $54.08 | $59,488.00 |
| Private sector for-profit direct service staff | Time Use Survey Initial Email | 80 | 80 | 3 | 240 | 0.0167 | 4.01 | 0 | 0 | 0 | 0 | 0.00 | 4.01 | $21.03 | $84.29 |
| Private sector for-profit direct service staff | Time Use Survey Reminder Letter | 40 | 40 | 3 | 120 | 0.033 | 3.96 | 0 | 0 | 0 | 0 | 0.00 | 3.96 | $21.03 | $83.28 |
| Private sector for-profit direct service staff | Time Use Survey | 80 | 80 | 3 | 240 | 0.33 | 79.20 | 0 | 0 | 0 | 0 | 0.00 | 79.20 | $21.03 | $1,665.58 |
| Private sector not-for-profit agency director/manager | In-person interview (round 1) | 75 | 75 | 1 | 75 | 1.00 | 75.00 | 0 | 0 | 0 | 0 | 0.00 | 75.00 | $32.56 | $2,442.00 |
| Private sector not-for-profit agency director/manager | In-person interview (round 2) | 75 | 75 | 1 | 75 | 1.00 | 75.00 | 0 | 0 | 0 | 0 | 0.00 | 75.00 | $32.56 | $2,442.00 |
| Private sector not-for-profit agency director/manager | In-person interview (round 3) | 75 | 75 | 1 | 75 | 1.00 | 75.00 | 0 | 0 | 0 | 0 | 0.00 | 75.00 | $32.56 | $2,442.00 |
| Private sector not-for-profit agency director/manager | Case Study | 55 | 55 | 1 | 55 | 1.00 | 55.00 | 0 | 0 | 0 | 0 | 0.00 | 55.00 | $32.56 | $1,790.80 |
| Private sector not-for-profit agency director/manager | Provide administrative data | 50 | 50 | 12 | 600 | 4.00 | 2,400.00 | 0 | 0 | 0 | 0 | 0.00 | 2,400.00 | $32.56 | $78,144.00 |
| Private sector not-for-profit agency director/manager | Cost/benefit interviews | 100 | 100 | 20 | 2,000 | 0.50 | 1,000.00 | 0 | 0 | 0 | 0 | 0.00 | 1,000.00 | $32.56 | $32,560.00 |
| Private sector not-for-profit agency director/manager | Provide cost data | 100 | 100 | 11 | 1,100 | 1.00 | 1,100.00 | 0 | 0 | 0 | 0 | 0.00 | 1,100.00 | $32.56 | $35,816.00 |
| Private sector not-for-profit direct service staff | Time Use Survey Initial Email | 80 | 80 | 3 | 240 | 0.0167 | 4.01 | 0 | 0 | 0 | 0 | 0.00 | 4.01 | $21.03 | $84.29 |
| Private sector not-for-profit direct service staff | Time Use Survey Reminder Letter | 40 | 40 | 3 | 120 | 0.033 | 3.96 | 0 | 0 | 0 | 0 | 0.00 | 3.96 | $21.03 | $83.28 |
| Private sector not-for-profit agency director/manager | Time Use Survey | 80 | 80 | 3 | 240 | 0.33 | 79.20 | 0 | 0 | 0 | 0 | 0.00 | 79.20 | $32.56 | $2,578.75 |
| Private sector not-for-profit employer training supervisor | Focus Group Recruitment Email | 200 | 120 | 1 | 120 | 0.0167 | 2.00 | 80 | 1 | 80 | 0.05 | 4.00 | 6.00 | $21.03 | $126.26 |
| Private sector not-for-profit employer training supervisor | Focus Group Confirmation Letter | 200 | 120 | 1 | 120 | 0.033 | 3.96 | 80 | 1 | 80 | 0.05 | 4.00 | 7.96 | $21.03 | $167.40 |
| Private sector not-for-profit employer training supervisor | Focus Group | 200 | 120 | 1 | 120 | 1.58 | 190.00 | 80 | 1 | 80 | 0.05 | 4.00 | 194.00 | $21.03 | $4,079.82 |
| Subtotal, unique private/business sector | | 770 | 690 | 13.80 | 9,520 | 1.04 | 9,930 | 240 | - | 240 | - | 12 | 9,942.30 | - | $423,092.15 |
| Grand total | | 53,830 | 53,741 | 5.60 | 300,869 | 0.16 | 49,090.81 | -* | 1.00 | 16,239 | 0.05 | 881.28 | 49,972.09 | - | $781,721.47 |

* Non-respondents are part of the total individuals who completed the registration document.

** Sources: Department of Labor Wage and Hour Division (http://www.dol.gov/whd/minimumwage.htm); Bureau of Labor Statistics, Occupational Employment Statistics Survey, May 2014 (http://www.bls.gov/oes/current/oes_nat.htm). Individuals/participants: Federal minimum wage. State, local, or Tribal agency director/manager: average hourly earnings of workers in management occupations. State, local, or Tribal agency direct service staff: average hourly earnings of workers in community and social services occupations. Private sector for-profit business director/manager: average hourly earnings of workers in management occupations. Private sector not-for-profit agency director/manager: average hourly earnings of social and community services managers. Private sector not-for-profit employer training supervisors and direct service staff: average hourly earnings of community and social service specialists.





A.13. Estimates of other annual costs to respondents

Provide estimates of the total annual cost burden to respondents or record keepers resulting from the collection of information, (do not include the cost of any hour burden shown in items 12 and 14). The cost estimates should be split into two components: (a) a total capital and start-up cost component annualized over its expected useful life; and (b) a total operation and maintenance and purchase of services component.

No capital and start-up or ongoing operational and maintenance costs are associated with this information collection.

A.14. Estimates of annualized government costs

Provide estimates of annualized cost to the Federal government. Also, provide a description of the method used to estimate cost and any other expense that would not have been incurred without this collection of information.

The total cost to the Federal government is $26,329,622.60 over a 72-month period, or $4,388,270.33 on an annualized basis. The largest cost to the Federal government is paying a contractor $26,326,025 to conduct the study and deliver data files. The information collection also assumes a total of 80 hours of Federal employee time per year for a GS-13, step 2, in the Washington, DC locality, at $44.97 per hour (80 hours × $44.97 = $3,597.60). Federal employee pay rates are based on the 2015 General Schedule of the Office of Personnel Management (OPM).

A.15. Changes in hour burden

Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-1.

This is a new information collection that will add 41,517.19 burden hours and 160,102 annual responses to the OMB information collection inventory as a result of program changes.

A.16. Time schedule, publication, and analysis plans

For collections of information whose results are planned to be published, outline plans for tabulation and publication.

1. Study schedule

Exhibit A.16.a shows the planned schedule for SNAP E&T Pilots evaluation.

Exhibit A.16.a. Project schedule

| Activity | Schedule |
|---|---|
| Pilot project site selection | 01/01/15 - 03/23/15 |
| Develop and refine data collection plan and instruments | 02/12/15 - 06/30/15 |
| Execute evaluation | 41-197 weeks after OMB approval |
| Interim evaluation report | 196-218 weeks after OMB approval |
| Final evaluation report | 255-281 weeks after OMB approval |
| Prepare restricted and public-use data files and documentation | 182-286 weeks after OMB approval |
| Dissemination | 2-286 weeks after OMB approval |


2. Publication of study results

From August to December 2020, we will produce ten final report volumes, one for each pilot project, describing project implementation and participation, costs, and impacts. Each project-specific final report will include appendices that describe the study methodology and provide technical details about the study design, sampling, weighting, response rates, data processing, and analysis. From August 2018 through January 2019, we will also prepare one interim report, which will be a combination of project-specific and topical cross-project reports.

In addition, from January to March 2021, we will prepare an integrated cross-pilot summary report, targeted to a policy-focused audience, that examines impacts, implementation, and cost-effectiveness across all pilot projects. This report will synthesize findings across pilots, using theme tables and side-by-side comparisons of the services provided by the pilots, impacts, participation patterns, and benefits and costs. It will include a two-page executive summary of the findings for policymakers who do not have time to delve into the details. The final reports will be posted on the USDA FNS website (http://www.fns.usda.gov/ops/research-and-analysis).

3. Plans for analysis

a. Implementation analysis

The implementation and process analysis will draw on data collected from the site visits, MIS files, and all other interactions with sites. Site visits will occur each year from April 2016 to May 2018, and each visit will have a different focus. Table A.16.3a displays the timeframe of each visit and its data collection focus.

Table A.16.3a. Site visit schedule and focus

| Timeframe | Focus |
|---|---|
| April-May 2016 | Collecting data on planning and early implementation |
| April-May 2017 | Collecting data on operations |
| April-May 2018 | Collecting data on full implementation and closeout |


Following each site visit, we will use a structured write-up guide to synthesize collected information by implementation dimension, highlight themes, provide examples and illustrative quotes, and identify discrepancies and areas of agreement among data sources. This information will be uploaded into a database developed for the study, structured around the study questions and protocols. Descriptive tables will also be used to keep track of components of each site and ensure consistency in knowledge across all staff and sites.

Prior to submitting site reports to FNS in October of 2016, 2017, and 2018, the contractor will analyze data by creating tables that identify common themes and outliers across respondents and sites for certain topics or research questions (Yin 1994). These tables enable large volumes of data to be reduced to a manageable number of topics, themes, and categories of interest relevant to the study’s research questions (Coffey and Atkinson 1996). Theme tables will also be used to identify similarities and differences across the 10 pilots.

Using theme tables, site notes, and the standardized descriptive tables, we will describe the environment in which the pilot participants (both treatment and control group members) operate, as well as the implementation process and outcomes for each pilot. The assessment, to be performed from January to August 2018, will identify how the pilot was planned; whether it was implemented as intended; what challenges the sites experienced in planning, implementing, and operating the pilots, and the associated solutions; and how the pilot components and services differ from those available to the control groups and how these differences might affect impacts. In addition to pilot program-level analysis, we will conduct cross-site analysis and subgroup analysis within and between pilots.

b. Impact analysis

Create study database and analysis files. We will prepare restricted- and public-use SAS data files with variable names keyed to the question numbers of each instrument. Data files and documentation will be provided in February 2019 with the interim reports and again in February 2020 with the final reports. We will clean the data files by checking for consistency, missing values, outliers, and other problem values. Next, we will create and add constructed variables and sampling weights. Complete documentation—including the file structure, codebook, variable definitions and formulation, descriptions of editing and imputation procedures, and SAS code—will accompany each set of data files to facilitate full replication of all analyses presented in the evaluation reports.
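As a hedged illustration of the data cleaning step (the production files will be SAS; the file name and variable names below are hypothetical), the checks for missing values and problem values might look like the following sketch:

```python
import pandas as pd

# Hypothetical analysis file with variable names keyed to instrument question numbers.
df = pd.read_csv("followup12_analysis_file.csv")  # placeholder path

# Tabulate missing values by variable so editing and imputation can be documented.
missing = df.isna().sum().sort_values(ascending=False)
print(missing[missing > 0])

# Flag implausible values for review, e.g., annual earnings outside a plausible range.
flagged = df[(df["q14_annual_earnings"] < 0) | (df["q14_annual_earnings"] > 250_000)]
print(f"{len(flagged)} records flagged for earnings review")
```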

Conduct Impact Study Analyses. At the heart of the evaluation is the estimation of the impact of the pilot projects on participants’ outcomes. The primary outcomes in these analyses will be measures of earnings, employment, and receipt of SNAP, TANF, and Medicaid. Key secondary outcomes will include household food security, health status, and self-esteem. All impact analyses discussed below will be conducted in preparation for each annual report to Congress (described in more detail later in this section) from October 2015 through October 2020 and in preparation of the interim evaluation report (conducted from August 2018 to January 2019) and the final evaluation report (conducted from August 2020 through December 2020).

As impacts on some outcomes may emerge or fade over time, we will estimate impacts on outcomes measured regularly throughout the period between random assignment and each follow-up survey. For employment and earnings, we will estimate the impact for each quarter since random assignment, whereas for public assistance receipt, we will estimate an impact for each month since random assignment. Impacts on secondary outcomes will be measured at each follow-up.

Before estimating the statistical models for the impact analysis, we will complete several contextual analyses from January to August 2018. First, we will assess the equivalence of members of the treatment and control groups, using data from the Registration Document to verify that those assigned to the groups have similar average baseline characteristics. Second, to provide guidance on how the findings might generalize to a broader policy setting, we will compare the study sample to target populations within each site, as well as across sites. This analysis will use data on sample members from the Registration Document, as well as geographic and local labor market information from the Area Resource File (ARF) and the Bureau of Labor Statistics, linked to sample members by zip code or county. Third, we will examine data on service receipt to understand differences in outcomes between treatment and control groups. In particular, we will describe the intensity, nature, and quality of services that participants receive in each site, using information collected in the follow-up surveys and from State E&T MIS data.
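A minimal sketch of the baseline equivalence check, assuming a data frame built from the Registration Document with a treatment indicator (the file name and all variable names here are hypothetical):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("registration_baseline.csv")  # placeholder path
treatment = df[df["treatment"] == 1]
control = df[df["treatment"] == 0]

# Compare average baseline characteristics across the research groups.
for var in ["age", "monthly_earnings", "months_on_snap"]:
    t_stat, p_val = stats.ttest_ind(
        treatment[var].dropna(), control[var].dropna(), equal_var=False
    )  # Welch's t-test, which does not assume equal variances
    print(f"{var}: treatment mean {treatment[var].mean():.2f}, "
          f"control mean {control[var].mean():.2f}, p = {p_val:.3f}")
```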

After these contextual analyses are completed around January 2018, the impact estimation approach will compare treatment group outcomes with a counterfactual estimate of what those outcomes would have been in the absence of the project. The method used to estimate this counterfactual will depend on the specific evaluation design developed for each project. See Section B.2.1 for a description of the specific econometric models to be estimated.

A critical part of the analysis will be to assess what works and for whom. These analyses can be used to assess the extent to which intervention effects vary across policy-relevant subpopulations. Results from subgroup analyses can help inform decisions about how best to target specific interventions, and possibly to suggest ways to improve the design or implementation of the tested interventions. From January to August 2018, we will estimate impacts among subgroups of participants, defined by the following baseline characteristics:

  • Family composition (e.g., whether single or married, whether a parent, presence of children in the household, and presence of more than one adult in the household)

  • Labor force attachment (e.g., recent employment experiences)

  • Baseline earnings (e.g., whether the person had zero or positive earnings before the pilot)

  • History of SNAP receipt and duration of current SNAP receipt (e.g., whether the person has participated before the current spell)

  • Demographic characteristics (e.g., age, gender, race/ethnicity, education)

  • Extent of barriers to employment (e.g., language, lack of transportation)

  • Income (e.g., less than 100 percent of poverty level)

Examining a large number of outcomes or subgroups increases the risk of finding statistically significant impacts that are due to chance rather than the true effect of the program. We will minimize this multiple comparison concern by identifying key subgroups and a small set of primary outcomes within each outcome domain before beginning the analysis; carefully assessing whether statistically significant impact estimates for the primary analyses are isolated or part of a pattern within their outcome domains; and assessing the strength of impact patterns within the domains by evaluating whether significant findings hold up after statistical adjustments for multiple comparisons within and between domains.
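The statistical adjustments could take several forms; one common choice, shown in the hedged sketch below with placeholder p-values, is the Benjamini-Hochberg false discovery rate correction applied within an outcome domain:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for the primary outcomes within one outcome domain.
p_values = [0.012, 0.034, 0.210, 0.047, 0.003]

# Benjamini-Hochberg correction controls the false discovery rate at alpha = 0.05.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}, adjusted p = {adj:.3f}, significant: {sig}")
```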

Finally, from January to August 2018, we will use data from the implementation analysis to statistically test whether key measurable program features—such as types of intervention services, program organization and partnerships, the target populations, and the extent to which the interventions were implemented as planned—are associated with cross-site variations in impacts.

c. Participation Analysis

The participation analysis will describe the employment and training services received by SNAP participants for each tested intervention, the variation in service receipt over time within and across pilot sites, and the contrast in the services received by the treatment and control group members. It will also examine the extent to which the pilot E&T initiatives encourage or discourage entry into the SNAP program. These analyses will be conducted concurrently with the impact analyses, in preparation for each annual report to Congress (described in more detail later in this section) from October 2015 through October 2020 and in preparation of the interim evaluation report (conducted from August 2018 through January 2019) and the final evaluation report (conducted from August to December 2020).

Understanding the E&T services received by the treatment and control group members in each pilot will provide important contextual information for the impact analysis. Beneficial program impacts on participants’ longer-term outcomes will be realized only if the treatment groups receive meaningful and high quality services distinguishable from those received by the control group. Furthermore, detailed information on service receipt can help to identify potential reasons why the impact findings vary across subgroups of participants and programs, which can help inform future program improvement and targeting.

For the service receipt analysis, we will use State E&T MIS data and follow-up survey data during the pilot periods from 2016 to 2019 and during the analysis period of January to August 2018 to examine the extent to which people targeted for the treatments participate in the services offered, whether characteristics distinguish participants from nonparticipants, and whether participation differs across sites, interventions, time, and subgroups of participants. We will also describe the nature, amount, and types of services received by SNAP participants in each SNAP E&T activity and differences in services between the treatment and control groups. Analyses will be conducted by site and by groups of sites with similar program features, with an emphasis on identifying factors associated with cross-site variation in service receipt.

We will also examine the extent to which the pilot E&T initiatives encourage or discourage entry into the SNAP program. In particular, are potential enrollment increases due to attractive program features offset by potential decreases due to program mandates and other disincentives? A rigorous analysis of these entry effects is of critical policy importance because these effects could swamp the estimated program effects on SNAP participants’ long-term outcomes. This could occur if sufficient numbers of potential applicants know about the interventions and respond strongly to them, as was the case for welfare reform.

The approach for estimating entry effects will be tailored to the site pilot project designs and, indeed, some designs may preclude the analysis of entry effects. For sites that randomly assign geographic areas such as counties to treatment and control statuses, for example, we will compare county-level participation rates across treatment and control counties during the analysis period from January to August 2018. In some cases it may be necessary to use non-experimental methods to analyze entry effects, such as comparing SNAP participation rates in areas with and without the pilot, before and after the pilot, and between the SNAP E&T-eligible and ineligible populations.
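As a hedged illustration of the non-experimental comparison described above, a simple difference-in-differences contrasts the change in SNAP participation rates in pilot areas with the change in comparison areas over the same period (all rates below are hypothetical):

```python
# Difference-in-differences on area-level SNAP participation rates (illustrative only).
# Rates are participants per 1,000 eligible residents, before and after pilot launch.
pilot_pre, pilot_post = 212.0, 231.0            # hypothetical pilot counties
comparison_pre, comparison_post = 208.0, 215.0  # hypothetical comparison counties

entry_effect = (pilot_post - pilot_pre) - (comparison_post - comparison_pre)
print(f"Estimated entry effect: {entry_effect:+.1f} per 1,000 eligible residents")
# (231.0 - 212.0) - (215.0 - 208.0) = +12.0
```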

d. Cost-benefit analysis

Future decisions about whether to replicate and expand the SNAP E&T pilots will hinge in part on whether their benefits are large enough to justify their costs. We will conduct a cost-benefit analysis at the end of each follow-up period that uses a systematic accounting framework (Ohls et al. 1996) to determine (a) the overall, per-participant, and per-component costs of providing treatment services relative to those of providing control services; (b) whether the benefits of each pilot exceed its costs; and (c) the extent to which planning, start-up, and early implementation costs are offset by the stream of current and future benefits. All cost and cost-benefit analyses discussed below will be reported in the interim evaluation report (prepared from August 2018 through January 2019) and the final evaluation report (prepared from August to December 2020). Analysis included in the evaluation reports will also be reported in each annual report to Congress from October 2015 through October 2020.

We will use the ingredient, or resource cost, method (Levin and McEwan 2001) to compute pilot costs. This approach entails estimating pilot costs through itemizing and collecting data on the amounts and costs of the resources (or ingredients) necessary to provide services. Using cost data provided by pilots and estimates of the additional services received by each participant (estimated from the SNAP E&T MIS), we will estimate the total pilot costs, per-component costs (for example, the cost of each type of service), and per-participant costs. For all three types of costs, distributions of costs will be compared within and across grantees.
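A minimal sketch of the ingredient-method accounting, with hypothetical resource categories, costs, and enrollment:

```python
# Ingredient (resource cost) method, illustrative figures only.
ingredients = {
    "case management staff time": 480_000.00,
    "training provider payments": 650_000.00,
    "participant support services": 120_000.00,  # e.g., transportation subsidies
    "facilities and overhead": 95_000.00,
}
participants = 2_500  # hypothetical pilot enrollment

total_cost = sum(ingredients.values())
print(f"Total pilot cost: ${total_cost:,.2f}")
print(f"Per-participant cost: ${total_cost / participants:,.2f}")
for component, cost in ingredients.items():
    print(f"  {component}: ${cost / participants:,.2f} per participant")
```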

We will measure benefits from the estimates of the impacts on earnings and receipt of public assistance—both of which are already in dollar amounts. Fringe benefits will be valued using estimates of the cost of fringe benefits as a percentage of earnings obtained from the U.S. Department of Labor National Compensation Survey. The administrative costs of the receipt of SNAP and other public assistance will be obtained from published sources.

We will summarize costs and benefits of each pilot for each stakeholder in terms of the net benefits (the difference between the present value of benefits and costs), and the benefit-cost ratio (the ratio of the present value of benefits to the cost), often also referred to as the return on investment. Each measure will be estimated both including and omitting start-up costs. We will assess the sensitivity of the estimates to key assumptions and variations in determinants of costs.
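The net-benefit and benefit-cost ratio computations reduce to discounting the streams of benefits and costs to present value. A sketch under assumed values (the discount rate, streams, and horizon are hypothetical, not study parameters):

```python
def present_value(stream, rate):
    """Discount an annual stream (year 0 first) to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

# Hypothetical per-participant streams over four years, discounted at 3 percent.
benefits = [0.0, 1_200.0, 1_800.0, 2_000.0]  # earnings gains, reduced assistance costs
costs = [2_400.0, 600.0, 0.0, 0.0]           # start-up costs, then operating costs

pv_benefits = present_value(benefits, 0.03)
pv_costs = present_value(costs, 0.03)
print(f"Net benefits: ${pv_benefits - pv_costs:,.2f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```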

Reports to Congress. The study will disseminate findings in a series of reports (interim and final) and briefings that address all research questions of interest in each pilot, as well as cross-cutting questions of broader concern across pilots. However, Congress has invested significant resources in this study and needs to be kept abreast of its progress and findings. Accordingly, in addition to the interim and final reports mentioned above, FNS will submit an Annual Report to Congress in December of each year of the base contract (7 annual reports) and, if the option to fund a 5-year follow-up is exercised, 2 additional annual reports through completion of the 5-year follow-up analyses and reporting. Each annual report will summarize progress on the implementation status of the pilots and the activities of the evaluations during the previous year, as well as planned activities for the next year. In addition to these summaries of progress, and depending on availability, these reports will also summarize early results of the process, impact, participation/entry-effects, and benefit-cost evaluations of each pilot project. Exhibit A.16.b shows, for each report, the anticipated scope and content beyond the reporting of pilot evaluation progress. In terms of findings, Reports #2-#4, respectively, will describe findings from the three rounds of site visit interviews with program and provider staff (inclusive of focus groups, in-depth participant interviews, and employer focus groups) and the analyses of program service receipt and entry effects. On a staggered basis reflecting the different start-up dates for pilots' operations, Reports #4-#7 will include short-term (12 months after random assignment) and longer-term (36 months after random assignment) impact and benefit-cost findings. Reports #8-#9 will include preliminary findings on the 5-year follow-up.

Exhibit A.16.b. Annual congressional reports: scope and content

| Report no. | Date | Scope/content |
|---|---|---|
| 1 | 12/2015 | Summaries of pilot characteristics (location, target populations, services, sample sizes); research designs; planning and technical assistance; and pilot launch dates |
| 2 | 12/2016 | Summaries of random assignment/evaluation process trainings; summary of pilot start-up and fidelity to service and evaluation processes and procedures; early (first-year) implementation experiences and program participation |
| 3 | 12/2017 | Summary of monitoring and TA activities; steady-state (second-year) implementation experiences and program participation |
| 4 | 12/2018 | Close-out (final-year) implementation experiences; program participation; short-term (12-month) impact findings (some, not all, pilots) |
| 5 | 12/2019 | Short-term (12-month) impact findings and benefit-costs |
| 6 | 12/2020 | Longer-term (36-month) impact findings |
| 7 | 12/2021 | Longer-term (36-month) impact findings and benefit-costs |
| 8 | 12/2022 | 5-year impact findings and benefit-costs |
| 9 | 12/2023 | 5-year impact findings and benefit-costs |


Twelve months after participants are randomly assigned, we will have administrative data on their primary outcomes of earnings, employment, and public assistance receipt. We will have data on these outcomes for the entire research samples (treatment(s) and control(s)) in the ten pilots. For a random sample of approximately half of the pilot participants, we will also have survey data that provide richer detail on these primary outcomes, as well as on service receipt and secondary outcomes such as household food security, physical and mental health, well-being, and housing status. Thus, we will have sufficient data to estimate impacts at 12 months across a variety of outcome measures.

Studies of the impacts of employment and training programs typically find negative or zero impacts on employment outcomes in the short run, when program participants are receiving education, training, and related services (see, for example, Card et al. 2015). The literature also suggests that impacts tend to become positive only 2 to 3 years after participants finish the program, obtain stable employment, and obtain better jobs than their counterparts who did not receive program services. When the impacts become positive, however, depends on the type of intervention: impacts typically become positive sooner for short interventions, such as job search assistance, and later for longer interventions, such as those offering education and training.

Mathematica and FNS set the first follow-up survey to occur at 12 months after random assignment to balance the goals of minimizing survey recall error on program-related experiences and being able to obtain a short-term look at impacts on employment-related outcomes. In designing the study, we recognized that examining impacts on employment at 12 months would understate the full, long-term earnings effects of the program, because many treatment group members will have only recently completed their services and some will still be receiving services (for example, those who will enroll in more intensive training programs).

Thus, when preparing the interim reports and making the findings available to Congress in the annual congressional reports, impact findings at 12 months will need to be interpreted carefully. Each of the pilot programs is offering a mix of services to participants. The 12-month mark may be less informative for pilots in which a larger percentage of participants receive education or training services than for pilots in which a smaller percentage receive them. Furthermore, there will likely be variation in service receipt and intensity within programs based on individuals' service paths.

A critical part of the interim report will be to examine the services that both treatment and control group members receive through SNAP E&T and elsewhere. This analysis will use survey data on service receipt that will be collected consistently across the research groups, as well as pilot MIS data. It will allow us to obtain information about the intensity of services received by the sample (for example, how long members spent in a training program) and whether the services were completed. Information on the service differential between the research groups, as well as on service duration, will allow us to examine the extent to which short-term impacts on income and employment over the first twelve months reflect longer-term impacts.


A.17. Display of expiration date for OMB approval

If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.

The agency plans to display the expiration date for OMB approval of the information collection on all instruments.

A.18. Exceptions to certification statement

Explain each exception to the certification statement identified in Item 19 “Certification for Paperwork Reduction Act”.

This study does not require any exceptions to the Certification for Paperwork Reduction Act (5 CFR 1320.9).


REFERENCES

Bonevski, B., M. Randell, C. Paul, K. Chapman, L. Twyman, J. Bryant, I. Brozek, and C. Hughes. “Reaching the Hard-to-Reach: A Systematic Review of Strategies for Improving Health and Medical Research with Socially Disadvantaged Groups.” BMC Medical Research Methodology, vol. 14, no. 42, 2014.

Card, David, Jochen Kluve, and Andrea Weber. “What Works? A Meta-Analysis of Recent Active Labor Market Program Evaluations.” NBER Working Paper No. 21431. Cambridge, MA: National Bureau of Economic Research, 2015.

Coffey, A., B. Holbrook, and P. Atkinson. “Qualitative Data Analysis: Technologies and Representations.” Sociological Research Online, vol. 1, no. 1, 1996. Available at http://www.socresonline.org.uk/1/1/4.html. Accessed August 20, 2010.

Groves, R.M., E. Singer, and A. Corning. “Leverage-Saliency Theory of Survey Participation: Description and an Illustration.” Public Opinion Quarterly, vol. 64, 2000, pp. 299-308.

Levin, Henry M., and Patrick J. McEwan. Cost-Effectiveness Analysis: Methods and Applications. Second edition. Thousand Oaks, CA: Sage Publications, 2001.

Markesich, J., and M.D. Kovac. “The Effects of Differential Incentives on Completion Rates: A Telephone Survey Experiment with Low-Income Respondents.” Presented at the Annual Conference of the American Association for Public Opinion Research, Nashville, TN, May 16, 2003.

Mercer, A., A. Caporaso, D. Cantor, and R. Townsend. “How Much Gets You How Much? Monetary Incentives and Response Rates in Household Surveys.” Public Opinion Quarterly, vol. 79, no. 1, 2015, pp. 105-129.

Ohls, James C., Dexter Chu, and Michael Ponza. “Elderly Nutrition Program Evaluation Final Report, Volume III: Methodology and Appendixes.” Report submitted to the U.S. Department of Health and Human Services, Office of the Secretary, Administration on Aging, and Office of the Assistant Secretary for Planning and Evaluation. Princeton, NJ: Mathematica Policy Research, July 1996.

Office of Management and Budget. “Questions and Answers When Designing Surveys for Information Collections.” Guidance on Agency Survey and Statistical Information Collections. Washington, DC: Office of Management and Budget, January 20, 2006. Available at http://www.whitehouse.gov/sites/default/files/omb/assets/omb/inforeg/pmc_survey_guidance_2006.pdf.

Singer, E., R.M. Groves, and A.D. Corning. “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation.” Public Opinion Quarterly, vol. 63, 1999, pp. 251-260.

Singer, E., and R.A. Kulka. “Paying Respondents for Survey Participation.” In Studies of Welfare Populations: Data Collection and Research Issues, edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro. Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs, Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press, 2002, pp. 105-128.

Singer, E., and C. Ye. “The Use and Effects of Incentives in Surveys.” The Annals of the American Academy of Political and Social Science, vol. 645, no. 1, 2013, pp. 112-141.

Yin, R. Case Study Research: Design and Methods. Second edition. Thousand Oaks, CA: Sage Publications, 1994.

1 We are not targeting employers in general who may hire a pilot participant. Only those employers directly involved in pilot activities, such as offering on-the-job training, apprenticeships, internships, or subsidized employment, will be contacted for the focus groups.

