
Supporting Statement – Part B: Justification for OMB Clearance

Paperwork Reduction Act of 1995



OMB No. 0584-0680



Rapid Cycle Evaluation of Operational Improvements in Supplemental Nutrition Assistance Program (SNAP) Employment & Training (E&T) Programs





January 2023



Project Officer: Anna Vaudin

Social Science Research Analyst

Office of Policy Support

Food and Nutrition Service

United States Department of Agriculture

1320 Braddock Place

Alexandria, Virginia 22314

Phone: 703-305-0414

Email: [email protected]






APPENDICES

A. Legal Authority

B1. Summary of Intervention Designs

B2. Intervention Outreach Materials

B3. Intervention Assessment Materials

C. Data Collection Summary Table

D1. SNAP Administrative Data Request

D2. Administrative Data: RAPTER Screenshots

E1.1. Colorado Participant Survey Specifications

E1.2. Colorado Participant Survey Specifications: Spanish

E1.3. Colorado Participant Survey Screenshots

E1.4. Colorado Participant Survey Screenshots: Spanish

E2.1. Massachusetts Participant Survey Specifications

E2.2. Massachusetts Participant Survey Specifications: Spanish

E2.3. Massachusetts Participant Survey Screenshots

E2.4. Massachusetts Participant Survey Screenshots: Spanish

E3.1. Connecticut Participant Survey Specifications

E3.2. Connecticut Participant Survey Specifications: Spanish

E3.3. Connecticut Participant Survey Screenshots

E3.4. Connecticut Participant Survey Screenshots: Spanish

E4.1. Rhode Island Participant Survey Specifications

E4.2. Rhode Island Participant Survey Specifications: Spanish

E4.3. Rhode Island Participant Survey Screenshots

E4.4. Rhode Island Participant Survey Screenshots: Spanish

E5.1. Participant Survey Advance Letter

E5.2. Participant Survey Initial Email/Text

E5.3. Participant Survey Reminder Email/Text

E5.4. Participant Survey Reminder Letter

E5.5. Participant Survey Reminder Postcard

F1. Participant Focus Group Discussion Guide

F1.1. Participant Focus Group Information Form

F2.1. Participant Focus Group Recruitment Email/Text

F2.2. Participant Focus Group Confirmation and Reminder Email/Text

F3. Participant FAQ

G1. Participant In-depth Interview Discussion Guide

G2.1. Participant In-depth Interview Recruitment Email/Text

G2.2. Participant In-depth Interview Confirmation and Reminder Email/Text

H. Process Guide Map

H1. Staff Semi-structured Interview Guide

H2. Staff Semi-structured Interview Invitation Email

I1. Staff Questionnaire Specifications

I1.1. Staff Questionnaire Screenshots

I2. Staff Questionnaire Advance Email with FAQ

J. 60-Day Federal Register Notice

K1. Public Comment 1

K2. FNS Response to Public Comment 1

L. NASS Comments

M. Participant Survey Pretest Memo

N. Incentive and Response Rates

O. FNS-8 USDA/FNS Studies and Reports

P. FNS-10 USDA/FNS Persons Doing Business with the Food and Nutrition Service

Q1. Confidentiality Pledge

Q2. Data Security Plan

R. Connecticut Evaluation Consent Form

S. Institutional Review Board (IRB) Approval

T. Burden Table

U. Site-specific MDI Tables

V. Site-specific Intervention Design Diagrams

B1. Respondent Universe and Sampling Methods

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

The Rapid Cycle Evaluation of Operational Improvements in Supplemental Nutrition Assistance Program (SNAP) Employment and Training (E&T) Programs (SNAP E&T RCE) will test eight small-scale interventions in SNAP E&T operations or service delivery across eight sites, consisting mostly of State and county SNAP agencies and, in one site, the community college system. Each site will implement a separate intervention. Intervention designs, target populations, and target sample sizes are summarized in Table B1.1. In each site, individuals will be randomly assigned to a control group that has access to existing recruiting, outreach, or engagement processes or to a treatment group that has access to new processes designed to increase enrollment or engagement in SNAP E&T. Six sites (Colorado, Kansas, Massachusetts, Rhode Island, Minnesota-Hennepin, and Minnesota-Rural) will test whether electronic messaging (texts or emails), with content informed by behavioral science, increases enrollment and engagement in SNAP E&T. Three sites (Connecticut, Massachusetts, and Rhode Island) will test whether new assessments of SNAP participants’ work readiness lead to improved alignment with service providers, increased receipt of services, and greater engagement in SNAP E&T activities. One site (Massachusetts) will test whether improving case managers’ referral processes to local career centers increases participants’ engagement with career center staff and, ultimately, their participation in SNAP E&T activities. One site (District of Columbia) will test whether a new assessment tool and an enhanced case management approach for staff to use with participants when setting goals, identifying employment barriers, and identifying employment strategies lead to increased engagement in SNAP E&T. Six sites (Colorado, Connecticut, Kansas, Massachusetts, Minnesota-Hennepin, and Rhode Island) will implement multi-arm designs in which individuals are randomly assigned to one of several treatment groups or a control group; two sites (District of Columbia and Minnesota-Rural) will implement single-arm designs in which individuals are randomly assigned to a single treatment group or to a control group.

In all sites, FNS will evaluate the effects of interventions on distal outcomes related to SNAP E&T enrollment and engagement in program activities. Where possible, FNS will also evaluate intervention effects on more proximal outcomes, such as whether SNAP participants affirmed by text message their interest in learning more about a SNAP E&T program or whether they visited a local service provider’s office to express interest in participating.


Table B1.1. Summary of intervention designs

Colorado

Agency: Colorado Department of Human Services

Intervention: The site will use a suite of marketing materials consisting of text messages, emails, and postcards to conduct outreach to referred work registrants in place of making outbound phone calls. Four treatment groups will receive different combinations of text messages and emails at different frequencies. Three of these groups will begin receiving messages one week after a SNAP certification or recertification interview: one group will receive three text messages one week apart, a second group will receive two text messages two weeks apart, and a third group will receive two text messages and two emails two weeks apart. The fourth treatment group will receive the same series of messages as the third group, except the first text and email will be sent two weeks after the certification or recertification interview. A fifth treatment group will receive two postcards one week apart, starting one week after the certification or recertification appointment. A control group will continue to receive telephone recruiting calls from SNAP E&T (Employment First) case managers. The intervention is designed to increase engagement in an initial phone call or in-person meeting with a case manager to enroll in SNAP E&T services.

Target population (respondent universe)a: SNAP-participating work registrants

Intervention locations: Four counties (Larimer, Montrose, Denver, Broomfield)

Target number of participants: 9,000

Type of evaluation design: Multi-arm RCT. Four treatment groups and one control group in Larimer, Denver, and Broomfield counties; one treatment group (receiving postcards) and one control group in Montrose County.


Connecticut

Agency: 12 Connecticut community colleges

Intervention: Coaches at the site will administer a new type of assessment to SNAP E&T participants on community college campuses. This assessment will use a strengths-based approach1 to identify a range of barriers students may be facing by asking students to identify both areas where they need help and areas in which they are thriving. This approach will include a goal-setting process intended to increase participants’ motivation to achieve their goals. Coaches will use the assessment to determine the participant reimbursements or support service referrals each student needs. Students who are referred to support services will receive a behaviorally informed text message to remind them of the referral information and to motivate them to pursue the service. Individuals will be randomly assigned to one of three research groups: a treatment group that receives the new participant-driven assessment and, if referred to a provider, receives behaviorally informed text message reminders for the referral; a treatment group that receives the new participant-driven assessment and, if referred to a provider, does not receive text message reminders; and a control group that receives the existing assessment that coaches currently use. The intervention is designed to increase participation in and completion of SNAP E&T services.

Target population (respondent universe)a: SNAP participants who are also work registrants attending community colleges

Intervention locations: 12 community colleges

Target number of participants: 808

Type of evaluation design: Multi-arm RCT. Two treatment groups and one control group.

District of Columbia

Agency: District of Columbia Department of Human Services, Economic Security Administration (DHS)

Intervention: The site will create a participant-driven assessment tool that guides case managers’ conversations with participants around goal setting, identifies barriers to employment, and describes steps involved in finding and maintaining meaningful employment. Staff also will receive training to provide enhanced case management to participants. The new assessment and enhanced case management will be provided to all individuals in the intervention; individuals will not be randomly assigned to receive these components. Individuals will be randomly assigned to receive behaviorally informed text message reminders for appointments. The intervention is designed to strengthen case management approaches to motivate participants to remain engaged in SNAP E&T and focus on their long-term goals.

Target population (respondent universe)a: SNAP participants ages 16 and older

Intervention locations: Districtwide

Target number of participants: 300

Type of evaluation design: Quasi-experimental pre-post design to evaluate the change in assessment and case management approach; single-arm RCT to evaluate the effectiveness of electronic message reminders.

Kansas

Agency: Kansas Department for Children and Families (DCF)

Intervention: The site will send a series of text messages informed by behavioral science to SNAP E&T participants to increase engagement in SNAP E&T activities. Appointment reminder messages and behavioral nudges will be sent to E&T participants after they complete an assessment, when SNAP E&T appointments and events are scheduled. Behavioral nudge messages use behavioral science strategies, such as reducing barriers or emphasizing the cost of not acting, to promote a behavior. There will be four research groups: three treatment groups and one control group. One treatment group will receive text messages reminding them of appointments. A second treatment group will receive behavioral nudges designed to promote engagement in the program. A third treatment group will receive both appointment reminder messages and behavioral nudges related to engagement. The control group will receive neither reminder messages nor nudges. The intervention is designed to increase appointment attendance, engagement with case managers, and participation in activities. Effects on broader measures, such as remaining actively enrolled in SNAP E&T, failing to demonstrate good cause, and being sanctioned, will also be assessed.

Target population (respondent universe)a: SNAP E&T participants who are Able-Bodied Adults Without Dependents (ABAWDs)

Intervention locations: Statewide

Target number of participants: 1,200

Type of evaluation design: Multi-arm RCT. Three treatment groups and one control group.

Massachusetts

Agency: Massachusetts Department of Transitional Assistance (DTA)

Intervention: The site will use a new referral process with four components. A text message will be sent inviting potential SNAP E&T participants to learn about E&T services. Individuals who affirm they are interested in learning more will receive an online, self-administered screening form to assess their work readiness. Based on their level of work readiness, individuals will receive a one-on-one assessment by an E&T worker to assess participant fit and readiness for a referral to a local career center. Those deemed work ready in the one-on-one assessment will receive a referral and warm handoff to a career center. There will be two treatment groups, differentiated only by the behaviorally informed content used in the initial text message. Individuals will be randomly assigned to one of those two groups or to a control group that has access to a website to independently learn more about E&T services available at the local career center. Within each treatment group, individuals who pass the online, self-administered screener will be randomly assigned to receive a one-on-one assessment by an E&T worker. Individuals deemed work ready in the one-on-one assessment will then be randomly assigned to receive a warm handoff to the career center. The intervention is designed to increase enrollment in SNAP E&T and improve assessment of work readiness for SNAP E&T services.

Target population (respondent universe)a: Adult SNAP participants who do not receive Supplemental Security Income benefits and who have agreed to receive communication from DTA through text messages

Intervention locations: Five counties, purposively selected based on the presence of a DTA office and at least one career center location

Target number of participants: 30,000

Type of evaluation design: Multi-arm RCT. Two treatment groups and a control group to evaluate the effectiveness of electronic messaging; one treatment group and one control group to evaluate the effectiveness of the one-on-one assessment; one treatment group and one control group to evaluate the effectiveness of the warm handoff and referral to the career center.

Minnesota-Hennepin

Agency: Hennepin County Department of Human Services

Intervention: The site will send a series of behaviorally informed text messages to Able-Bodied Adults Without Dependents (ABAWDs) to encourage enrollment in SNAP E&T to help them meet their work requirements and avoid losing SNAP benefits. SNAP participants will receive up to three text messages at three different time points: immediately after SNAP certification; 30 days after SNAP certification; and 30 to 45 days before losing SNAP benefits if they do not comply with the work requirements. The content of each of the three messages will vary across each point in time: the first message will try to increase awareness (mere-exposure effect), the second message will encourage the participant to reach the goal of enrolling (endowed progress effect and endowment effect), and the third message will emphasize that inaction could lead to the loss of benefits (loss aversion). The timing and behavioral message content will be the same for each of the three treatment groups; the groups will differ only according to whether they include the participant’s name (first treatment group), exclude the participant’s name (second treatment group), or include the participant’s name but send only the first two messages (third treatment group). The control group will continue to receive a system-generated letter with information about the program and will not receive text messages. The intervention is designed to increase the number of participants who contact the county SNAP E&T team and enroll in SNAP E&T.

Target population (respondent universe)a: SNAP participants who are newly certified ABAWDs

Intervention locations: Hennepin County

Target number of participants: 4,700

Type of evaluation design: Multi-arm RCT. Three treatment groups and one control group.

Minnesota-Rural

Agency: Minnesota Department of Human Services, on behalf of E&T providers in 33 rural counties

Intervention: The site will send a series of behaviorally informed text messages to SNAP participants to encourage enrollment in SNAP E&T. Individuals will be randomly assigned to either a treatment group that receives up to three text messages or a control group that receives providers’ existing recruiting materials, consisting of word-of-mouth approaches, flyers, or community partner referrals that are not specifically targeted to SNAP participants. Treatment group members will receive the first text message within three weeks after SNAP certification, and the second and third messages approximately one and two months later, respectively. The content of each message will vary across each point in time: the first message will try to increase awareness (mere-exposure effect), the second message will emphasize that SNAP E&T staff want to support participants without the need for them to take many additional steps to enroll (endowed progress effect), and the third message will aim to increase the feeling of entitlement to or ownership of the E&T program and reiterate that staff want to support them in getting employment (endowment effect). The intervention is designed to increase the number of participants who contact SNAP E&T provider staff, enroll in SNAP E&T, and engage in a program component.

Target population (respondent universe)a: SNAP participants who are work registrants

Intervention locations: 33 counties

Target number of participants: 4,500

Type of evaluation design: Single-arm RCT. One treatment group and one control group.

Rhode Island

Agencies: Rhode Island Department of Human Services (DHS) and Local Initiatives Support Corporation (LISC)

Intervention: The site will send a series of behaviorally informed text messages and emails to eligible SNAP participants to encourage enrollment in SNAP E&T. The intervention will target two groups separately: (1) work registrants who are not also ABAWDs and (2) ABAWDs. The follow-up messages for ABAWDs will focus on loss aversion, as Rhode Island anticipates the ABAWD waiver will expire in July 2022 and many of the individuals it is targeting will be subject to a time limit. Individuals will be randomly assigned to one of five research groups: a treatment group in which individuals receive a text message containing a link to the SNAP E&T website to learn more about the services available; a treatment group in which individuals receive an email containing a link to the SNAP E&T website to learn more about the services available; a treatment group in which individuals receive a text message requesting that they reply directly to the text for more information; a treatment group in which individuals receive an email requesting that they reply directly to the email for more information; or a control group that receives the existing, standard outreach materials. In the treatment groups that request a reply from participants to learn more about available services, individuals who affirm their interest will be randomly assigned again to receive either an existing assessment or an enhanced, provider-informed assessment that determines participant skills and interests and matches them with providers that accommodate the participant’s background, skill level, and interests. The intervention is designed to determine whether the format and content of outreach messaging, along with an optimized assessment process, increase SNAP E&T enrollment and engagement in program components.

Target population (respondent universe)a: SNAP participants, both ABAWDs and work registrants who are not also ABAWDs

Intervention locations: Statewide

Target number of participants: 5,000

Type of evaluation design: Multi-arm RCT. Four treatment groups and one control group to evaluate the effectiveness of electronic messaging; one treatment group and one control group to evaluate the effectiveness of the new, enhanced assessment.

Note: Challenges that sites are attempting to address through the interventions are described in Appendix B1 Table A.1.

a Unless otherwise noted, target populations consist of individuals who are not already enrolled in SNAP E&T and are not otherwise meeting work requirements.

FNS did not select sites to be representative of SNAP E&T agencies nationwide or based on type or other characteristics of agencies. In January 2021, FNS sent an informational email to Regional and State SNAP directors informing them of the project and inviting them to a webinar at which they could learn more about the study goals, expected forms of engagement, and the project schedule. The email encouraged State directors to share the information with county administrators and large SNAP E&T providers in their States.

Over 300 individuals from State agencies, Regional offices, and local providers, as well as from FNS’s National office, attended the webinar. After the webinar, attendees were asked to submit expressions of interest in participating in the evaluation to the study team. The study team had unstructured phone conversations with several of the agencies that expressed interest. Based on the expressions of interest and subsequent phone conversations, the study team recommended sites for the evaluation based on the number of people who might participate in the evaluation, the types of challenges sites were interested in addressing, whether the proposed interventions could be rigorously evaluated, the types of risks associated with the project schedule and OMB approval, and whether the site had the capacity to participate in the RCE process and evaluation. FNS used this list and the risks identified by the study team to select the eight evaluation sites from the 15 sites that expressed interest in participating.

Each site is distinct. Although some sites are addressing similar challenges related to trying to increase enrollment or engagement in SNAP E&T, sites identified their challenge and the intervention they will use to address that challenge on their own, separate from other sites. As a result, findings from the evaluation of each site’s intervention will not be synthesized or compared across sites in a combined or cross-site summary report.

Each site selected the target population for its intervention (Table B1.1). Some sites, such as Minnesota-Rural and Colorado, will target work registrants, or SNAP participants who do not meet a federal exemption from general work requirements and who are not enrolled in SNAP E&T. Other sites, such as Minnesota-Hennepin and Rhode Island, will target a subset of work registrants who are able-bodied adults without dependents (ABAWDs). One site (Connecticut) will target SNAP participants attending community college, while another (District of Columbia) will target all SNAP E&T participants. Some sites will test the intervention statewide; others will target specific counties in the state or select geographic areas in which specific types of providers are operating.

The study team will conduct random assignment within each site and within blocks defined by geography, administrative office, or provider. Sites will provide the study team with administrative data. The team will use these data to verify that individuals have not previously been randomly assigned, assign them to a research group, and communicate the individuals’ assignments to the site. Site staff will then implement the next step in the intervention design for each research group, such as sending a behaviorally informed text message to the treatment group and sending a more traditional communication to the control group. The team has worked with each site to define the random assignment probabilities that help regulate the flow of participants into different types of services, such as assessments or referrals. As described in Section B2, although some interventions will use 50 percent probabilities to assign individuals evenly to treatment and control groups, designs with multiple interventions or treatment arms will use unequal assignment probabilities. The study team will also customize other random assignment components, such as tailoring the process to smaller service areas to ensure that research groups are balanced even among smaller groups of participants or to reflect the caseloads that providers handle.
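To make the blocked assignment procedure concrete, below is a minimal sketch of how a batch of administrative records could be randomized within one block; it is illustrative only, and the field name (client_id), group labels, and seed are hypothetical rather than drawn from the study’s actual system.

```python
import random

def assign_block(records, probs, assigned_ids, seed=20230101):
    """Randomly assign not-yet-randomized individuals within one block.

    records      -- list of dicts, each with a 'client_id' key (hypothetical name)
    probs        -- dict mapping research group to assignment probability,
                    e.g., {'T1': 0.40, 'T2': 0.40, 'C': 0.20}; must sum to 1
    assigned_ids -- set of client_ids already randomized in earlier batches
    """
    assert abs(sum(probs.values()) - 1.0) < 1e-9, "probabilities must sum to 1"
    rng = random.Random(seed)
    groups, weights = zip(*sorted(probs.items()))
    assignments = {}
    for rec in records:
        cid = rec["client_id"]
        if cid in assigned_ids:      # never re-randomize a returning individual
            continue
        assignments[cid] = rng.choices(groups, weights=weights, k=1)[0]
        assigned_ids.add(cid)
    return assignments

# Example: one weekly batch assigned with unequal probabilities
batch = [{"client_id": f"id-{i}"} for i in range(500)]
prior = set()
result = assign_block(batch, {"T1": 0.40, "T2": 0.40, "C": 0.20}, prior)
```

Running the function per block (county, office, or provider) keeps research groups balanced within each block, which is what the randomization weights described in Section B2.2 later account for.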

Respondent universe, sampling, and intended use of data

The respondent universe, the presence and type of sampling used, and the planned use of the information in the analysis differ by type of data (administrative, survey, and qualitative).

Administrative data. SNAP administrative data and SNAP E&T administrative data will be collected in all eight sites. SNAP administrative data will provide information (name, address, telephone numbers, and email addresses) for use during random assignment, as well as demographic and household information to serve as control variables in impact estimation models, to define subgroups, and to perform survey nonresponse analyses. Depending on the intervention, the study team will use SNAP E&T administrative data to construct outcomes measuring whether individuals contacted SNAP E&T agencies and providers, enrolled in SNAP E&T, participated in a caseworker assessment, received a support service, or engaged in program activities. SNAP agency and provider staff participating in the evaluation will collect this information either from their existing management information systems or by entering the data into a centralized system specifically designed for the evaluation.

The respondent universe for the collection of administrative data will consist of all individuals in each site’s intervention. Sites determined the number of individuals to include in their interventions based on the type of intervention and capacity constraints of SNAP E&T agency and provider staff who will be offering services. For example, sites require more resources to have case managers perform new types of work readiness assessments with SNAP participants than to send behaviorally informed text and email messages to SNAP participants. As a result, sites considered the capacity of their frontline staff when determining the number of individuals to include in the intervention.

Sites also determined the number of individuals to include in the intervention based on the size of the target population and by considering statistical power of analyses of outcome measures based on SNAP E&T administrative data (Table B1.1). In some sites, the number of individuals included in the intervention (and the respondent universe for the administrative data collection) will be exhaustive of the target population. For example, because there are only about 800 students who will enroll in SNAP and attend programs in Connecticut’s community colleges during the study period, the intervention in Connecticut will attempt to include all of these individuals. In some sites, however, a random sample of the target population will constitute the group of individuals included in the evaluation. For example, in Massachusetts, there are approximately 1 million individuals in the target population across the state (adult SNAP participants not receiving Supplemental Security Income benefits who have agreed to receive SNAP communications through text messages), but about 30,000 of these individuals will be selected for the intervention from several geographic areas that will take part in the study. In these cases, sites will randomly sample individuals from their target populations to include in the interventions because the analyses will have sufficient statistical power even with fewer individuals.

Participant survey data. In four sites, participant survey data will supplement the administrative data. The participant survey will collect information on receipt of SNAP E&T recruitment and outreach materials, assessments, case management, and referral services. It also will assess barriers to engaging with services and seeking employment, program satisfaction, and reasons for engagement decisions (for those who engaged in the E&T programs and those who either never engaged or disengaged). This information will be used to construct outcomes for the evaluation’s impact analysis that cannot be obtained from the SNAP E&T administrative data. It also will be used to describe participants’ experiences in the intervention to provide context for those analyses.

FNS purposively selected the four sites in which to conduct the participant survey to obtain a range of interventions that FNS can evaluate: behaviorally informed electronic messaging to increase SNAP E&T recruitment and outreach (Colorado), case worker assessments of work readiness and referrals to career centers (Massachusetts), participant-driven assessments of barriers to participation in E&T activities (Connecticut), and behaviorally informed electronic messaging, case worker assessments, and referrals (Rhode Island).

In the four sites in which the survey will be administered, the survey will include either all study participants (Connecticut) or a stratified random sample of all study participants (Colorado, Massachusetts, and Rhode Island). In Connecticut, because of the limited number of SNAP participants available to participate in the intervention, the participant survey sample will include all SNAP E&T participants enrolled in Connecticut’s community colleges to ensure sufficient statistical power with which to identify intervention effects. In Colorado, Massachusetts, and Rhode Island, which will have much larger numbers of intervention participants, the study team will use stratified random sampling to select individuals to participate in the survey. For these three sites, the study team determined the number of individuals to include in each survey sample using statistical power analyses that compared the analyses’ minimum detectable impacts with ranges of impacts of related interventions in the field (see Section B2). In all four survey sites, the findings will represent all study participants in the intervention’s target population: all SNAP E&T participants enrolled in Connecticut’s community colleges, and all SNAP participants in Colorado, Massachusetts, and Rhode Island who were eligible to receive a text message as part of the intervention.

Table B1.2 presents the number of individuals each participant survey site plans to include in its intervention, the number of individuals that will be included in the sample for the participant survey, and the expected response rate to the participant survey. The study team will target an 80 percent response rate for the participant survey with a minimum rate of 60 percent.

Table B1.2. Population and sample sizes for participant survey, by site

Site            Number assigned to intervention (universe)   Sample size   Expected respondents   Expected nonrespondents   Response rate
Colorado              9,000                                      800             640                   160                     80%
Massachusetts        30,000                                    1,200             960                   240                     80%
Connecticut             808                                      808             646                   162                     80%
Rhode Island          5,000                                    1,200             960                   240                     80%


Several aspects of the data collection design and additional efforts undertaken by the team administering the survey will help to ensure success in gaining participants’ cooperation and a high response rate. These include the following, which are described in greater detail in Section B3:

  • Administering a short, 15-minute survey

  • Offering the survey in multiple formats (both web and telephone) and using trained interviewers for the telephone survey to obtain buy-in and cooperation from potential respondents

  • Using contact information of potential respondents that they recently provided to SNAP agencies and SNAP E&T agencies and providers, making it more likely to be valid

  • Obtaining contact information from vendor databases for individuals whose telephone or email information is no longer valid

  • Leveraging established relationships between intervention staff at program sites and potential respondents to promote the survey and secure buy-in

  • Offering the survey in multiple languages (English and Spanish) so individuals can respond in their preferred language

  • Sending advance letters to all potential respondents before the survey and sending reminder letters, emails, and texts to nonrespondents

  • Making multiple calls to nonrespondents at different times of the day and different days of the week to increase the likelihood of reaching potential respondents when they are available

  • Utilizing interviewers trained in general and study-specific interviewing skills: probing, establishing rapport, avoiding refusals, eliminating bias, and being sensitive to at-risk and special populations. Study-specific training will review study goals, provide a question-by-question review of the instrument, convey best practices for interviewing for this study, and include mock interviews before live interviews.

  • Having interviewers skilled in refusal conversion make second attempts to address concerns and complete the interview where needed

  • Offering a $30 incentive (in the form of a gift card) to complete the survey

  • Monitoring survey performance and analyzing paradata to make real-time improvements in outreach and administration.

Staff questionnaire. The study team will administer a staff questionnaire to SNAP agency and provider staff in each of the eight sites. The team will purposively select 20 State, local, or provider staff per site (Table B1.3) who participated in the design, planning, and implementation of the intervention in each site.



Table B1.3. Number of staff questionnaires that will be completed by SNAP E&T agency and provider staff, by site

Site                      Staff questionnaires
Colorado                  20
Connecticut               20
District of Columbia      20
Kansas                    20
Massachusetts             20
Minnesota-Hennepin        20
Minnesota-Rural           20
Rhode Island              20


Qualitative data. The study team will collect qualitative data consisting of participant focus groups in each of the eight sites, participant in-depth interviews in four sites, and staff semi-structured interviews administered to SNAP agency and provider staff in each of the eight sites. The team will purposively select individuals, State/local government staff, and provider staff to participate in these information collection activities. The four sites that will administer the participant in-depth interviews will be the same as the sites that will administer the participant survey, allowing the study team to provide additional context for the impact analyses across the widest range of interventions in the study.

In three sites, the study team will select focus group participants according to whether they were offered a specific component in the intervention.

  • In Massachusetts, the team will conduct one focus group with participants who received a referral to the career center to understand their experience through all of the components of the intervention leading up to that final stage. The team will conduct a second focus group with individuals who were not deemed to be work-ready in the assessment phase of the intervention to understand the experiences of this group who have barriers to work that are not well addressed through this intervention.

  • In Connecticut, the two focus groups will correspond to the two treatment groups—one that receives a new, enhanced assessment and behavioral nudge reminders for appointments, and the other that receives the new, enhanced assessment but no behavioral nudge reminders.

  • In Rhode Island, one focus group will consist of participants who received the full intervention (text message or email and the enhanced assessment). The other focus group will be conducted with participants who received a text message or email with a website link, and who either selected a provider on their own or received a current assessment after requesting more information, to understand the other pathway and potential obstacles to reaching a provider.

In five sites, the study team will select focus group participants according to whether they decided to participate in the intervention.

  • In Colorado and Minnesota-Hennepin, the team will conduct one focus group with individuals who received text or email messages and decided to participate in the program, and one group with those who received the messages but decided not to participate.

  • In Kansas, the team will conduct one focus group with individuals who received text message reminders, and one group with individuals who received a behavioral nudge.

  • In Minnesota-Rural, one focus group will consist of SNAP E&T participants who received the text message intervention, and the other focus group will consist of individuals who did not participate in SNAP E&T after having received the text message intervention.

  • In the District of Columbia, we will conduct one focus group with participants who remained engaged in the program longer than the average time participants spend in the program, and one group with those who exited the program earlier than that average.

The study team will invite individuals who participated in most or all of the components of the intervention in four sites to complete participant in-depth interviews. For example, in Massachusetts the team will select individuals who responded to the text and email messages, completed the work readiness screener and assessment, and received a referral to the career center. In Connecticut, the team will select individuals who received the new assessment and received behaviorally informed reminders for referral appointments. These individuals also will vary in key characteristics, such as gender, race and ethnicity, employment status or work experience, and length of time receiving SNAP.

The team will select agency and provider staff to complete staff semi-structured interviews according to who was most closely involved with the planning and execution of the intervention in each site, while seeking a diverse range of roles to provide greater perspective on intervention operations and participants’ experiences. This will include a mix of State and local government staff and provider staff.

Table B1.4 presents the number of estimated respondents in the qualitative data collection for each of the data collection instruments.

Table B1.4. Expected number of respondents to qualitative data collection instruments, by site

Site                      Participant focus groups   Participant in-depth interviews   Staff semi-structured interviews
Colorado                  16 to 20                   15                                30
Connecticut               16 to 20                   15                                30
District of Columbia      16 to 20                   0                                 30
Kansas                    16 to 20                   0                                 30
Massachusetts             16 to 20                   15                                30
Minnesota-Hennepin        16 to 20                   0                                 30
Minnesota-Rural           16 to 20                   0                                 30
Rhode Island              16 to 20                   15                                30

Note: Focus groups and in-depth interviews will be conducted with SNAP participants; semi-structured interviews will be conducted with SNAP E&T agency and provider staff.

B2. Procedures for the Collection of Information

Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection;

  • Estimation procedure;

  • Degree of accuracy needed for the purpose described in the justification;

  • Unusual problems requiring specialized sampling procedures; and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

B2.1. Statistical methodology for stratification and sampling.

Sites will provide SNAP administrative data for all individuals in the interventions via a secure File Transfer Protocol (FTP), so there will not be any sampling in that information collection effort. Upon receipt, the study team will review the administrative data for completeness and follow up with site staff to understand any missing or anomalous data. As described in Section B1, the study team will use SNAP administrative data to randomly assign individuals to one of several research groups in each site. In six of eight sites, equal probabilities will be used to randomly assign individuals to one or more treatment groups and a control group. For example, in the District of Columbia and Minnesota-Rural, 50 percent of individuals will be assigned to the treatment group and 50 percent to the control group, while in Minnesota-Hennepin, 25 percent will be assigned to each of the three treatment groups and the control group.

In two sites (Massachusetts and Rhode Island), the evaluation design is more complex, consisting of two to three stages of random assignment. In Rhode Island, for example, individuals in two of the four treatment groups will be randomly assigned to receive an assessment later in the client flow. To ensure sufficient numbers of individuals are able to be randomly assigned at those later stages, the random assignment probabilities in the first stage will be unequal (0.20 for the control group, 0.15 in each of two treatment groups not leading to an assessment, and 0.25 in each of two treatment groups leading to an assessment). Similarly, in Massachusetts, the random assignment probabilities in the first stage will be 0.20 for the control group and 0.40 for each of two treatment groups to ensure a sufficient number of individuals will be eligible to be randomly assigned to receive assessments and referrals at a later stage. Equal probabilities will be used in later stages of random assignment. Although the interventions in Massachusetts and Rhode Island have multiple stages of random assignment, these are not nested designs (such as those that randomly assign program sites to research conditions and then randomly assign participants within program sites to research conditions). In contrast, these are multi-arm designs that are sequential, with participants in each stage randomly assigned to research conditions. As a result, the tests for the later stages of random assignment are not meant to generalize to all individuals who were initially randomly assigned in the first stage; the results of those tests will represent the effectiveness of the specific component of the intervention among participants who were eligible to receive that component. In this way, the tests of the effectiveness of the intervention components randomly assigned at later stages can be thought of as strata or subgroups in analysis, rather than byproducts of a nested design.
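To illustrate how the sequential (non-nested) stages fit together, the sketch below mirrors the Rhode Island first-stage probabilities and an equal-probability later stage; the group labels and the affirmation flag are hypothetical placeholders for data the sites would actually record.

```python
import random

rng = random.Random(2023)

# Stage 1: five research groups with unequal probabilities (Rhode Island example).
groups = ["control", "text_link", "email_link", "text_reply", "email_reply"]
probs  = [0.20,      0.15,        0.15,         0.25,         0.25]

people = [{"id": i} for i in range(5000)]
for p in people:
    p["stage1"] = rng.choices(groups, weights=probs, k=1)[0]
    p["affirmed"] = rng.random() < 0.30   # placeholder for an observed reply

# Stage 2: only individuals in the two "reply" groups who affirm interest are
# randomized again, with equal probability, to an assessment condition.
for p in people:
    if p["stage1"] in ("text_reply", "email_reply") and p["affirmed"]:
        p["stage2"] = rng.choice(["enhanced_assessment", "existing_assessment"])

# Impact estimates for stage 2 apply only to individuals eligible at that stage,
# functioning like a subgroup analysis rather than a nested design.
```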

The study team will sample individuals from the intervention for the participant survey, which four sites will administer. For three of the four sites, the team will use stratified random sampling to select individuals (with equal probability selection within strata). Strata will be formed based on research group and, if applicable, geographic area (such as county):

  • In Colorado, strata will be formed (1) based on research groups (four treatment groups and the control group) and county for three of four counties and (2) based on research groups (two treatment groups and the control group) within the fourth county. (There are fewer treatment groups in this county because of reduced cell phone coverage and the resulting inability of SNAP participants to receive text messages.)

  • In Rhode Island, five strata will be formed based on research groups (treatment groups 1 to 4 and the control group). The team will also create strata based on the assessment treatment and control groups within two of the initial-stage treatment groups to ensure sufficient numbers of respondents who will be eligible for an assessment.

  • In Massachusetts, three strata will be formed based on research groups (treatment group 1, treatment group 2, and the control group) and the five geographic locations where the intervention will be implemented. The team will also create strata based on the assessment and referral treatment and control groups within treatment groups 1 and 2 to ensure sufficient numbers of respondents who will be eligible for an assessment or a referral to a career center.

This stratification will help to ensure that individuals from all research groups are adequately represented in the participant survey sample. As described in Section B1, sampling will not be used in the fourth survey site (Connecticut). Because of the intervention’s small size, the study team will attempt to field the survey with all individuals included in it to ensure sufficient statistical power.
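A minimal sketch of the stratified selection step described above, using equal-probability simple random sampling within strata; the stratum keys and the per-stratum allocation shown are illustrative assumptions, not the study’s actual allocation.

```python
import random
from collections import defaultdict

def stratified_sample(frame, strata_keys, n_per_stratum, seed=7):
    """Draw an equal-probability simple random sample within each stratum.

    frame         -- list of dicts, one per individual in the intervention
    strata_keys   -- fields defining a stratum, e.g., ("research_group", "county")
    n_per_stratum -- dict mapping a stratum tuple to its target sample size
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for rec in frame:
        by_stratum[tuple(rec[k] for k in strata_keys)].append(rec)
    sample = []
    for stratum, members in sorted(by_stratum.items()):
        n = min(n_per_stratum.get(stratum, 0), len(members))
        sample.extend(rng.sample(members, n))
    return sample

# Example: sample 40 individuals per (group, county) cell from a synthetic frame.
frame = [{"id": i, "research_group": g, "county": c}
         for i, (g, c) in enumerate((g, c) for g in ("T1", "T2", "C")
                                    for c in ("A", "B") for _ in range(500))]
targets = {(g, c): 40 for g in ("T1", "T2", "C") for c in ("A", "B")}
survey_sample = stratified_sample(frame, ("research_group", "county"), targets)
```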

The team does not foresee any unusual problems requiring specialized sampling procedures beyond the stratified methods proposed above.

B2.2. Estimation procedures.

With an experimental design, unbiased impact estimates can be obtained using a t-test comparing the mean outcomes of treatment and control groups. Using regression procedures that control for predictive covariates improves the precision of estimates and adjusts for small baseline differences between groups that may arise by chance or from survey nonresponse or missing administrative data.

The study will estimate impacts using a regression model of the general form:

$$Y_i = \alpha + \beta_1 T_{1i} + \beta_2 T_{2i} + \cdots + \beta_k T_{ki} + \gamma' X_i + \varepsilon_i$$

where $Y_i$ is an indicator for whether individual i enrolled in SNAP E&T during the follow-up period, engaged in a SNAP E&T activity, or attained another outcome, and $T_{1i}$ through $T_{ki}$ are a series of binary treatment indicators corresponding to the k treatment groups. $X_i$ represents a vector of baseline covariates, including selected demographic characteristics. The regression coefficient on each treatment indicator ($\beta_1$ through $\beta_k$) represents the average effect of receiving that treatment on the outcome compared with the control group. For single-arm treatment designs, only $\beta_1$ will be estimated. For multi-arm treatment designs, the study will also assess the impact of the treatments relative to each other by examining the differences in regression coefficients across treatment indicators.

The study will estimate these regressions using a linear probability model for each site but will test the sensitivity of these results by estimating impacts on outcomes using logistic regression as an alternative approach. The team will use t-tests and F-tests to assess statistical significance for the parameters of interest.
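As a sketch of how these estimates might be produced (using statsmodels on simulated data, with placeholder variable names rather than the study’s actual analysis files):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated analysis file: one row per randomized individual in a site.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treat1": rng.integers(0, 2, n),      # binary treatment indicator
    "age":    rng.integers(18, 60, n),    # baseline covariates
    "female": rng.integers(0, 2, n),
})
df["enrolled"] = (rng.random(n) < 0.10 + 0.03 * df["treat1"]).astype(int)

# Linear probability model with covariates; robust (HC2) standard errors,
# since a binary outcome makes the errors heteroskedastic.
lpm = smf.ols("enrolled ~ treat1 + age + female", data=df).fit(cov_type="HC2")
print(lpm.params["treat1"], lpm.pvalues["treat1"])   # impact estimate and t-test

# Sensitivity check: logistic regression on the same specification,
# summarized as average marginal effects.
logit = smf.logit("enrolled ~ treat1 + age + female", data=df).fit(disp=0)
print(logit.get_margeff().summary())
```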

In some sites (Massachusetts and Rhode Island; Table B1.1), we will examine the impact of receiving an assessment from a caseworker or a referral to a career center or other organization, conditional on being eligible for the referral. These analyses will have the same estimation procedures as the unconditional treatment effects presented above, except they will be based on a restricted sample of individuals who were eligible to receive the assessment or referral, rather than all individuals participating in the intervention. A second and third stage of random assignment in Massachusetts and a second stage of random assignment in Rhode Island (see Table B1.1) will occur immediately before these points in the program flow, resulting in unbiased estimates of the impacts of these intervention components for the relevant subpopulations.

In addition to estimating impacts for the full sample within each site, the analysis will assess whether impacts vary across key subgroups defined by demographic characteristics such as age, presence of children in the household, and geography. In some sites such as Rhode Island, the analysis will also consider subgroups defined by work registrant and ABAWD status. To perform this analysis, the study team will add subgroup-treatment interaction effects to the model presented above and will use F-tests to assess statistical significance of subgroup impacts.

Accounting for random assignment and survey nonresponse. Analytic weights will be constructed for all analyses using SNAP and SNAP E&T administrative data or participant survey data. In analyses based on administrative data, the weights will reflect that random assignment will be carried out within each site and within blocks defined by geography, administrative office, or provider. Within each block, the actual allocation of individuals to each research group (such as to a treatment group or a control group) may not equal the assignment rate because slightly fewer or more than the targeted number of individuals may be randomly assigned in the block. For example, a block with equal assignment rates for two research groups may have had 50.3 percent and 49.7 percent of individuals assigned to the treatment and control groups, respectively. Randomization weights will correct for these deviations within each block. Weights will be inversely proportional to the research group’s actual assignment rate. This approach ensures that the sum of the weights for each research group in a block equals the number of all randomized individuals in the block. Stated differently, it accounts for the fact that the research groups are random samples from the same population universe within the site.

The randomization weights can be generalized for an individual h in block j and research group i in site k using the following formula:

$$w_{hijk} = \sum_{i=1}^{I_k} E_{hijk} \cdot \frac{N_{jk}}{n_{ijk}}$$

where $E_{hijk} = 1$ if individual h is a member of research group i in block j in site k, and equal to 0 otherwise; $n_{ijk}$ is the number of individuals assigned to research group i in block j in site k; $N_{jk}$ is the total number of individuals randomized in block j in site k; and $I_k$ is the number of research groups in site k.


In analyses based on participant survey data, the weights will reflect the random assignment process and survey nonresponse. These weights will adjust the randomization weights based on administrative data using a factor based on the survey nonresponse analysis, so that the participant survey respondents represent the individuals in each intervention’s target population. We will use propensity scoring, first estimating statistical models to predict the likelihood that a person will respond to the survey and then using that propensity score to construct nonresponse weighting adjustment factors. We will use logistic regression models, estimated separately by site and research group, to model individuals’ likelihood of completing the survey. Propensity scores ($\hat{p}_i$) will be estimates of

$$p_i = \Pr(R_i = 1 \mid Z_{1i}, \ldots, Z_{mi}) = \frac{\exp(\delta_0 + \delta_1 Z_{1i} + \cdots + \delta_m Z_{mi})}{1 + \exp(\delta_0 + \delta_1 Z_{1i} + \cdots + \delta_m Z_{mi})}$$

which represents the probability that an individual responded to the survey ($R_i = 1$), conditional on a series of baseline covariates ($Z_{1i}$ through $Z_{mi}$), including indicators for study groupings such as geographic unit within the site, as well as SNAP administrative data variables representing demographic characteristics that are available for both respondents and nonrespondents. The nonresponse weighting adjustment will be calculated as the inverse of the response propensity score ($1/\hat{p}_i$). We will trim and normalize final weights to minimize the influence of outlier weights.
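A minimal sketch of the two-step weight construction, under assumed variable names: a randomization weight inversely proportional to each group’s actual assignment rate within a block, multiplied by an inverse-propensity nonresponse adjustment from a logistic regression, then trimmed and normalized.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey sample; all field names are hypothetical.
rng = np.random.default_rng(1)
n = 1200
df = pd.DataFrame({
    "block":     rng.integers(0, 4, n),
    "group":     rng.choice(["T", "C"], n),
    "age":       rng.integers(18, 60, n),
    "responded": rng.integers(0, 2, n),
})

# Step 1: randomization weight = (block size) / (group size within block), so
# each group's weights sum to the number of individuals randomized in the block.
group_n = df.groupby(["block", "group"])["group"].transform("size")
block_n = df.groupby("block")["block"].transform("size")
df["w_rand"] = block_n / group_n

# Step 2: response propensity from a logistic regression on baseline covariates
# observed for respondents and nonrespondents alike.
phat = smf.logit("responded ~ age + C(block) + C(group)", data=df).fit(disp=0).predict(df)
df["w_final"] = df["w_rand"] / phat.clip(lower=0.05)   # trim extreme propensities

# Normalize among respondents so the weights sum to the full sample size.
resp = df["responded"] == 1
df.loc[resp, "w_final"] *= n / df.loc[resp, "w_final"].sum()
```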

Statistical power. As described in Section B1, the numbers of individuals needed for each intervention were partly determined by focusing on minimum detectable impacts (MDIs), or minimum changes in percentages of individuals who respond to electronic recruiting messages, enroll in SNAP E&T, or engage in E&T activities, depending on the site. Each intervention will be sufficiently powered to detect impacts for the site-specific outcomes based on SNAP E&T administrative data and to provide descriptive context for impacts based on participant survey data. Table B2.1 shows the anticipated MDIs in each site. (Site-specific MDI tables are available in Appendix U.)

Table B2.1 presents many comparisons between different research groups. The study team’s primary focus in each site will be on impacts measured by comparing the outcomes of one or more treatment groups with the outcome for the control group. This comparison will answer the key question of whether the intervention, or a component of the intervention, improves on the status quo. However, in sites with more than one treatment group, rather than pool the treatment groups and compare the pooled outcome with the control group outcome, the study team plans to estimate separate impacts for each treatment group and use an F-test of the hypothesis that any of the contrasts differ from the others. The team also will compare outcomes among different treatment groups. For example, in Minnesota-Hennepin, the study team primarily will compare each of the three treatment groups, which will receive messages with different behaviorally informed content, with the control group, but also will consider how the outcomes vary among the three treatment groups to assess whether the content of the message matters.

Table B2.1. Minimum detectable impacts for outcomes based on administrative data

Each entry shows the research objective, followed in parentheses by the number of individuals per group and the MDI in percentage points.

Colorado (based on starting sample size of 9,000 across all research groups)
  • Estimate effect of receiving behaviorally informed, weekly text messages on percentage of individuals that enroll in SNAP E&T in Broomfield, Denver, and Larimer counties (1,800 and 1,800 per group; MDI = 2.7)
  • Estimate effect of receiving behaviorally informed postcards on percentage of individuals that enroll in SNAP E&T in Montrose County (833 and 833; MDI = 4.0)

Connecticut (based on starting sample size of 808 across all research groups)
  • Estimate effect of receiving provider-driven assessment and receiving text message reminders for case management appointments on percentage of individuals that complete E&T program (269 and 269; MDI = 11.8)

District of Columbia
  • Estimate effect of new assessment and enhanced case management approach on percentage of individuals who engage in SNAP E&T among pre- and post-intervention groups (300 and 240; MDI = 11.8)
  • Estimate effect of reminders and text message nudges on percentage of individuals who attend their case management appointment among RCT post-intervention groups (120 and 120; MDI = 17.7)

Kansas (based on starting sample size of 1,200 across all research groups)
  • Estimate effect of receiving behaviorally informed text message appointment reminders compared to not receiving them on percentage of individuals who remain engaged in SNAP E&T activities (300 and 300; MDI = 11.2)
  • Estimate effect of receiving behaviorally informed text message nudges compared to not receiving them on percentage of individuals who remain engaged in SNAP E&T activities (300 and 300; MDI = 11.2)
  • Estimate effect of receiving behaviorally informed text message appointment reminders and nudges compared to not receiving them on percentage of individuals who remain engaged in SNAP E&T activities (300 and 300; MDI = 11.2)
  • Estimate effect of receiving behaviorally informed text message reminders or nudges compared to not receiving them on percentage of individuals who remain engaged in SNAP E&T activities (900 and 300; MDI = 9.1)

Massachusetts (based on starting sample size of 30,000 across all research groups)
  • Estimate effect of message content on percentage of individuals expressing interest in learning more about E&T services (12,000 and 12,000; MDI = 1.8)
  • Estimate effect of outreach message on percentage of individuals who enroll in SNAP E&T (9,000 and 6,000; MDI = 2.3)
  • Estimate effect of assessment on percentage of individuals who enroll in SNAP E&T (2,100 and 3,000; MDI = 3.9)
  • Estimate effect of warm handoff referral on percentage of individuals who enroll in SNAP E&T (900 and 900; MDI = 6.4)

Minnesota-Hennepin (based on starting sample size of 4,700 across all research groups)
  • Estimate effect of receiving behaviorally informed set of text messages on percentage of individuals that enroll in SNAP E&T (1,175 and 1,175; MDI = 5.6)

Minnesota-Rural (based on starting sample size of 4,500 across all research groups)
  • Estimate effect of receiving behaviorally informed set of text messages on percentage of individuals that enroll in SNAP E&T (2,250 and 2,250; MDI = 4.1)

Rhode Island (based on starting sample size of 5,000 across all research groups)
  • Estimate effect of message content on percentage of individuals who complete and submit online contact form to express interest in learning more about E&T services (1,000 and 1,000; MDI = 6.1)
  • Estimate effect of replying to outreach messages on percentage of individuals who enroll in SNAP E&T (560 and 1,000; MDI = 7.2)
  • Estimate effect of enhanced assessment on whether individuals are a “better fit” with providers, measured using the percentage of individuals who start intake at a provider (360 and 360; MDI = 10.2)

Note: MDIs are based on a two-tailed test with 0.80 power at a 0.05 significance level and assume the mean value of the binary outcome is 0.50 (for all sites except Colorado, which assumes a mean value of 0.1 based on baseline site data), baseline variables explain 5 percent of the variation in the outcome, and response rates are 100 percent for outcomes based on SNAP E&T administrative data. The design effect for outcomes based on administrative data is 1.0 due to the absence of weighting.

MDIs for outcomes based on administrative data are based on the following formula:

$$\mathrm{MDI} = \left[ t_{df}^{-1}\!\left(1 - \tfrac{\alpha}{2}\right) + t_{df}^{-1}(1 - \beta) \right] \sqrt{\mathrm{deff} \cdot (1 - R^{2}) \cdot \sigma^{2} \cdot \left( \frac{1}{n_{T}} + \frac{1}{n_{C}} \right)}$$

Where $\alpha$ is the significance level, $\beta$ is the probability of type II error, and $t_{df}^{-1}$ is the inverse of the t distribution with degrees of freedom (df) equal to the total sample size minus 1. Deff is the assumed design effect, $R^{2}$ is the share of variation in the outcome that can be explained by baseline covariates, $\sigma^{2}$ is the variance of the outcome, and $n_{T}$ and $n_{C}$ are the treatment and control group sample sizes, respectively.

The MDI calculations presented in Table B2.1 use the following parameter values: $\alpha = 0.05$, $1 - \beta = 0.80$, $R^{2} = 0.05$, $\mathrm{deff} = 1$, and $\sigma^{2} = 0.50(1 - 0.50) = 0.25$ for all states except Colorado, for which $\sigma^{2} = 0.10(1 - 0.10) = 0.09$. $n_{T}$ and $n_{C}$ are as listed in each row of Table B2.1.
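For reference, the following minimal sketch (illustrative only; it is not part of the study's analysis code) evaluates this formula using SciPy and reproduces representative MDI values from Table B2.1:

```python
from scipy.stats import t

def mdi(n_t, n_c, sigma2=0.25, deff=1.0, r2=0.05, alpha=0.05, power=0.80):
    """Minimum detectable impact for a binary outcome, two-tailed test."""
    df = n_t + n_c - 1                                    # total sample size minus 1
    factor = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)  # sum of t quantiles
    se = (deff * (1 - r2) * sigma2 * (1 / n_t + 1 / n_c)) ** 0.5
    return factor * se

# Kansas reminder arm (300 and 300): about 0.112, i.e., 11.2 percentage points
print(round(mdi(300, 300), 3))
# Massachusetts message-content test (12,000 and 12,000): about 0.018 (1.8 points)
print(round(mdi(12_000, 12_000), 3))
```

Applying the same function to the remaining rows, with the Colorado variance of 0.09 where applicable, reproduces the MDIs in Table B2.1 to within rounding.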

Based on SNAP E&T administrative data, MDIs are generally less than 5 percentage points in larger sites such as Colorado and Massachusetts and between 5 and 12 percentage points in smaller sites such as Connecticut, Kansas, and Minnesota-Hennepin. In sites with multiple treatment arms, such as Kansas, combining arms for specific analyses (for example, combining the arms that receive text message appointment reminders with those that receive behavioral nudges to promote engagement) and comparing outcomes between the aggregated treatment group and the control group would yield smaller MDIs, where appropriate (Table B2.1).

Some sites such as Rhode Island and Massachusetts have multiple stages of random assignment in their intervention design. Analyses examining the impact of components that will be randomly assigned at later stages will be based on fewer people than the analyses examining the impact of the full intervention design. For this reason, the MDIs are larger in Massachusetts and Rhode Island for the analyses examining the impact of receiving an assessment and are larger in Massachusetts for the analyses examining the impact of receiving a referral to the career center (Table B2.1).

The MDIs in Table B2.1 are in line with the impacts detected by other studies of similar interventions, which have ranged from about 10 to 25 percentage points. For example, Finkelstein and Notowidigdo (2019) found that an effort to increase participation in SNAP by letting households know they are eligible for benefits and helping them apply led to a 12-percentage point increase in participation rates. Darling et al. (2017) found that behaviorally informed messages encouraging participation among those eligible for an Unemployment Insurance (UI) program led to an increase in participation of 14 to 15 percentage points. Castleman and Page (2016) found that sending community college freshmen text messages reminding them to renew their financial aid and offering one-on-one assistance from an advisor increased the share of students who persisted through their sophomore year by nearly 25 percent. McCay et al. (2018) found that sending electronic messages to low-income parents encouraging them to attend program orientations and meet with case managers increased attendance rates by up to 13 percentage points.

Most existing studies have estimated intervention effects by comparing outcomes for treatment and control groups. Some of the analyses proposed for the current study instead compare outcomes across treatment groups. These impacts may be smaller than those measured relative to the control group, so comparisons between treatment groups may have less statistical power. However, these comparisons serve as important tests for sites and for FNS to understand the range of intervention options available to SNAP E&T agencies.

Participant survey data generally will not be used to estimate impacts of the interventions on outcomes.2 Instead, these data will be used to describe participants’ experiences in the intervention and to provide additional information for some of the outcomes based on SNAP E&T administrative data. For example, in Colorado, the SNAP E&T administrative data will indicate enrollment in SNAP E&T, while the participant survey will describe why individuals did or did not enroll in the program and how these reasons differed across research groups. In some sites, like Rhode Island and Massachusetts, the participant survey asks questions about experiences specific to the intervention components offered to the treatment group, such as reactions to the behaviorally informed text messages. In these cases, the minimum detectable differences (MDDs) measure how outcomes differ across multiple treatment groups or how they differ across subgroups of treatment group members defined by demographic or household characteristics. The MDDs in analyses based on participant survey data are larger than the MDIs based on administrative data due to the smaller sample sizes and larger design effects from weighting in the survey data. Because all of these comparisons are intended to provide contextual information and do not serve as the primary measure of effectiveness of the intervention, the MDDs in the analyses based on the survey data are acceptable to FNS and the study team.

The MDDs based on the participant survey data range from 8.9 to 21.7 percentage points (Table B2.2). Because the MDDs are larger than the MDIs, the estimates in Table B2.2 reflect how the study team will pool treatment groups within a site wherever possible and compare the pooled estimate to the estimate for the control group to increase statistical power.

Table B2.2. Minimum detectable differences for contextual outcomes based on survey data

Site and research objective | Number of respondents per group | MDD (percentage points)

Colorado (based on starting sample size of 800 across all research groups)
Estimate effect of receiving behaviorally informed, weekly text messages on percentage of individuals that enroll in SNAP E&T | 512 and 128 | 8.9

Connecticut (based on starting sample size of 808 across all research groups)
Compare between treatment and control groups the percentage of individuals that experience a barrier to employment or participation in SNAP E&T activities, such as lack of childcare | 433 and 214 | 12.5
Compare between two equally sized subgroups of individuals who took the participant-driven assessment the percentage of individuals reporting that the assessment helped them better understand their own needs or goals | 217 and 217 | 14.4

Massachusetts (based on starting sample size of 1,200 across all research groups)
Compare between two equally sized subgroups the percentage of treatment group members who received but did not respond to text messages and report that they were not interested in participating in the program | 384 and 384 | 10.8
Compare between two equally sized subgroups the percentage of treatment group members who completed the assessment and report that the interview helped them better understand their own needs or goals related to their career and employment | 96 and 96 | 21.7
Compare responses for the combination of both treatment groups with the control group for whether individuals are receiving services from any providers to help them further their education or training or help them prepare for or find a job | 768 and 192 | 12.1

Rhode Island (based on starting sample size of 1,200 across all research groups)
Compare responses among two equally sized subgroups for a combination of the two treatment groups who were sent links to the website of providers, regarding whether they understood how to navigate the website | 240 and 240 | 13.7
Compare responses for the combination of all treatment groups with the control group for whether individuals are receiving services from any providers to help them further their education or training or help them prepare for or find a job | 768 and 192 | 12.1

Note: MDDs are based on a two-tailed test with 0.80 power at a 0.05 significance level and assume the mean value of the binary outcome is 0.50 (0.10 for Colorado, consistent with Table B2.1), baseline variables explain 5 percent of the variation in the outcome, and response rates are 80 percent. The design effect is 1.2.

The MDD calculations for survey-based outcomes presented in Table B2.2 use the following parameter values: $\alpha = 0.05$, $1 - \beta = 0.80$, $R^{2} = 0.05$, $\sigma^{2} = 0.25$ for Connecticut, Massachusetts, and Rhode Island and $\sigma^{2} = 0.09$ for Colorado, $\mathrm{deff} = 1.2$, and $n_{T}$ and $n_{C}$ as listed in each row of Table B2.2, assuming a response rate of 80 percent.
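Continuing the illustrative sketch introduced after Table B2.1, the survey-based MDDs follow from the same formula once the design effect is set to 1.2 and respondent counts replace starting sample sizes:

```python
# Colorado survey contrast (512 and 128 respondents; sigma^2 = 0.09, deff = 1.2)
print(round(mdi(512, 128, sigma2=0.09, deff=1.2), 3))   # about 0.089 (8.9 points)
# Massachusetts pooled-treatment contrast (768 and 192 respondents)
print(round(mdi(768, 192, deff=1.2), 3))                # about 0.121 (12.1 points)
```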

FNS does not plan to adjust for multiple comparisons in this evaluation. Multiple comparison adjustments are appropriate when there are many tests of an intervention on the same or similar outcomes. However, for most of the sites, the analysis consists of conducting a single test of whether the intervention increased the percentage of SNAP participants who enroll or engage in the SNAP E&T program. In two sites (Massachusetts and Rhode Island) in which there are multiple phases of random assignment corresponding to different components of the intervention, the analysis will test whether each intervention component affects different outcomes. For example, in Massachusetts, one analysis will test whether outreach messages assigned at the first point of random assignment increase the percentage of participants who express interest in learning more about SNAP E&T services (a proximal outcome). A second analysis will test whether the outreach messages ultimately lead to higher rates of enrollment in SNAP E&T (a more distal outcome). Two additional analyses will test the impact on the rate of SNAP E&T enrollment of offering a work readiness assessment or providing a warm handoff to career center staff. Because these latter tests are not evaluating the same component of the intervention as the first test, a multiple comparisons adjustment is not needed.

B2.3 How information will be collected and the data collection instruments will be implemented

Collection of administrative data. In all eight sites, state SNAP agencies will provide SNAP administrative data to the study team via a secure File Transfer Protocol (FTP) site at the start of the intervention so individuals can be randomly assigned to research groups. State SNAP E&T agencies and providers will provide participants’ outcome information at the end of the intervention either using a secure FTP site or using an information system designed specifically for the study’s evaluation. The study team will review these data to check for missing information and to review distributions of variables to ensure there are no unrealistic or unexpected values of variables in the data. The team will follow up with site staff to understand any missing or odd patterns in the data.
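As an illustration of these checks, a minimal sketch follows (the file and column names are hypothetical; actual extracts will vary by site):

```python
import pandas as pd

# Hypothetical site extract received over the secure FTP site
extract = pd.read_csv("snap_et_outcomes.csv")

# Missing-data review: share of missing values per variable, highest first
print(extract.isna().mean().sort_values(ascending=False))

# Distribution review: binary outcomes should take only the values 0 or 1
bad_codes = ~extract["enrolled_et"].dropna().isin([0, 1])
print("Unexpected enrollment codes:", bad_codes.sum())

# Logic review: enrollment dates should not precede random assignment
enroll = pd.to_datetime(extract["enroll_date"], errors="coerce")
assigned = pd.to_datetime(extract["ra_date"], errors="coerce")
print("Enrollment before random assignment:", (enroll < assigned).sum())
```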

Collection of participant survey data. The study team will administer the 15-minute participant survey in four sites (Appendix E1.1 – E4.4), offering sample members two response options: web or computer-assisted telephone interview (CATI). The field period for each site will be 16 weeks. During the first three weeks, respondents can complete the survey by web or by calling into the team’s data collection call center to conduct the interview by phone. After that time, trained interviewers will make outbound calls to nonrespondents to complete the survey by phone.

Two weeks before data collection begins in each site, the team will send advance letters (Appendix E5.1) to sample members, reminding them of the importance of the SNAP E&T program and encouraging them to complete the survey. At the start of data collection, individuals will receive an email invitation and text message to complete the survey (Appendix E5.2). Email invitations and texts will be written in both English and Spanish, below an 8th grade reading level to ensure comprehension. They will include a web address for the web-based survey and a toll-free number respondents can call to ask questions or complete the survey, and they will describe the $30 gift card incentive for completing the survey. The team will send email and text reminders (Appendix E5.3) each week for the first month and will make reminder calls to nonrespondents after that time, as well as send a reminder letter (Appendix E5.4) by first-class mail with address service requested to update incorrect addresses. During data collection, the team will also send a postcard encouraging participants to complete the survey (Appendix E5.5).

Collection of staff questionnaire data. The study team will ask all staff directly associated with the interventions to respond to a 15-minute web-based questionnaire (Appendix I1), which will enable the team to collect more structured information than semi-structured interviews allow and to hear from a broader set of staff. These questionnaires will be deployed before the team collects qualitative data from staff and participants, so the information collected can inform those discussions. The study team will send an email invitation to staff to complete the questionnaire (Appendix I2). The email will describe the questionnaire, its purpose, and its length, and will provide the user-specific URL. It will also inform potential respondents that the questionnaire is voluntary.

Collection of qualitative data (staff semi-structured interviews, participant focus groups, and participant in-depth interviews). The study team will conduct a four-day in-person site visit to each of the eight sites to collect qualitative information about the intervention from the perspectives of SNAP participants and SNAP E&T agency and provider staff.

During the site visit, two-person teams who worked with the site leadership through the intervention design phase of the study will conduct semi-structured interviews with intervention staff from the State SNAP agency, local SNAP offices, E&T service providers, and relevant partner organizations (Appendix H1). Types of respondents will vary by intervention, but the team will aim to interview all staff involved in the intervention (for example, about 15 State administrators and 15 local, frontline staff in each of the 8 sites). Each interview will last approximately 60 to 90 minutes. The interviews will be scheduled in close consultation with the site leadership to ensure minimal disruption of program operations and will be conducted by trained interviewers.

During each of the eight site visits, the study team also will conduct two 90-minute focus groups with 8 to 10 participants each (Appendix F1 and F1.1). Focus group participants will be identified using the contact information in the SNAP administrative data obtained at the start of the intervention. Each focus group will be conducted by two trained staff members: one will moderate the discussion while the other takes notes on a laptop. At the beginning of the session, the study team will provide an overview of the purpose of the study and of the focus group and read the informed consent text described in Part A. Participants will be asked to sign a release agreeing to the digital recording of the discussion and to the use of the recording for research purposes, provided respondents are not identified by name. When the session is complete, the moderator and note taker will debrief, review the notes for major themes, and ensure that all research questions were answered. Topical areas found to be lacking in coverage will be emphasized during the next focus group, and question probes will be tailored accordingly. Participants will each receive a $50 gift card for attending a focus group.

In site visits to four of the eight sites, the team also will conduct in-depth interviews (IDIs) with 15 participants per site using ethnographic techniques (Appendix G1). The team will use the same process for recruiting IDI participants as for focus groups but will exclude focus group participants from the pool of individuals with whom IDIs are conducted. Study team members will use the contact information obtained from the administrative data to email and text participants to schedule the interviews (Appendix G2.1). Recruiters will send a confirmation email and text after confirming a time; participants will also receive a reminder email and text a few days before the appointment (Appendix G2.2).

The one-on-one interviews will occur in person at a location convenient for participants. Prior to the start of the interview, interviewers will obtain participants’ written consent to participate and record their verbal consent to the interview being recorded. The interviews will last about 90 minutes. Individuals will receive a $50 gift card for participating in the interview.

B3. Methods to Maximize the Response Rates and to Deal with Nonresponse

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.

The study team will employ a variety of methods to maximize response rates and deal with nonresponse. This section describes these methods for the participant survey and for qualitative data collection efforts.

Participant survey. Several aspects of the data collection design and additional efforts undertaken by the team administering the survey will help to ensure success in gaining participants’ cooperation and a high response rate:

Administering a shorter survey. Designed to take 15 minutes to complete, the participant survey is shorter than several recent FNS participant-level surveys, including those in the SNAP E&T Pilots evaluation (25 to 30 minutes) and the SNAP Food Security evaluation (30 minutes), making it more likely individuals will start and complete the survey.

Offering the survey in both web and CATI formats. The participant survey will primarily be administered as a web survey. Respondents will be notified of the survey by mail and email (Appendix E5.1-E5.5). They will be encouraged to complete the mobile-optimized online instrument but can call a toll-free number to complete it by telephone if they prefer. Respondents will also receive mail and email reminders. If the survey has not been completed within four weeks of initial contact, trained telephone interviewers will follow up with nonrespondents to complete the instrument by phone.

Using recently obtained contact information. In three of the four sites (Colorado, Massachusetts, and Rhode Island), the survey will be administered within a few months after SNAP participants initially certify or recertify for the program, meaning participants’ contact information will be more reliable than in surveys that allow more time to pass between obtaining baseline contact information and interviewing. Similarly, in Connecticut, SNAP participants will have enrolled in the community college program at most one or two months before the launch of the survey, making it more likely their contact information will be reliable. For telephone numbers that are invalid or no longer in service at the time of the survey, the team will attempt to obtain updated numbers from vendor databases. For more difficult cases, the team will conduct more intensive searches using the physical and email addresses provided in the SNAP administrative data collected for all individuals.

Building on established relationships between site staff and potential respondents. In Connecticut, the study will be able to leverage the relationships between potential respondents and the community college coaches who regularly communicate with their students. These coaches often check on students to promote class and program attendance, and will be point people for the intervention. They will be able to encourage students involved in the intervention to respond to the survey.

Offering the survey in multiple languages. The survey will be available in both English and Spanish to engage more respondents in their preferred language.

Notifying and sending reminders to nonrespondents. Advance letters will be mailed to sample members before the survey, and the team will send reminder emails and texts throughout the fielding period. The team will also mail a reminder postcard, reminder letter, and refusal conversion letters to those who have not completed interviews or who mildly refuse to participate. Nonrespondents will receive calls at different times of the day and days of the week to increase the likelihood of reaching participants when they are available.

Utilizing trained interviewers. To develop the skills necessary to encourage participation among low-income individuals, all telephone interviewers receive general interviewer training before being assigned to a study. This training involves essential interviewing skills, probing, establishing rapport, avoiding refusals, eliminating bias, and being sensitive to at-risk and special populations. In addition, all interviewers will receive study-specific training that reviews study goals, instruments, and conveys best practices for interviewing for the specific study.

Leveraging experience in refusal conversion. The team has developed and refined methods to build rapport and overcome the reluctance of sample members to participate in interviews. Trained interviewers will use multi-pronged approaches to focus on preventing and converting refusals when conducting the telephone survey portion of the data collection. The strategies aim to convince sample members that (1) the study is legitimate and worthwhile, (2) their participation is important and appreciated, and (3) the information provided will be held private and will not affect their job or their eligibility for SNAP or other benefits.

Offering an incentive. Respondents will be offered a $30 incentive (in the form of a gift card) to complete the survey. Previous studies have demonstrated that providing incentives can help increase response rates in full-scale data collection efforts, reduce nonresponse bias, and improve population representativeness (Singer and Ye 2013).

Monitoring survey performance to make real-time changes in outreach. Throughout the data collection period for the participant survey, the team will use reports to monitor response rates and missing data for each site and adapt the data collection appropriately to yield a high response rate. For example, the team will determine the most productive calling windows based on monitoring of completed surveys during the early part of the data collection period and adapt the staffing plan to maximize interviewer efficiency.

If we do not achieve an 80 percent response rate, we will use SNAP administrative data to conduct a nonresponse analysis per OMB guidelines. To assess whether nonresponse bias exists, we will obtain SNAP administrative information for the study participants at the time of random assignment, including name; contact information; demographic characteristics such as age, gender, and education level; household characteristics such as the presence of children and household income; and employment and income information collected in SNAP certification or recertification processes. Using the baseline data, we will (1) compare survey respondents and nonrespondents within the treatment and control groups, (2) test the significance of differences between respondents’ and nonrespondents’ characteristics, (3) look at whether these differences are the same across treatment and comparison groups, and (4) compare characteristics of the respondent and nonrespondent samples with those of the frame using a propensity score estimation model.
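A minimal sketch of these checks appears below (the file and variable names are hypothetical; the actual analysis will use the baseline fields listed above):

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import ttest_ind

frame = pd.read_csv("baseline_admin_data.csv")   # hypothetical baseline file
covariates = ["age", "female", "has_children", "household_income"]

# Steps (1)-(3): compare respondents and nonrespondents within each research
# group and test the significance of differences in baseline characteristics
for group, g in frame.groupby("treatment"):
    resp, nonresp = g[g["responded"] == 1], g[g["responded"] == 0]
    for var in covariates:
        stat, p = ttest_ind(resp[var], nonresp[var], equal_var=False)
        print(f"group={group} {var}: p={p:.3f}")

# Step (4): model response propensity from baseline characteristics to compare
# respondent and nonrespondent samples with the full frame
X = sm.add_constant(frame[covariates])
model = sm.Logit(frame["responded"], X).fit(disp=0)
print(model.summary())
```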

Staff questionnaires. Staff who are directly associated with the interventions will be asked to respond to the 15-minute questionnaire on their experiences implementing the intervention (Appendix I1). A purposive sample of staff will be invited to complete the questionnaire, so respondents will not be representative of all agency and provider staff in a statistical sense. The invitation will be sent by email with a customized URL (Appendix I2), and the questionnaire will be administered by web to provide greater flexibility: respondents will be able to start, stop, and return to it as they wish. Site leaders will help to promote completion among their staff, and the study team will send reminder emails encouraging staff to complete the questionnaire.

Qualitative data collection. For the qualitative data collection, including focus groups and in-depth interviews with participants and semi-structured interviews and questionnaires with staff, a variety of methods will be used to maximize individuals’ and program staff’s voluntary participation and deal with nonresponse.

Participant in-depth interviews. The study team will conduct up to 15 in-person in-depth interviews with individuals in four of the eight sites (60 in total). Because some scheduled interviews may not occur due to cancellations and no-shows, the study team will overschedule appointments to reach the target of 15 interviews per site and will attempt to reschedule interviews while on-site. The team will send an invitation email and text to potential interviewees (Appendix G2.1). Scheduling will be flexible to accommodate participants’ schedules and needs; recruiters and study team members will schedule interviews at times convenient to respondents, including daytime and evening. Those who agree to participate will be mailed a confirmation letter with appointment information. Respondents will receive a reminder email and text 24–48 hours before the scheduled appointment (Appendix G2.2). Those who participate in the in-depth interviews will receive a $50 gift card at the end of the 90-minute interview.

Participant focus groups. Because some participants will not show up for the focus group, the study team will overrecruit participants to obtain the target number (typically recruiting 15 participants to ensure 8 to 10 attend the session). Focus groups (Appendix F1) will be held at times convenient to most respondents, such as in the evening. Those who agree to participate will be sent a confirmation email and text with appointment information and will receive a reminder email and text 24–48 hours before the focus group (Appendix F2.2). Those who participate will receive a $50 gift card at the end of the 90-minute focus group.

Staff semi-structured interviews. Since the initial stages of the intervention design process, each site has been led by a SNAP E&T site leader and a study team leader. These leaders will work with program staff before each visit to ensure that the timing of the semi-structured interviews (Appendix H1) is convenient. Because the visits will involve several interviews and activities each day, flexibility will be built into the scheduling of specific interviews and activities to accommodate the needs of respondents and site operations. If a respondent is unable to meet at the scheduled time, the study team will schedule an alternative time while on-site or a follow-up call at a more convenient time or arrange to meet with an alternate respondent in a similar position.

B4. Test of Procedures or Methods to be Undertaken

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

The study team pretested the participant surveys in April and May of 2022. The participant surveys will assess the same topics for all sites (barriers to engaging with site services, program satisfaction, and reasons for engagement decisions) but include different questions based on each site’s services and intervention (Appendices E1 – E4). Each site’s participant survey was pretested with five participants who were available within the pretest timeframe, across the four intervention sites. The study team conducted telephone interviews lasting 30 to 60 minutes with each pretest respondent to administer the pretest and solicit feedback on the site-specific survey. The interviews asked respondents to identify and share concerns about unclear questions, suggest changes to response options, and comment on questions that took too long to answer, overall burden, and the flow of the survey. The team used cognitive methods to gauge respondents’ understanding of the intent of questions and response options, focusing on survey items that asked about topics that are complex or difficult to measure. As a result of the pretest, the team made several revisions to the surveys, including adjusting response options for clarity and adding examples. The team also dropped or significantly shortened several questions in response to pretest feedback. After these revisions, the burden estimate of 15 minutes accurately represents the length of each participant survey. Pretest findings are summarized in the SNAP E&T RCE Pretest Findings Memo (Appendix M).

B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

Table B5.1 lists staff consulted on statistical aspects of the design. The same staff will be responsible for collecting and analyzing the study data.

Table B5.1. Individuals consulted on statistical aspects of study design

Mathematica staff | Title | Phone | Email
James Mabli | Project Director | 617-301-8997 | [email protected]
Gretchen Rowe | Principal Researcher | 202-484-4221 | [email protected]
Daniela Golinelli | Sr. Statistician | 202-838-3597 | [email protected]
Kim McDonald | Survey Researcher | 312-585-3311 | [email protected]
Jonah Deutsch | Senior Researcher | 312-994-1018 | [email protected]
Peter Schochet | Senior Fellow | 609-936-2783 | [email protected]
Pamela Holcomb | Principal Researcher | 202-250-3573 | [email protected]
Leah Shiferaw | Researcher | 510-285-4686 | [email protected]
Kelley Monzella | Sr. Program Analyst | 312-585-3308 | [email protected]
Dan Friend | Senior Researcher | 202-264-3474 | [email protected]

USDA staff | Title | Phone | Email
Mehreen Ismail, FNS | Social Science Research Analyst | 703-305-2960 | [email protected]
Anna Vaudin, FNS | Social Science Research Analyst | 703-305-0414 | [email protected]
Peter Quan | NASS Reviewer | 202-720-5269 | [email protected]



1 A strengths-based approach focuses on the participant’s areas of strength rather than exclusively discussing areas of need. For example, an assessment might ask students to identify both areas in their life where they need help and areas in which they are thriving.

2 One exception is Connecticut, where it is possible that the site will not be able to provide all outcome measures based on SNAP E&T administrative data. In the event this occurs, survey data will inform impact estimates of the intervention on outcomes of engagement in the SNAP E&T program, barriers to participation or employment, and attendance in community college classes.
