Rapid Cycle Evaluation of Operational Improvements in Supplemental Nutrition Assistance Program (SNAP) Employment & Training (E&T) Programs

OMB: 0584-0680


Appendix L. FNS Responses to NASS Comments










In the following document, the original text is from NASS; each FNS reply follows the comment it addresses and is marked <Response>:

Attached are the detailed writeup, comments, and recommendations from members of NASS' Sampling and Frames Development Section (SFDS) for the SNAP E&T ICR. Peter Quan, head of NASS' SFDS, was the lead reviewer.

Please note that the writeup focused on a limited number of the sampling plans (District of Columbia, Massachusetts, and Minnesota-Rural). NASS does not often see multiple sampling plans for multiple reports in a single ICR.

From Supporting Statement A, question 16, it appears that each sampling plan will result in its own publication with its own summary results; that is, there will not be combined US-level summary tables or indications from the multiple sampling plans. Please confirm.

<Response> FNS confirms that each sampling plan will result in its own publication with its own summary results. The design does not consist of multiple samples with which to assess the effectiveness of a single intervention on participants’ outcomes. Although all interventions attempt to improve SNAP E&T program processes through increasing enrollment or engagement in SNAP E&T programs, the evaluation consists of using data to assess the effectiveness of eight separate and distinct interventions. FNS does not plan to create combined US-level summary tables nor combine results across any of the eight sites.

Sampling Frame

Table B1.1 defines the sampling entities that comprise the eight sampling frames. However, it is not clear how these eight sites were selected for the study. It is also unclear why the sampling frame elements are not similar and what the specific stratifications are for each of the eight sites.

<Response>FNS has added a paragraph to Section B1 that describes how sites expressed interest in participating in the study, were purposively selected by FNS, and then created their designs. FNS did not select sites to be representative of all geographies or providers within a state. As such, the findings are not meant to generalize to all SNAP E&T participants or programs nationwide.

Sample Size Determination

Sample size determination processes were not quite clear: sample size formulas were not included in the document, and sample size parameters were only briefly mentioned in the Table B2.1 footnote. It would be helpful if the document contained the specific method, formula, or code used for sample size determination.

In the absence of a specific formula or methodology, SFDS attempted to reproduce certain results. After several attempts, SFDS succeeded in using the following formula to derive sample sizes for the District of Columbia and Minnesota-Rural sites only:

n = \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\left[p_1(1-p_1) + p_2(1-p_2)\right]}{\left(p_1 - p_2\right)^{2}}

SFDS's sample size results were similar to FNS's: 279 per group (558 total) for the District of Columbia and 2,324 per group (4,648 total) for Minnesota-Rural. It was not clear why the sample sizes for the two DC groups were unbalanced while the Minnesota-Rural groups were balanced, or how the sample was allocated between the two DC groups. It was also unclear whether, when multiple comparisons were made (as in Massachusetts), the sample size was adjusted for type I error (since multiple testing increases the type I error rate).
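For reference, the formula above can be verified with a short script. The sketch below assumes a two-sided test at the stated alpha and power; the illustrative rates p1 and p2 are hypothetical, since the parameters behind the 279 and 2,324 per-group figures appear only in the Table B2.1 footnote, not here.

import math
from scipy.stats import norm

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided test of a difference in proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # z_{1 - alpha/2}
    z_beta = norm.ppf(power)            # z_{1 - beta}
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance_sum / (p1 - p2) ** 2)

# Hypothetical enrollment rates for illustration only
print(two_proportion_sample_size(0.20, 0.30))   # ~291 per group
print(two_proportion_sample_size(0.10, 0.13))   # smaller difference, larger n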

<Response>FNS has added the MDI formulas used to produce the MDIs in each site to Section B2 as well as to all tables in Appendix U (Tables B2.1a through B2.1h).

FNS does not plan to adjust for multiple comparisons in this evaluation. Multiple comparison adjustments are appropriate when there are many tests of an intervention on different outcomes. However, for most of the sites, the analysis consists of conducting a single test of whether the intervention increased the percentage of SNAP participants who enroll or engage in the SNAP E&T program. In two sites (Massachusetts and Rhode Island) in which there are multiple phases of random assignment, the analysis tests whether each component of the intervention affects different outcomes. For example, in Massachusetts, one analysis tests whether outreach messages assigned at the first point of random assignment increase the percentage of participants who express interest in learning more about SNAP E&T services. Another component tests whether the outreach messages ultimately lead to higher rates of enrollment in SNAP E&T. The difference between these tests is that the first focuses on a proximal outcome and the second on a distal outcome. Two other analyses test the effectiveness of offering an assessment or a warm handoff to career center staff on the rate of SNAP E&T enrollment. These are not testing the same component of the intervention as the first test. As a result, a multiple comparisons adjustment is not needed.

There are subgroup analyses planned for each site which will replicate the main estimation models for policy-relevant subgroups such as those based on geography, age, gender, or education. FNS views these as exploratory analyses, rather than confirmatory ones. This will be noted in the evaluation reports when presenting results so the readers know why multiple comparison adjustments were not used when conducting subgroup analysis. These analyses will provide policy-relevant but less rigorous evidence about the intervention effects, which is valuable for continuous program improvement and identifying potential hypotheses for more rigorous examination in the future.

In response to NASS’s comment, we have noted FNS’ decision regarding multiple comparison adjustment in Section B2.2.
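For context on the kind of adjustment NASS raised (and which FNS has opted not to apply to the exploratory subgroup analyses), a minimal sketch of the Benjamini-Hochberg step-up procedure is shown below; the p-values are hypothetical and the procedure is illustrative only, not part of FNS's planned analysis.

def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns booleans marking
    which hypotheses are rejected at false discovery rate q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ranks by p-value
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:   # largest rank passing the BH bound
            threshold_rank = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= threshold_rank:
            reject[i] = True
    return reject

# Hypothetical subgroup p-values for illustration
print(benjamini_hochberg([0.003, 0.04, 0.20, 0.012]))  # [True, False, False, True]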

Sampling Weights

It is not clear what the probabilities of selection are for the sampling elements at each of the eight sites, nor how weights will be adjusted for nonresponse. Hence, weights were not evaluated.

<Response>FNS has added text to Section B2.2 describing the weight construction process for analyses based on administrative data and the nonresponse analysis and weight construction processes for analyses based on survey data.


Study Designs

1. The document would have been greatly enhanced by addressing each site individually. It was unclear why all evaluations were presented as a whole. There was no indication that any analysis would be conducted across all sites. Many details were obscured by synthesizing across sites rather than fully developing the analysis at each site and explaining how it addresses that site's mission or goal.

<Response>The design and evaluation plans for each of the eight sites were presented as a whole because this is a single evaluation study of a range of program improvement solutions implemented in different SNAP E&T agencies. There will not be a common or cross-site summary report given the variation in challenges and proposed interventions across sites. The evaluation will result in a short report for each site describing its challenge, intervention, implementation experience, cost, effectiveness, and scalability. All sampling plans, analysis methods, and information collection strategies were described for each site in the corresponding section of the OMB package. In response to NASS’s comment, FNS substantially revised Appendix B1 to synthesize these plans for each site, allowing the reader to learn about these components in a single place.



2. Mathematical formulas for all calculations, models, and analyses should be fully disclosed. This greatly enhances the ability to review and evaluate the program's conduct and estimation.

<Response>FNS has added formulas for all MDI calculations to Section B2 (following Tables B2.1 and B2.2) as well as to all tables in Appendix U (Tables B2.1a through B2.1h). FNS has added models for the nonresponse analysis and weight construction in Section B2 as well.
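The exact MDI formulas appear in Section B2 and Appendix U. As a hedged illustration, the sketch below uses the conventional form MDI = (z_{1-alpha/2} + z_{1-beta}) * SE for a binary outcome; the base rate and group sizes are hypothetical, and FNS's formulas may include additional terms (for example, for covariate adjustment).

import math
from scipy.stats import norm

def mdi_binary(p, n_treatment, n_control, alpha=0.05, power=0.80):
    """Minimum detectable impact (in proportion units) for a binary
    outcome with base rate p, using MDI = (z_{1-a/2} + z_{1-b}) * SE."""
    factor = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    se = math.sqrt(p * (1 - p) * (1 / n_treatment + 1 / n_control))
    return factor * se

# Hypothetical base rate and group sizes for illustration
print(round(mdi_binary(0.15, 279, 279), 3))   # about an 8.5 percentage-point MDI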



3. Many sites appear to have selected subgroups for testing. For example, Massachusetts has a population of 1 million, but the subgroup in the test is 30,000. For the experiments, there may need to be blocks to ensure subgroups are evaluated and results are not inadvertently skewed. For the surveys, there may need to be weights applied. This process is mentioned but not well defined, which makes it difficult to do a proper assessment.

<Response>FNS has clarified that the 1 million participants mentioned in Section B1 consists of all adult SNAP participants eligible for the intervention across the state, but the intervention will only be offered in several geographic areas in the state. The findings from each site's analyses are not meant to be representative of the full target population in the site or state. Sites selected the number of people to participate in the intervention based on service providers' capacity constraints and the goal of producing acceptable MDIs. Geography and other characteristics will be used as strata when randomly assigning SNAP participants to research groups to ensure that the design does not lead to skewness based on geographic area. The random assignment blocks are described in more detail in Section B2 on page 19.



FNS has added text to describe in greater detail the weight construction for analyses based on administrative data and those based on survey data in Section B2.
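As an illustration of the blocked random assignment described above, the sketch below assigns participants to research groups separately within each geographic block, so every block contains both groups; the IDs, block names, and 50/50 allocation are hypothetical.

import random

def assign_within_blocks(participants, blocks, p_treatment=0.5, seed=20230701):
    """Randomly assign participants to treatment/control separately within
    each block (e.g., geographic area) so every block has both groups."""
    rng = random.Random(seed)
    assignment = {}
    for block in sorted(set(blocks)):
        members = [p for p, b in zip(participants, blocks) if b == block]
        rng.shuffle(members)
        n_treat = round(len(members) * p_treatment)
        for i, person in enumerate(members):
            assignment[person] = "treatment" if i < n_treat else "control"
    return assignment

# Hypothetical IDs and geographic blocks for illustration
ids = [f"id{i}" for i in range(10)]
geo = ["areaA"] * 6 + ["areaB"] * 4
print(assign_within_blocks(ids, geo))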



4. Some designs are clearly hierarchical and are presented as such, but the suggested analysis does not appropriately address the design and the error associated with nested treatments.



<Response>In Massachusetts and Rhode Island, there are several stages of random assignment, but these are not nested designs. They are sequential designs where each stage of random assignment can be considered a new stratum. Nested designs typically start with randomly assigning a larger unit such as a program location to a treatment condition and then randomly assigning participants to research groups within each program location, with the goal of having the impact estimates for participants represent the effectiveness of the intervention in all program locations. In contrast, each stage of random assignment in FNS’ evaluation of Massachusetts and Rhode Island’s interventions consists of randomly assigning participants to research groups. As a result, each stage can be considered strata or subgroups of participants. The tests for the second and third stage of random assignment in Massachusetts and the second stage of random assignment in Rhode Island are not meant to generalize to all individuals who were initially randomly assigned in the first stage; the results of those tests represent the effectiveness of that specific component of the intervention among participants who were eligible to receive that component.



For example, SNAP participants in Massachusetts will be sent a text message inviting them to learn about E&T services. Individuals who affirm they are interested in learning more will receive an online, self-administered screening form to assess their work readiness. Based on their level of work readiness, individuals will receive a one-on-one assessment by an E&T worker to assess participant fit and readiness for a referral to a local career center. Those deemed to be work ready in the one-on-one assessment will receive a referral and warm handoff to a career center. In this case, there will be two treatment groups in the first stage that are differentiated only by the behaviorally informed content used in the initial text message, resulting in three research groups (treatment group 1, treatment group 2, and control group). Individuals will be randomly assigned to one of those two groups or to a control group that has access to a website to independently learn more about E&T services available at the local career center. Within each treatment group, individuals who pass the online, self-administered screener will be randomly assigned to receive a one-on-one assessment by an E&T worker. Individuals deemed to be work-ready in the one-on-one assessment by the E&T worker will then be randomly assigned to receive a warm handoff to the career center.

As described in Appendix U Table B2.1.e, FNS will evaluate the impact of the assessment in the second stage by comparing (1) the combination of individuals in the assessment treatment group who were deemed not to be work ready and those in the assessment treatment group who were deemed to be work ready but were assigned to the career center control group and (2) the assessment control group, to estimate the effect of the assessment on the percentage of individuals who enroll in SNAP E&T. All of these individuals will originate from the first-stage treatment group T1. These impacts will be representative not of all SNAP participants in the target population in Massachusetts, but only those who responded affirmatively to the screener in the first stage that they were interested in receiving more information about SNAP E&T services.



In response to NASS’ question, we have added text to Section B2 describing that the designs in Massachusetts and Rhode Island are not nested designs.

5. Some interventions and endpoint evaluations are directly related. However, other treatments are evaluated on program enrollment or job placement even though the intervention does not directly impact that endpoint. If the treatment does not adequately address and directly impact the end goal, the treatment could be deemed inadequate when it performed very well, or vice versa.

<Response>Where possible, FNS will evaluate the effect of the intervention on proximal, process-based outcomes such as whether a SNAP participant responded to a text message to affirm they are interested in learning more about SNAP E&T or whether a SNAP participant showed up to an assessment appointment. In all sites, however, FNS will evaluate the effect of the intervention on distal outcomes focused on SNAP E&T enrollment and engagement. FNS will be able to identify whether there are impacts on the proximal outcomes but not the distal ones and vice versa. The data FNS will collect from sites will allow for both sets of tests, where possible. In several sites (MN-rural, MN-Hennepin, Kansas, Colorado, and Connecticut), there are no proximal outcomes; these designs are created solely to measure impacts on the distal outcome of increased SNAP E&T enrollment and engagement in activities. In Massachusetts, DC, and Rhode Island, however, FNS will be able to measure whether participants responded to the intervention by affirming interest in learning about SNAP E&T or whether they expressed interest by presenting themselves at specific service providers. In response to NASS’ comment, FNS added a paragraph to section B.1 explaining how effects on distal and proximal outcomes will be measured.

6. The survey analysis also had sample size calculations. However, the population size was unknown, and the purpose was often difficult to grasp. Had the evaluation been presented on a site-by-site basis, the experimental design and survey analysis could have been presented as complementing each other to ensure that the treatment ultimately impacted the primary endpoint; as written, the approach was difficult to follow.


<Response>FNS provided the purpose of the survey on page 13 when the survey data was introduced, following the description of the administrative data. “In four sites, participant survey data will supplement the administrative data. The participant survey will collect information on receipt of SNAP E&T recruitment and outreach materials, assessments, case management, and referral services. It also will assess barriers to engaging with services and seeking employment, program satisfaction, and reasons for engagement decisions (for those who engaged in the E&T programs, and those who either never engaged or disengaged). This information will be used as outcomes in the evaluation’s impact analysis that cannot be obtained in the SNAP E&T administrative data. It also will be used to describe participants’ experiences in the intervention to provide context for those analyses.” The population of the surveys will consist of all individuals in the target population of the intervention. This was discussed on page 14 to describe how the samples were selected in Colorado, Massachusetts, and Rhode Island (and why a sample was not used in Connecticut): “In the four sites in which the survey will be administered, the survey will include either all study participants (Connecticut) or a stratified random sample of study participants (Colorado, Massachusetts, and Rhode Island). In Connecticut, because of the limited number of SNAP participants available to participate in the intervention, the participant survey sample will include all SNAP E&T participants enrolled in Connecticut’s community colleges to ensure sufficient statistical power with which to identify intervention effects. In Colorado, Massachusetts, and Rhode Island—which will have much larger numbers of intervention participants—the study team will use stratified random sampling to select individuals to participate in the survey. For these three sites, the study team determined the number of individuals to include in each survey sample using statistical power analyses based on comparing the analyses’ minimal detectable impacts with ranges of impacts of related interventions in the field (see Section B2).”



FNS has added text in the paragraph preceding Table B1.2 that describes the survey population in each state, and revised Appendix B1 to summarize this information for all sites that will collect survey data.


7. For the surveys, non-response adjustment was said to be based on propensity scores, which may or may not have addressed the strata. The inputs to the propensity scores were unclear. The weighting may obscure the results.



<Response> FNS has clarified the specification for the non-response weight creation, along with details on the inputs for the propensity score models (section B2.2). Propensity score models will be estimated separately by site and research group, and will include independent variables accounting for study groupings, including geography.
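A minimal sketch of a propensity-score nonresponse adjustment of the kind described in this response: fit a response model on covariates such as geography and study groupings, then divide respondents' base weights by their predicted response propensities. The covariates and data below are hypothetical, and per Section B2.2 the models would be run separately by site and research group.

import numpy as np
from sklearn.linear_model import LogisticRegression

def nonresponse_weights(X, responded, base_weights):
    """Adjust design weights for survey nonresponse: model the probability
    of responding from covariates, then divide each respondent's base
    weight by their predicted response propensity."""
    model = LogisticRegression(max_iter=1000).fit(X, responded)
    propensity = model.predict_proba(X)[:, 1]
    return np.where(responded == 1, base_weights / propensity, 0.0)

# Hypothetical covariates: columns might encode geography and an age group
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0]])
responded = np.array([1, 0, 1, 1, 0, 1])
print(nonresponse_weights(X, responded, np.ones(6)))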


Study Designs only for District of Columbia and Massachusetts


District of Columbia

The selection and weighting of the initial pre- and post-samples are unclear. The target number of participants/sample size for the pre- and post-samples is 375. However, the population is unknown, as are any adjustments that may be needed, using blocks or weighting, to reflect the overall population represented.

<Response>Page 9 in Section B1 states that the District of Columbia intervention will be administered to all SNAP E&T participants in the District. We have clarified this by revising Appendix B1, which describes each site's design and target number of participants. This is a simple pre-post design with all individuals who will be participating in SNAP E&T at any point from July 2023 through December 2023.


1. Are these participants weighted to reflect the general population characteristics (weights might include gender, dependents, age, etc.)? <Response>We have added text to Section B2 to describe the use of weights in the analysis. The intervention component in the District of Columbia site that is randomly assigned will use weights to account for random assignment. This is described in Section B2. The pre-post analyses will not use weights because they are based on a census of all SNAP E&T participants.

2. Are the weights applied before or after the evaluation of continued service clients? It was difficult to understand whether the 375 was the population or a selection, and how characteristics that may impact engagement and employment stability (such as dependents/childcare, transportation, and age) were controlled. <Response>The 375 participants referenced in the text make up the population of all SNAP E&T participants in the District. FNS has clarified this in Appendix B1. The weights will be applied during the analysis phase of the study once the interventions are completed.


For the pre/post analysis, the primary endpoint is listed as engagement with SNAP E&T. However, the definition of engagement was not presented.


1. Does missing one appointment with a case manager constitute a lack of engagement or does missing multiple (2 or 3) appointments? <Response>The outcome measures are based on administrative data in which case managers track the number of months a SNAP participant is active in SNAP E&T. This information has been added to Appendix B1.


i) Perhaps the metric is the percentage of missed appointments/job interviews/other career development steps? Depending on how the term is defined, the association with the endpoint may be indirect. <Response>The measure of engagement that will be used as an outcome is based on SNAP E&T administrative data. There are indicators for whether a SNAP participant is “active” in E&T, which typically reflects that they showed up for at least one appointment or meeting in the month.
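Given that definition, a minimal sketch of how the engagement outcome could be computed from monthly administrative records follows; the file layout, column names, and values are hypothetical.

import pandas as pd

# Hypothetical layout of the SNAP E&T administrative file: one row per
# participant per month, with an "active" flag set by case managers.
records = pd.DataFrame({
    "participant_id": ["a", "a", "a", "b", "b", "b"],
    "month":          ["2023-07", "2023-08", "2023-09"] * 2,
    "active":         [1, 1, 0, 1, 0, 0],
})

# Engagement outcome: number of months each participant was active in E&T
months_active = records.groupby("participant_id")["active"].sum()
print(months_active)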


The randomized experimental design intervention is a text message reminder. Text messages are sent prior to appointments, after assessments, and after missed appointments. The endpoint is attendance of the case management appointment, but it is unclear whether there is one appointment or multiple appointments. The data could be the percent of appointments attended or the binary outcome of attending or not attending the primary case management appointment. <Response>The outcome will measure whether the appointment for which a reminder was sent was attended by the participant.


For all experiments, it mentions that random assignment will be within each site and within blocks defined by geography, administrative office, or provider. Other administrative data will be provided to the team to ensure that the individual was not previously assigned a treatment. Blocks appear to be part of the design. It is less clear whether this is a complete or incomplete randomized block design. <Response>All designs are complete randomized block designs (all treatments will be offered in each block).

Modeling may be an alternative way to effectively evaluate the treatment. In addition to treatment and demographic covariates, the interactions between treatment and demographic variables may be an important aspect of the modeling. The comments regarding the new assessment and case management tools indicated coaching on job flexibility, best fit, and benefits evaluation, in contrast with purely salary-driven decisions. It was not clear whether there was an opportunity to incorporate the data to evaluate the benefits and interactions of these characteristics via additional variables in the models. The sample size for the evaluation is small, so the assessment is limited. <Response>Evaluations using additional variables will be challenging for the District of Columbia site due to its small sample size.
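As a sketch of the kind of model NASS suggests, the code below fits a logit with a treatment-by-characteristic interaction. The variable names and data are hypothetical placeholders, and, as the response notes, interaction estimates will be imprecise at a sample size near 375.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file for illustration; all names are assumptions
rng = np.random.default_rng(0)
n = 375
df = pd.DataFrame({
    "attended":       rng.integers(0, 2, n),   # outcome: attended appointment
    "treatment":      rng.integers(0, 2, n),   # text-message reminder arm
    "has_dependents": rng.integers(0, 2, n),   # demographic characteristic
})

# Main effects plus a treatment-by-characteristic interaction term
model = smf.logit("attended ~ treatment * has_dependents", data=df).fit(disp=0)
print(model.summary())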


Massachusetts

It appears that counties or local offices have been selected to participate in the study, so a selection from the population has been conducted prior to starting the evaluation. It is unclear how the selection took place and whether the results are intended to reflect the potential outcome if interventions were to be applied across the state. This might be clarified, along with any weighting or blocks applied in the analysis, if applicable. <Response>This information has been clarified in Table B1.1. MA DTA purposively selected the five geographic areas based on the locations of DTA offices and career centers.


The analysis appears to be in stages. Ultimately, they intend on increasing enrollment and targeting enrollment to correct candidates. However, it is hierarchical, but it does not appear to be analyzed as a hierarchical design. If the study design is hierarchical the analysis must be hierarchical to properly capture the error terms. <Response>This is a sequential design, not a nested design. We describe this in more detail in the response to Question #4 above. Each stage of random assignment can be considered strata or subgroups of participants; those findings are not meant to be representative of the SNAP participants randomly assigned at the first stage of random assignment. FNS has added text to Section B2.1 to describe this further.

In the first stage, the analysis appears to evaluate the difference between the text message with content A and the text message with content B. In addition, the effect of the outreach message versus no message is assessed. These are both assessed at the first stage of randomization on the same group. The endpoints are different. The first is the direct endpoint of completing the screener assessment. The second assessment evaluates enrollment. The endpoint captures that the control group might enroll without the screener, but it is not the direct call to action from the message. While the message may contribute to eventual enrollment, the endpoint measurement is indirect and extraneous sources of variation may prevent evaluation. <Response>The evaluation will assess whether the types of outreach messages yield higher percentages of SNAP participants who express interest in learning more about SNAP E&T services, as well as higher rates of enrollment in the SNAP E&T program. The comparisons described in Appendix U, Table B2.1.e show that individuals who complete the assessment or receive a warm handoff to the career center are not included in the evaluation of the outreach message on SNAP E&T enrollment. Only the control groups for later stages of random assignment are included in that assessment specifically for the reason the reviewer has raised in this comment—to avoid extraneous sources of variation when evaluating the effect of the outreach message on E&T enrollment. For example, Appendix U, Table B2.1.e states that the estimator will compare “(1) the combination of individuals who do not pass the survey screener and those who pass the screener and are placed in the control group for the assessment and (2) the text message control group to estimate the effect of the outreach message on the percentage of individuals who enroll in SNAP E&T. (All originating from treatment group T1.)”


Following the screener assessment, only participants that meet the criteria for full assessment will be randomized to receive a second treatment – the FEW Assessments. The endpoint evaluated is enrollment in SNAP E&T. Sample sizes would need to reflect the participants that were not ready for the second step of the process. It was unclear what data were used to approximate the screener assessment results. The FEW assessments assisted participants in recognizing their strengths and needed services in preparing for the workforce. Since only participants that completed the screener as workplace ready had the opportunity of being selected for this assessment, the universe to which the results are attributable are restricted to those potential participants that would be screened as potential workforce ready candidates. The same comments could be made of the third and final trial which is the hand-off in the employment agency.<Response>NASS is correct that the findings for later stages of random assignment (namely the assessment and career center handoffs) are not representative of all SNAP participants eligible for the intervention. They are representative only of those participants who affirmed interest in learning about SNAP E&T services and who passed the initial screener. The screener data will be obtained from DTA.

The last 3 components of the hierarchical design use SNAP E&T enrollment as their endpoint. Since 1) the same endpoint is evaluated at each of the 3 stages and 2) the participants for each successive evaluation are a sub-group of the former evaluation participants, the p-value must be adjusted to reflect the multiple evaluations. It was unclear if this adjustment was considered.<Response>There is only a single test of whether the assessment increases enrollment in SNAP E&T (comparing those who received the assessment and those who did not). The same is true for the effect of the warm handoff referral (comparing those who received a warm handoff and those who did not). Because FNS is not performing the same test multiple times, multiple comparison adjustments will not be used in the analysis. This is described in greater detail in response to the NASS comment on multiple comparisons above.


In addition, Appendix U, Table B2.1.e and Table B2.1 seem to address slightly different analyses. It was unclear whether all comparisons were to be conducted or only those provided in the original document, not the Appendix. <Response>We have corrected the values in Table B2.1 to be consistent with Appendix U, Table B2.1.e.


The participant surveys that evaluate interest in program participation and the quality of assessments in assisting the participants have small sample sizes, but appear to be very focused on a single metric of interest. There are many other questions asked on the questionnaires, but the data collected may be of limited use due to the sample size. <Response>Page 13 of Section B1 states: “The participant survey will collect information on receipt of SNAP E&T recruitment and outreach materials, assessments, case management, and referral services. It also will assess barriers to engaging with services and seeking employment, program satisfaction, and reasons for engagement decisions (for those who engaged in the E&T programs, and those who either never engaged or disengaged). This information will be used as outcomes in the evaluation’s impact analysis that cannot be obtained in the SNAP E&T administrative data. It also will be used to describe participants’ experiences in the intervention to provide context for those analyses.” The survey will allow FNS to conduct descriptive analysis of barriers to engaging in SNAP E&T services as well as program satisfaction.


The third participant survey data collection instrument is unclear and does not appear to be in the appendix, so it cannot be evaluated. <Response>Appendix E contains all four participant surveys: E1.1 (Colorado), E2.1 (Massachusetts), E3.1 (Connecticut), and E4.1 (Rhode Island).


For the participant survey, the non-response adjustment factor is based on propensity scores as opposed to explicit non-response adjustment. Likely, the models for the propensity scores are using same or similar data to adjust initial weights, which seems a little unusual. <Response>The weight adjustments will be based on propensity score analysis of nonresponse. These will be based on geographic location and demographic characteristics. This adjustment will not double-count the adjustment made within geographic location to balance numbers of treatment and control group members. We have added details about the weight construction and propensity score process to Section B2.

Data Collection

Data collection methods vary for each of the eight sites. The following only discusses the Massachusetts site data collection plan; some of the following overlaps with the Massachusetts section (above). A text message will be sent to potential SNAP E&T participants inviting them to learn about E&T services. Those who indicate they are interested will be instructed to fill out an online work readiness survey. Based on their level of work readiness, they will receive a one-on-one assessment with an E&T worker and be referred to a local career center. There will be two treatment groups and a control group to evaluate the effectiveness of electronic messaging. The two treatment groups are differentiated only by the behaviorally informed content.


1. It was unclear what is meant by “behaviorally informed content.” <Response>The text messages will be constructed based on best practices in behavioral science for maximizing response. For each intervention that will use behaviorally informed messaging, the types of messages are outlined in Table B1.1. For example: a mere-exposure effect, which sends an initial message to increase the awareness of an offer before sending information about a formal offer, or an endowment effect, which highlights a participant’s ownership of something and may increase the value they place on it and the likelihood of a positive response.

2. It was unclear how sampling elements will be assigned to treatment groups and what treatment they will receive. <Response> Section B2.1 describes the random assignment probabilities at each stage of random assignment. FNS added text to highlight that later stages of random assignment are based on equal probabilities.


Regarding the participant survey, page 14 of section B1 states that a stratified random sample will be used in Colorado, Massachusetts, and Rhode Island. Strata will be based on geography such as county. This information has also been added to the revised Appendix B1 table. In the fourth site (Connecticut), sampling will not be necessary since the study team will attempt to conduct the survey with all individuals enrolled in the intervention.
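A minimal sketch of the stratified random sampling described above, drawing a fixed allocation from each geographic stratum; the frame, county names, and per-stratum allocation below are hypothetical.

import pandas as pd

def stratified_sample(frame, stratum_col, n_per_stratum, seed=42):
    """Select a stratified random sample: draw n_per_stratum[h] participants
    from each stratum h (e.g., county), without replacement."""
    parts = []
    for stratum, group in frame.groupby(stratum_col):
        parts.append(group.sample(n=n_per_stratum[stratum], random_state=seed))
    return pd.concat(parts)

# Hypothetical frame and allocation for illustration
frame = pd.DataFrame({
    "participant_id": range(100),
    "county": ["Hampden"] * 60 + ["Suffolk"] * 40,
})
sample = stratified_sample(frame, "county", {"Hampden": 12, "Suffolk": 8})
print(sample["county"].value_counts())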


Next, there will be one treatment group and one control group to evaluate effectiveness of one-on-one assessment. After this, there will be one treatment group and one control group to evaluate effectiveness of the handoff and referral to career center.

1. It is unclear how the next two treatment and control groups will be determined. <Response>Section B2.1 describes the random assignment probabilities at each stage of random assignment in Massachusetts.


Finally, a participant survey will be given at the end of this process described above.

1. It is unclear if all participants will receive a participant survey. <Response>Section B2.1 describes that the survey in Colorado, Massachusetts, and Rhode Island will be administered to a stratified random sample. This information has also been added to the revised Appendix B1 table.


The participant survey goals are to determine whether the intervention process has enhanced recruiting, outreach, and engagement, causing enrollment to increase.


1. It seems the participant survey does ask specific questions on this topic, and

2. There are several instances of open-ended questions to elicit specific responses to determine whether the survey goals were reached.

<Response>The participant survey goals are described on page 13 in Section B1. “In four sites, participant survey data will supplement the administrative data. The participant survey will collect information on receipt of SNAP E&T recruitment and outreach materials, assessments, case management, and referral services. It also will assess barriers to engaging with services and seeking employment, program satisfaction, and reasons for engagement decisions (for those who engaged in the E&T programs, and those who either never engaged or disengaged). This information will be used as outcomes in the evaluation’s impact analysis that cannot be obtained in the SNAP E&T administrative data. It also will be used to describe participants’ experiences in the intervention to provide context for those analyses.”




