SUPPORTING STATEMENT PART B FOR
Evaluation of the Food Insecurity Nutrition Incentives (FINI)
Office of Research and Analysis
Food and Nutrition Service
US Department of Agriculture
3101 Park Center Drive
Alexandria, VA 22302
Phone: 703-305-2640
Fax: 703-305-2576
E-mail: [email protected]
Section

B Collections of Information Employing Statistical Methods
B.1 Respondent Universe and Sampling Methods
  Respondent Universe
  Sampling Design for the SNAP Participant Survey
  Sampling Design for Key Informant Interviews with SNAP Participants
  Sampling Design for Key Informant Interviews with Grantee Administrators
  Sampling Design for the Core Program Data
  Sampling Design for the Outlet Survey
B.2 Procedures for the Collection of Information
  Statistical Methodology for SNAP Participant Survey Sample Selection
  Methodology for SNAP Participant Key Informant Interview Sample Selection
  Methodology for Program Administrator Key Informant Interview Sample Selection
  Methodology for Core Program Data Sample Selection
  Methodology for Outlet Survey Sample Selection
  Degree of Accuracy in the SNAP Participant Survey
  Minimum Detectable Difference
  Estimation Procedures for the SNAP Participant Survey
  Sample Weights
  Sampling Error Estimation
  Unusual Problems Requiring Specialized Sampling Procedures
  Any Use of Periodic (Less Frequent Than Annual) Data Collection Cycles to Reduce Burden
B.3 Methods to Maximize Response Rates and to Deal With Issues of Nonresponse
  Nonresponse Bias Analysis
B.4 Test of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Tables

B2.1 Sample size of SNAP participants for an intervention cluster and a comparison site
B2.2 Minimum detectable difference (MDD) for one-tailed test with alpha = .05, power = .80
B2.3 Minimum detectable difference (MDD) between groups of clusters and the comparison group for individual intake, one-tailed test with alpha = .05, power = .80
B2.4 Minimum detectable difference (MDD) between groups of clusters and the control group for food insecurity status, one-tailed test with alpha = .05, power = .80
B2.5 Minimum detectable difference (MDD) between groups of clusters and the control group for selected secondary outcome prevalences (P), one-tailed test with alpha = .05, power = .80
B.1 Respondent Universe and Sampling Methods

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, state and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
Respondent Universe

Four types of respondent universes are specified for the proposed evaluation of the Food Insecurity Nutrition Incentives (FINI) Grant Program:
The first group of respondents includes State/local government agencies (i.e., SNAP administrative staff). We will request SNAP administrative files from States that host FINI outlets; the information contained in these files will be used to develop a sampling frame of SNAP participants (the second respondent group, described below). To reduce the burden on State agencies, we will review the number of FINI outlets in all States and identify States with more than 5 FINI outlets.1 For purposes of estimating burden, we have assumed that SNAP administrative data will be obtained from 51 State/local agencies. All SNAP administrative files will be obtained using secure methods, such as a secure FTP site or encrypted email.
The second group of respondents includes individuals/households, i.e., Supplemental Nutrition Assistance Program (SNAP) participants assigned to the intervention and comparison groups. Survey data collected from SNAP participants will be used to support the outcome evaluation. We will work with the local/State SNAP offices to obtain SNAP case files and select participants from the intervention and comparison areas. To gather data prior to incentive program implementation, we will identify the SNAP participants several months before the pre-intervention SNAP Participant Survey (pre-SPS) data collection. The SPS target sample sizes are based on the sample needed to support minimum detectable differences (MDDs) between subgroup estimates. Key informant interviews will also be conducted with a small number of SNAP participants who complete the post-SPS and are willing to participate in a follow-up interview.
The third group of respondents includes not-for-profit businesses, i.e., Food Insecurity Nutrition Incentives (FINI) grantees. Program administrators for all large-scale and multiyear FINI grantees will complete the outlet characteristics form, which will be used to identify outlet locations and to sample SNAP households in the treatment clusters. These grantees will also participate in key informant interviews and provide contextual and process data for the programs included in the evaluation. FINI grantees will also provide annual and quarterly Core Program Data about their program operations and performance. The Core Program Data collection is the minimum needed to fulfill the congressional requirement of using “rigorous methodologies capable of producing scientifically valid information regarding the effectiveness of the project.”
The fourth group of respondents includes not-for-profit and for-profit businesses, i.e., FINI retail operators. The quarterly Core Program Data files will be used to identify the FINI retail operators. All FINI retail operators will complete a survey about their experience implementing FINI at their outlet (the Outlet Survey). Data collected from FINI retail operators are the minimum needed to fulfill the congressional requirement of using “rigorous methodologies capable of producing scientifically valid information regarding the effectiveness of the project.”
Sampling Design for the SNAP Participant Survey

This evaluation will use a quasi-experimental design with clustered intervention groups and matched comparison groups. In total, there will be six clusters: four intervention clusters and two comparison clusters. Based on a review of the 2015 grantee programs and consultation with U.S. Department of Agriculture (USDA) staff, FINI outlets will be assigned to one of four intervention clusters defined by outlet type and size of the incentive match. The outlet types to be included are farmers markets and farm stands, combined into one intervention group, and grocery stores as a second intervention group.2 These two intervention groups will be subdivided into outlets that offer dollar-for-dollar incentive matches and those that offer less than dollar-for-dollar matches.3 The resulting four intervention clusters are (1) farmers markets and farm stands with a dollar-for-dollar match rate, (2) farmers markets and farm stands with less than a dollar-for-dollar match rate, (3) grocery stores with a dollar-for-dollar match rate, and (4) grocery stores with less than a dollar-for-dollar match rate.
We will construct two matched comparison groups, one consisting of farmers markets and farm stands to serve as the comparison group for clusters 1 and 2, and the other consisting of grocery stores to serve as the comparison group for clusters 3 and 4.
Separate sampling frames will be created for the four intervention and two comparison clusters.
Four data sources will be employed as building blocks for creating the SNAP participant sampling frames:
(1) SNAP administrative data from the State agencies, which includes SNAP household records with contact information, SNAP status and benefit, and basic demographic information;
(2) Anti-Fraud Locator Using Electronic Benefit Transfer (EBT) Retailer Transactions data (ALERT), maintained by FNS, which includes the SNAP participants’ transaction records that can be matched to the SNAP participant records by EBT card number, and to the outlet records by FNS number;
(3) Store Tracking and Redemption System data (STARS), maintained by FNS, which includes a list of SNAP authorized farmers markets and grocers with store type, status, characteristics, and geographic information that can be matched to the outlet records by FNS number; and
(4) FINI outlet characteristics (e.g., retailer type, incentive structure, operating schedule) obtained from the FINI grantees.
By linking these data sources and using geocoding, we will develop a list of SNAP participants living within each outlet’s catchment area (e.g., a 2-mile radius for urban outlets and an 8-mile radius for rural outlets) for each of the intervention and comparison clusters. Then, we will create a frame of SNAP participants by pooling the lists for all catchment areas in each cluster or group. Thus, we will develop six SNAP participant frames, four for the intervention clusters and two for the comparison groups.
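To illustrate the catchment-area rule, the sketch below applies a simple great-circle distance filter to geocoded participant and outlet records. This is a minimal illustration only: the field names, coordinates, and data layout are hypothetical, and the study's actual geoprocessing may use different tools.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two geocoded points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))  # 3958.8 = mean Earth radius in miles

def in_catchment(participant, outlet):
    """Apply the 2-mile (urban) / 8-mile (rural) catchment rule."""
    radius = 2.0 if outlet["urban"] else 8.0
    return miles_between(participant["lat"], participant["lon"],
                         outlet["lat"], outlet["lon"]) <= radius

# Hypothetical geocoded records for one SNAP participant and one FINI outlet
participant = {"id": "P001", "lat": 38.805, "lon": -77.047}
outlet = {"fns_number": "0000001", "lat": 38.820, "lon": -77.050, "urban": True}

if in_catchment(participant, outlet):
    print("Participant P001 falls inside the outlet's catchment area")
```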
Before selecting the sample, we will sort the SNAP participant records on each frame by State, urbanicity, and outlet ID; from the sorted frame, a SNAP participant sample will then be selected using an equal-probability systematic sampling method.
Some catchment areas may overlap, especially in densely populated areas. In such instances, a SNAP participant may appear in more than one sampling frame. To address this issue, we will investigate overlap in outlet locations and, where overlap is identified, use random selection to retain the SNAP participant in only one of the overlapping catchment areas.
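The de-duplication and systematic selection steps described above can be sketched as follows. The record layout, outlet IDs, and frame sizes are hypothetical; the sketch only illustrates the technique.

```python
import random

def deduplicate(frame):
    """Keep each SNAP participant in only one randomly chosen catchment area."""
    by_id = {}
    for rec in frame:
        by_id.setdefault(rec["participant_id"], []).append(rec)
    return [random.choice(recs) for recs in by_id.values()]

def systematic_sample(frame, n):
    """Equal-probability systematic sample of size n from a sorted frame."""
    interval = len(frame) / n
    start = random.uniform(0, interval)   # random start within the first interval
    return [frame[min(int(start + i * interval), len(frame) - 1)] for i in range(n)]

# Hypothetical catchment lists for two nearby outlets with overlapping participants
catchment_a = [{"participant_id": f"P{i:04d}", "state": "VA", "urban": True,
                "outlet_id": "O1"} for i in range(0, 3000)]
catchment_b = [{"participant_id": f"P{i:04d}", "state": "VA", "urban": True,
                "outlet_id": "O2"} for i in range(2000, 5000)]

frame = deduplicate(catchment_a + catchment_b)        # 5,000 unique participants
frame.sort(key=lambda r: (r["state"], r["urban"], r["outlet_id"]))
sample = systematic_sample(frame, 1265)               # pre-SPS sample size per cluster
print(len(sample))                                    # 1265
```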
Based on the findings from the Healthy Incentives Pilot (HIP) evaluation,4 we estimate that the minimum sample size required to detect a 1/4 cup difference in daily fruit and vegetable intake is 385 respondents per group (Table B2.1). Because each intervention cluster will be compared with a comparison group, the total number of cases (SNAP participants) required for analysis at the end of the second cycle of data collection is 385 x 6 = 2,310, ensuring that a 1/4 cup difference in targeted fruit and vegetable consumption can be detected between each intervention cluster and its comparison group. Note that this sample size refers to the number of responding individuals who provide usable intake data at both points in time. We will begin with a larger initial sample to account for expected losses due to nonresponse and attrition (e.g., participants who move or leave SNAP after their initial recruitment into the study).
The universe for the intervention group is defined as all SNAP participants in neighborhoods in which the incentive programs operate. Such a definition is both practical and efficient, since lists (sampling frames) of SNAP participants can be constructed from SNAP administrative files maintained by State agencies (obtained using secure file transfer methods such as secure FTP or encrypted email), without the need for expensive listing or other frame construction methods. By not restricting the evaluation population to persons who receive the incentive, the study design also permits an evaluation of the penetration of the program in the target population.
Sampling Design for Key Informant Interviews with SNAP Participants

Using responses to the post-SPS question “Would you be willing to participate in a telephone survey?,” we will select 60 SNAP participants who agreed to a telephone interview. We will schedule interviews with 50 SNAP participants to ensure that we complete 40 interviews.5 The interviews will be allocated as 10 per treatment cluster and will be conducted after the post-SPS. Although 10 interviews per treatment cluster is not a representative sample, we will purposively choose respondent types that provide the best context for the results of the outcome evaluation. Answers to particular questions in the post-SPS will be used to determine eligible candidates for the additional telephone interview. The goal is to interview, in each cluster, approximately five participants who perceived “positive” outcomes from program participation and five who perceived “negative” outcomes.6 The interviews are expected to take place in winter 2018.
Sampling Design for Key Informant Interviews with Grantee Administrators

We will contact all grantees to obtain basic information about their outlets, such as type (farmers market/farm stand, grocery store, etc.), FINI operating structure (incentive value; year-round/seasonal), and the months the FINI incentive will be offered in 2017. This information will be used to identify FINI outlets eligible for sampling. We will also interview a program administrator from each FINI grantee. The program administrator interviews will take place twice in the fall, once in grant Year 2 and once in grant Year 3. Any grantee whose grant ends before the Year 3 round of interviews will have only the Year 2 interview. Among the 2015 cycle of grantees, four have grants that end before the later interview. The total number of grantee interviews is estimated to be up to 70: 35 grantees across the two grant cycles, each interviewed twice.
Sampling Design for the Core Program Data

All FINI grantees are required to collect Outlet Quarterly Core Program Data (for all outlets operating FINI) and Grantee Annual Core Program Data.
Sampling Design for the Outlet Survey

The sample for the Outlet Survey will include all outlets operating FINI at the time of data collection (Fall 2016 for 2015 grantees and Fall 2017 for 2016 grantees). Westat will extract the outlet name and contact information from the grantees’ Core Program Data and generate the universe (sampling frame) of participating outlets.
B.2 Procedures for the Collection of Information

Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
Statistical Methodology for SNAP Participant Survey Sample Selection

The SPS sample for the outcome evaluation will be selected prior to the start of the incentive program at participating outlets. The sampled SNAP households will be invited to provide pre-intervention data on food purchase behavior, food security, fruit and vegetable intake, and household sociodemographics. The post-intervention survey will also include questions on experience with FINI. As indicated above, respondents will be selected from sampling frames constructed from SNAP administrative files where applicable. Eligible participants will be restricted to those who have been participating in SNAP for at least 3 months, to ensure that households are past the initial learning curve on how to manage their benefit. There is also some evidence that being on SNAP for less than 3 months has only a marginal impact on food insecurity and diet quality.7 Including these cases could exaggerate outcome estimates if some of the gains achieved are due to SNAP alone. Sampled households that complete the pre-SPS will be re-contacted approximately 6 months after program implementation to complete the post-SPS. A 6-month window between the pre- and post-data collection allows sufficient time for the programs to have an impact, while limiting sample losses due to the cumulative effects of attrition from SNAP over time.
We expect some loss of sampled households between sample selection and the pre-SPS, as well as between the pre-SPS and post-SPS, due to ineligibility and nonresponse. Ineligibility losses occur because the administrative data lag; by the time we use the lists, some SNAP participants will no longer be eligible for benefits or will have moved outside the target area (neighborhood). We have assumed a pre-SPS ineligibility rate of 15 percent (roughly consistent with the experience in the Healthy Incentives Pilot [HIP] study) and a pre-SPS response rate of 80 percent. As indicated above, we expect sample attrition between the pre- and post-SPS due to both nonresponse and ineligibility. In this case, nonresponse applies to sample persons who completed the pre-SPS and were still eligible for, but did not complete, the post-SPS. Ineligibility applies to sample persons who completed the pre-SPS but were determined to be out of scope of the study by the post-SPS (e.g., no longer in SNAP, moved out of the area, deceased, or institutionalized). We assume a 6-month cumulative ineligibility rate of 30 percent, based on recently released SNAP participation data,8 and a response rate of 80 percent for the post-SPS.
Table B2.1 summarizes the sample sizes proposed for the outcome evaluation based on the assumptions described above. The sample sizes shown are for a single intervention cluster and a single comparison cluster. Because of the potential for sample losses between the pre- and post-SPS, we plan to oversample at baseline to ensure an adequate sample for the post-SPS.
Table B2.1. Sample size of SNAP participants for an intervention cluster and a comparison site

| Group | No. in pre samplea | No. in reserve sample | No. in primary sample | Pre-SPS eligibility rate | Pre-SPS response rate | No. of pre-survey respondents | Post-SPS eligibility rate | Post-SPS response rate | No. of post-survey respondents |
| 1 intervention cluster | 1,265 | 253 | 1,012 | 0.85 | 0.80 | 688 | 0.70 | 0.80 | 385 |
| 1 comparison cluster | 1,265 | 253 | 1,012 | 0.85 | 0.80 | 688 | 0.70 | 0.80 | 385 |
| Total sampled (4 intervention clusters and 2 comparison clusters) | 7,590 | 1,518 | 6,072 | 0.85 | 0.80 | 4,128 | 0.70 | 0.80 | 2,310 |

a Includes a reserve sample and an “oversample” to account for ineligibility and nonresponse losses in the pre and post periods.
SNAP participants are considered a hard-to-reach population because locator information from State agencies, such as mailing addresses and telephone numbers, is frequently incomplete, out of date, missing, or incorrect. SNAP participants also are more likely to use temporary (prepaid) telephones, and they may change numbers more often than the population average. Given the difficulties in reaching this population, we will include a reserve sample, to be released if it appears that the respondent count in the second round (post-SPS) will not reach the expected numbers. Thus, the pre-SPS sample will include an “oversample,” of which 80 percent will be allocated to the primary sample and the remaining 20 percent to the reserve sample. If the 30 percent ineligibility rate and the 80 percent response rate hold, we will not need to release the reserve sample for the post-SPS.
With four intervention clusters and two comparison clusters, the numbers corresponding to a single intervention cluster in Table B2.1 would be multiplied by 6 (e.g., 1,265 x 6 = 7,590) to obtain the sample for the pre-SPS. The large number of SNAP households to be sampled for pre-SPS will ensure that the minimum of 385 respondents needed in each intervention cluster and comparison group by the post survey is achieved.
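The per-cluster figures in Table B2.1 follow directly from the assumed rates. The sketch below reproduces the cascade (1,265 sampled, 688 pre-SPS respondents, 385 post-SPS respondents) and the six-cluster totals; small differences from the table reflect rounding.

```python
# Per-cluster sample-size cascade under the stated assumptions
pre_sample = 1265                       # total drawn per cluster (primary + reserve)
reserve = round(pre_sample * 0.20)      # 20 percent held in reserve -> 253
primary = pre_sample - reserve          # released for the pre-SPS -> 1,012

pre_respondents = primary * 0.85 * 0.80           # eligibility x response -> ~688
post_respondents = pre_respondents * 0.70 * 0.80  # 6-month eligibility x response -> ~385

print(round(pre_respondents), round(post_respondents))   # 688 385

# Six clusters (4 intervention + 2 comparison)
print(pre_sample * 6, round(post_respondents) * 6)       # 7590 2310
```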
Methodology for SNAP Participant Key Informant Interview Sample Selection

Interviewers will contact the sampled SNAP participants and invite them to complete the semi-structured interview immediately or to schedule it for a later time. If the interview is not conducted at initial contact, the interviewer will arrange a time convenient to the participant to conduct the interview.
Methodology for Program Administrator Key Informant Interview Sample Selection

Westat will identify the program administrator who will participate in the grantee program administrator interview based on the information provided in the grant applications. Program administrators will be contacted to set up an interview at a time convenient to them.
Methodology for Core Program Data Sample Selection

FINI grantees will report quarterly Core Program Data for all outlets operating FINI. All FINI grantees will also report annual grantee-specific information. A link to the Core Program Data forms will be provided on the FINI website, and grantees will be able to access the forms through a hyperlink. Access to the electronic forms will require an ID and password for authentication. A hard-copy form will also be available on the FINI website for grantees and outlets that prefer to complete the form on paper.
Methodology for Outlet Survey Sample Selection

The Outlet Survey will be mailed to the entire universe of participating outlets. The following procedures will be used to administer the Outlet Survey:
Mail Invitation Letter and the Survey. Westat will mail an invitation letter and the hard-copy survey to all operating outlets. The invitation letter will describe the background of the study and the importance of the outlet’s participation. The mailing will also include a postage-paid return envelope.
Reminder Postcard. Two weeks after the invitation letter mailing, Westat will mail a reminder postcard to nonresponding outlet operators. The postcard will include a toll-free number that operators can call to request another copy of the survey if they have lost or misplaced the copy included in the invitation letter mailing.
Grantee Followup With Outlet Operators to Complete the Survey. Finally, 5 weeks after the invitation letter mailing, grantees will be provided a list of nonresponding outlets and asked to follow up with those outlet operators to complete the survey.
Degree of Accuracy in the SNAP Participant Survey

For purposes of the outcome evaluation, a 1/4 cup change in fruit and vegetable consumption is considered analytically meaningful. For FINI grants, USDA defines “fruits and vegetables” as “any variety of fresh, canned, dried, or frozen whole or cut fruits and vegetables without added sugars, fats, or oils, and salt (i.e., sodium).” The results from the HIP study indicate that the underlying variation in the consumption of fruits and vegetables (excluding fruit juice, white potatoes, and dried legumes9) is relatively low.10 Thus, relatively small differences in fruit and vegetable consumption can be detected with statistical reliability using the relatively modest samples of respondents planned for this study.
Minimum Detectable Difference

Table B2.2 summarizes the MDD for sample sizes ranging from 100 to 600 per intervention/comparison group under certain assumptions about the underlying variation of fruit and vegetable intake. As shown, a sample size of 385 respondents in each of the intervention and comparison groups makes it possible to detect a 1/4 cup difference between the intervention and comparison groups in consumption of fruits and vegetables (excluding fruit juice, white potatoes, and dried legumes). Including all fruits and vegetables, the same sample would be able to detect differences of approximately 1/3 cup or greater.
If it is appropriate to combine (pool) certain intervention clusters for comparison with the comparison group, the resulting power of the statistical tests will be greater. Table B2.3 summarizes the range of MDDs that can be achieved under the proposed sample design for intervention groups consisting of a minimum of two to a maximum of four clusters. As shown in this table, the MDDs can be reduced appreciably depending on the number of intervention clusters that are combined.
Table B2.4 shows the range of MDDs for food insecurity status under various cluster-pooling schemes. In comparison with no pooling (the bottom row), the MDD for the total food insecurity estimate (low or very low food security) can be reduced from 9.4 percentage points to 5.9 percentage points when the four treatment clusters are pooled and compared with the two pooled comparison groups.
To assess the range of MDDs for secondary outcomes, Table B2.5 presents the MDDs for selected prevalences (P) under the same cluster-pooling schemes as the previous two tables. For example, the MDD for a 20 percent prevalence would be 7.5 percentage points with no pooling (bottom row), but it would be reduced to 4.7 percentage points when the four treatment clusters are pooled and compared with the two pooled comparison groups.
Table B2.2. Minimum detectable difference (MDD) for one-tailed test with alpha = .05, power = .80
| Daily individual intake (NHANES) | Totala | Intervention samplea | Comparison samplea | MDDb |
| Fruits and vegetables (excluding fruit juice, white potatoes, and dried legumes)c | 200 | 100 | 100 | 0.49 |
| | 400 | 200 | 200 | 0.35 |
| | 600 | 300 | 300 | 0.28 |
| | 700 | 350 | 350 | 0.26 |
| | 770 | 385 | 385 | 0.25 |
| | 1,000 | 500 | 500 | 0.22 |
| | 1,200 | 600 | 600 | 0.20 |
| All fruits and vegetablesd | 200 | 100 | 100 | 0.71 |
| | 400 | 200 | 200 | 0.50 |
| | 600 | 300 | 300 | 0.41 |
| | 700 | 350 | 350 | 0.38 |
| | 770 | 385 | 385 | 0.36 |
| | 1,000 | 500 | 500 | 0.32 |
| | 1,200 | 600 | 600 | 0.29 |

a Number of responding individuals providing pre- and post-intervention data.
b MDDs are for the difference between the intervention and comparison groups in the change in intakes from pre- to post-intervention. The calculations assume a weighting design effect (DEFF) of 1.1 and a correlation between pre- and post-intervention measurements among the same respondents of 0.50.
c Assumes an underlying standard deviation of approximately 1.3 based on NHANES tabulations.
d Assumes an underlying standard deviation of approximately 1.9 based on NHANES tabulations.
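As a cross-check on Tables B2.2 and B2.3, the MDDs can be approximated from the stated assumptions (one-tailed alpha = .05, power = .80, DEFF = 1.1, pre-post correlation of 0.50, and NHANES-based standard deviations of roughly 1.3 and 1.9 cups). The sketch below is illustrative only; small differences from the published values reflect rounding of the assumed inputs.

```python
from math import sqrt
from statistics import NormalDist

Z_ALPHA = NormalDist().inv_cdf(0.95)   # one-tailed alpha = .05 -> ~1.645
Z_BETA = NormalDist().inv_cdf(0.80)    # power = .80 -> ~0.842
DEFF = 1.1                             # weighting design effect (table footnotes)
RHO = 0.50                             # pre-post correlation for the same respondents

def mdd(sd, n_int, n_comp):
    """Approximate MDD for the intervention-comparison difference in pre-post change."""
    sd_change = sd * sqrt(2 * (1 - RHO))            # SD of an individual change score
    se = sd_change * sqrt(DEFF) * sqrt(1 / n_int + 1 / n_comp)
    return (Z_ALPHA + Z_BETA) * se

# Targeted fruits and vegetables (SD ~1.3 cups), 385 respondents per group
print(round(mdd(1.3, 385, 385), 2))    # ~0.24 cup (Table B2.2 shows 0.25)

# All fruits and vegetables (SD ~1.9 cups), 385 respondents per group
print(round(mdd(1.9, 385, 385), 2))    # ~0.36 cup

# Pooled clusters, as in Table B2.3: 1,540 intervention vs. 770 comparison respondents
print(round(mdd(1.3, 1540, 770), 2))   # ~0.15 cup
```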
Table B2.3. Minimum detectable difference (MDD) between groups of clusters and the comparison group for individual intake, one-tailed test with alpha = .05, power = .80
| Daily individual intake (NHANES) | Number of clusters in combined intervention group | Number of groups in combined comparison group | Combined intervention group respondentsa | Comparison group respondentsa | MDDb |
| Fruits and vegetables (excluding fruit juice, white potatoes, and dried legumes)c | 4 | 2 | 1,540 | 770 | 0.15 |
| | 4 | 1 | 1,540 | 385 | 0.20 |
| | 2 | 1 | 770 | 385 | 0.22 |

a Number of responding individuals providing pre- and post-intervention data.
b MDDs are for the difference between the intervention and comparison groups in the change in intakes from pre- to post-intervention. The calculations assume a weighting design effect (DEFF) of 1.1 and a correlation between pre- and post-intervention measurements among the same respondents of 0.50.
c Assumes an underlying standard deviation of approximately 1.3 based on NHANES tabulations.
Table B2.4. Minimum detectable difference (MDD) between groups of clusters and the control group for food insecurity status, one-tailed test with alpha = .05, power = .80
| Number of clusters in combined intervention group | Number of groups in combined comparison group | Combined intervention group respondentsa | Comparison group respondentsa | All food insecurity (%)b | MDD (%)c | Low food security (%)b | MDD (%)c | Very low food security (%)b | MDD (%)c |
| 4 | 2 | 1,540 | 770 | 53.7 | 5.9 | 27.8 | 5.3 | 25.9 | 5.2 |
| 4 | 1 | 1,540 | 385 | 53.7 | 8.1 | 27.8 | 7.3 | 25.9 | 7.1 |
| 2 | 1 | 770 | 385 | 53.7 | 8.3 | 27.8 | 7.5 | 25.9 | 7.3 |
| 1 | 1 | 385 | 385 | 53.7 | 9.4 | 27.8 | 8.4 | 25.9 | 8.2 |

a Number of responding individuals providing pre- and post-intervention data.
b Food insecurity prevalences are 2014 estimates for households that received SNAP benefits in the previous 12 months.
Source: Alisha Coleman-Jensen, Matthew P. Rabbitt, Christian Gregory, and Anita Singh. Household Food Security in the United States in 2014, ERR-194, U.S. Department of Agriculture, Economic Research Service, September 2015. P. 30.
c MDDs are for the difference between the treatment and control groups in the change in food insecurity percentages from pre- to post-intervention. The calculations assume a weighting design effect (DEFF) of 1.1 and a correlation between pre- and post-intervention measurements among the same respondents of 0.50.
Table B2.5. Minimum detectable difference (MDD) between groups of clusters and the control group for selected secondary outcome prevalences (P), one-tailed test with alpha = .05, power = .80
| Number of clusters in combined intervention group | Number of groups in combined comparison group | Combined intervention group respondentsa | Comparison group respondentsa | MDD (%)b at P = 20% | MDD (%)b at P = 30% | MDD (%)b at P = 40% | MDD (%)b at P = 50% |
| 4 | 2 | 1,540 | 770 | 4.7 | 5.4 | 5.8 | 5.9 |
| 4 | 1 | 1,540 | 385 | 6.5 | 7.5 | 8.0 | 8.1 |
| 2 | 1 | 770 | 385 | 6.7 | 7.7 | 8.2 | 8.4 |
| 1 | 1 | 385 | 385 | 7.5 | 8.6 | 9.2 | 9.4 |

a Number of responding individuals providing pre- and post-intervention data.
b MDDs are for the difference between the treatment and control groups in the change in outcome prevalences from pre- to post-intervention. The calculations assume a weighting design effect (DEFF) of 1.1 and a correlation between pre- and post-intervention measurements among the same respondents of 0.50.
Note that a pre-post correlation of 0.50 is assumed in the calculation of the MDDs in Tables B2.2 and B2.3. This is the correlation between the pre- and post-intervention reports of fruit and vegetable intake for the same set of respondents in the two time periods. The correlation is a feature of the longitudinal design and arises because the pre and post data come from the same individuals. Because this correlation reduces the variation in estimates of change between the pre- and post-intervention periods, assuming too high a correlation for sample planning purposes could severely understate the sample size needed to detect a given level of change. We have therefore used a somewhat conservative correlation of 0.50 in our sample size calculations so as not to overstate the advantages of the longitudinal design. Assuming a larger correlation would lead to smaller sample sizes, but would risk leaving the analytic samples underpowered if the observed correlations turn out to be lower than assumed.
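In standard notation, the role of this correlation can be made explicit. The expressions below are a sketch consistent with the assumptions stated in the table footnotes, not a reproduction of the study's own derivation.

```latex
% Variance of an individual's pre-to-post change score (equal variances assumed)
\mathrm{Var}(\Delta) = \sigma_{\text{pre}}^{2} + \sigma_{\text{post}}^{2}
  - 2\rho\,\sigma_{\text{pre}}\sigma_{\text{post}} = 2\sigma^{2}(1-\rho)

% Minimum detectable difference for the intervention-comparison contrast in change
\mathrm{MDD} = (z_{\alpha} + z_{\beta})
  \sqrt{\mathrm{DEFF}\cdot 2\sigma^{2}(1-\rho)\left(\frac{1}{n_{I}} + \frac{1}{n_{C}}\right)}
```

With rho = 0.50, the variance of the change score equals the cross-sectional variance; a higher assumed correlation would shrink it further and justify smaller samples, which is why the more conservative value is used for planning.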
Estimation Procedures for the SNAP Participant Survey

Sample Weights

To derive unbiased estimates of outcome measures and other descriptive statistics from the pre- and post-SPS data, sample weights will be computed for SNAP participants responding to the survey. Each sampled SNAP participant will be assigned a base weight reflecting the participant’s chance of selection. To compensate for unit nonresponse, adjustment factors will be computed within selected weighting classes, where the adjustment factor is the ratio of the sum of the base weights of both respondents and nonrespondents to the sum of the base weights of respondents alone in each weighting class. These factors will be used to inflate the base weights so that estimates from responding SNAP participants can be used to make appropriate inference to the SNAP participant population. The weighting classes may be defined based on relevant characteristics that are available for both responding and nonresponding units. Analyses of differential nonresponse (e.g., using a software package such as CHAID) will be used to identify classes with important differences in response propensity, a critical criterion in forming useful nonresponse adjustment classes. After the adjustment, the sum of the weights of the respondents will equal the sum of the weights of respondents and nonrespondents before the adjustment.
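The class-level ratio adjustment described above can be illustrated with a short sketch. The weighting-class labels, base weights, and record layout are hypothetical; the production weights will be computed in the study's own weighting system.

```python
from collections import defaultdict

def nonresponse_adjust(records):
    """Inflate base weights of respondents so they also represent nonrespondents
    in the same weighting class (class-level ratio adjustment)."""
    totals = defaultdict(lambda: {"all": 0.0, "resp": 0.0})
    for r in records:
        totals[r["wclass"]]["all"] += r["base_weight"]
        if r["responded"]:
            totals[r["wclass"]]["resp"] += r["base_weight"]

    adjusted = []
    for r in records:
        if r["responded"]:
            factor = totals[r["wclass"]]["all"] / totals[r["wclass"]]["resp"]
            adjusted.append({**r, "final_weight": r["base_weight"] * factor})
    return adjusted

# Hypothetical sampled SNAP participants: base weight = 1 / selection probability
sample = [
    {"id": 1, "wclass": "urban", "base_weight": 50.0, "responded": True},
    {"id": 2, "wclass": "urban", "base_weight": 50.0, "responded": False},
    {"id": 3, "wclass": "rural", "base_weight": 80.0, "responded": True},
    {"id": 4, "wclass": "rural", "base_weight": 80.0, "responded": True},
]

respondents = nonresponse_adjust(sample)
# Weight sums are preserved within each class: urban 100, rural 160
print(sum(r["final_weight"] for r in respondents))   # 260.0
```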
Sampling Error Estimation

When a survey is conducted using a complex sample design, the design must be taken explicitly into account to produce unbiased estimates and their standard errors. The calculation of standard errors is accomplished by dividing the complete sample into a number of subsamples known as replicates. Each replicate sample, when properly weighted, provides an appropriate estimate of the population characteristics of interest, and these replicate estimates are then used to compute the variance of the full-sample estimate. In general, replicate samples are formed to mirror the original sampling of primary sampling units. In this study, replicate weights using the jackknife methodology will be developed as part of the weighting process, and a series of jackknife replicate weights will be attached to each data record for variance estimation purposes. Software packages such as SAS, SUDAAN, Stata, and WesVar can use these replicate weights to take the sample design into account when calculating point estimates, correlations, regression coefficients, and their associated standard errors.
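For illustration, the sketch below computes a simplified delete-one jackknife variance for a weighted mean on hypothetical data. It is not the study's replicate-weight specification; production variance estimation will use the replicate weights and the survey software named above.

```python
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def jackknife_variance(values, weights):
    """Simplified delete-one jackknife variance of a weighted mean (JK1-style)."""
    n = len(values)
    full = weighted_mean(values, weights)
    replicates = []
    for k in range(n):
        v = values[:k] + values[k + 1:]
        w = weights[:k] + weights[k + 1:]
        replicates.append(weighted_mean(v, w))
    return (n - 1) / n * sum((r - full) ** 2 for r in replicates)

# Hypothetical daily cup-equivalents of fruits and vegetables with final weights
cups = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7]
wts = [100.0, 120.0, 90.0, 110.0, 95.0, 105.0]

estimate = weighted_mean(cups, wts)
std_error = jackknife_variance(cups, wts) ** 0.5
print(round(estimate, 2), round(std_error, 2))
```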
Unusual Problems Requiring Specialized Sampling Procedures

No specialized sampling procedures are involved.
Any Use of Periodic (Less Frequent Than Annual) Data Collection Cycles to Reduce Burden

All data collection activities will occur within a 36-month period. The study design requires that respondents be surveyed at multiple times, as described in Section B.1.
B.3 Methods to Maximize Response Rates and to Deal With Issues of Nonresponse

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
By explaining the importance and potential usefulness of the study findings in the introductory letters from FNS, and by implementing a series of follow-up reminders and a final attempt to complete the survey by telephone, we expect to achieve an overall response rate of 80 percent for the SNAP Participant Survey. The following procedures will be used to maximize response rates from SNAP participants:
Mail an introductory letter stating the importance of the study and of the recipient’s participation [Appendixes D, M, S, and AF].
For the SPS, include a $2 cash incentive with the introductory letter and instructions to complete a web survey.
For the SPS, discuss a $20 cash incentive (to be mailed after survey is completed) in the introductory letter.
Mail reminder postcards and letters to SPS participants who have not completed the web survey within one week of the introductory letter, the first survey mailing, or the second survey mailing [Appendixes F through I and P through Q]. Mail a reminder letter to outlet operators who have not completed the survey within 2 weeks [Appendix AH].
Provide two primary data collection modes (web or mail) for participants’ convenience. A third mode (telephone) will be used if the response rates are low.
Make multiple call attempts to a telephone number before treating the case as “unable to contact.”
Provide a toll-free number for respondents to call to verify the study’s legitimacy or to ask other questions about the study.
Implement standardized training for telephone data collectors.
Nonresponse Bias Analysis

Although efforts will be made to achieve as high a response rate as practicable with the available resources, nontrivial nonresponse losses are likely to occur. OMB requires that a nonresponse bias analysis (NRBA) be conducted if the overall response rate falls below 80 percent.11 In that case, an NRBA will be conducted to assess the impact of nonresponse on the survey estimates and the effectiveness of the weight adjustments in dampening potential nonresponse biases. The NRBA will be performed for variables available from the State SNAP administrative data files, because these data are available for both respondents and nonrespondents. The types of analyses to be conducted to evaluate nonresponse will include:
Comparing sociodemographic characteristics and geography of respondents and nonrespondents (the SNAP administrative case files will include information on age, gender, race/ethnicity, and household size);
Modeling response propensity using multivariate analyses (a minimal sketch follows this list);
Evaluating differences found in comparisons between survey respondents and comparable data from extant outside sources;
Comparing cases completed at different levels of data collection effort (e.g., cases completed with limited follow-up compared to those requiring considerable follow-up);
Comparing weighted estimates of characteristics available for both respondents and nonrespondents using unadjusted (base) weights versus nonresponse-adjusted weights; and
Comparing weighted survey estimates using unadjusted (base) weights versus nonresponse-adjusted weights.
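As an illustration of the response-propensity modeling item above, the sketch below fits a logistic regression of a simulated response indicator on frame variables of the kind available for both respondents and nonrespondents. All variable names and data are hypothetical, and the production analysis may instead use CHAID or other software, as noted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical frame variables available for respondents and nonrespondents alike
rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 80, n)
household_size = rng.integers(1, 7, n)
urban = rng.integers(0, 2, n)

# Simulated response indicator (1 = completed the SPS), for illustration only
logit = -0.5 + 0.02 * (age - 45) - 0.1 * (household_size - 3) + 0.3 * urban
responded = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, household_size, urban])
model = LogisticRegression().fit(X, responded)

# Estimated response propensities; large differences across frame subgroups
# would flag potential nonresponse bias and inform weighting-class formation
propensity = model.predict_proba(X)[:, 1]
print(dict(zip(["age", "household_size", "urban"], model.coef_[0].round(3))))
print(round(propensity.mean(), 3))
```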
B.4 Test of Procedures or Methods to be Undertaken

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
The SPS was developed using items from instruments developed for similar evaluations, with the exception of questions about the types of fruits and vegetables usually purchased by SNAP participants and about experience with FINI. The item on types of fruits and vegetables usually purchased was tested with six SNAP participants who shopped at a farmers market in Massachusetts, to assess clarity, intent, and completeness [Appendix AO]. The format and wording of these items were revised based on participants’ input. Specifically, rows were added to capture up to five additional fruits (or vegetables), and a response option of “do not purchase this item” was added.
The annual and quarterly Core Program Data forms specify items that are required from all grantees; the forms were reviewed by eight large-scale FINI project grantees for feasibility of data collection, clarity, and completeness. To improve clarity, the question wording was revised based on grantee recommendations. Specifically, the terminology referring to locations where the incentive is offered was standardized to “outlet”; instructions were added to clarify “check all that apply” or “check only one”; revisions were made to the response options on types of outlets where an incentive was offered and on education activities offered at the outlet; and two new questions were added to obtain the minimum denomination of incentive offered to SNAP participants and the number of days the outlet operated each quarter.
The key informant interview guides for the SNAP participant and grantee interviews were reviewed by Westat’s senior qualitative researchers, who have developed guides for similar projects.
Finally, the Outlet Survey was developed using items from instruments that were cognitively tested and used in the Healthy Incentives Pilot (HIP) evaluation. Additional questions were developed specifically for the FINI evaluation. The Outlet Survey was pretested with four FINI outlets (three farmers markets and one grocery store) operated by two FINI Pilot Project grantees. Participating outlet managers were asked to assess difficulty in understanding the questions and response options, as well as completion time. All outlet managers indicated that the survey was quick and easy to complete, taking them less than 10 minutes. No revisions were required to the items or response options on the Outlet Survey [Appendix AP].
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
The sampling plans were reviewed by Hongsheng Hao, Laurie May, and Tracy Vericker at Westat. In addition, Sarah Goodale of the National Agricultural Statistics Service (NASS) reviewed this supporting statement, which was revised based on comments from the Westat team and NASS. All data collection and analysis will be conducted by Westat.
| Name | Affiliation | Telephone number |
| Laurie May | Principal Investigator, Westat | 301-517-4076 |
| Tracy Vericker | Project Director, Westat | 301-251-4242 |
| Hongsheng Hao | Senior Statistician, Westat | 301-738-3540 |
| Thomas Bosworth | Senior Study Director, Westat | 301-610-5542 |
1 We will not set a minimum threshold for number of outlets for grantees that operate FINI in only one State. This will ensure that all grantees have the potential to be represented in the national evaluation.
2 Based on a review of Cycle 2015 grant applications, farmers markets and farm stands comprise about 70 percent of all FINI outlet types; grocery stores comprise roughly 15 percent. The remaining 15 percent are split between community supported agriculture programs (CSAs) and mobile markets.
3 A small subset of outlets will offer match rates that exceed dollar-for-dollar. All of these outlets, however, will be part of a randomized controlled trial in which SNAP participants (not outlets) will be randomized to receive different levels of incentives. These outlets will not be included in the sampling frame because we will not know a priori what level of incentive sampled SNAP recipients received.
4Bartlett, Susan; Jacob Klerman, Lauren Olsho, et al. (2014). Evaluation of the Healthy Incentives Pilot (HIP): Final Report. Prepared by Abt Associates for the U.S. Department of Agriculture, Food and Nutrition Service, September 2014. OMB control # 0584-0561, Expiration Date: 08/31/2014. Retrieved from http://www.fns.usda.gov/healthy-incentives-pilot-final-evaluation-report.
5 We plan to schedule 50 interviews to allow for cancellations. Once 40 interviews are completed, we will cancel the remaining interviews and provide the SNAP participants whose scheduled interviews were cancelled (excluding no-shows) with the incentive, to thank them for being available for the interview.
6Positive outcomes include increased frequency of shopping, change in types of fruits and vegetables purchased, increased expenditure on fruits and vegetables, increased consumption of fruits and vegetables. Negative outcomes include issues with receiving or redeeming incentives, and reduced purchase of fruits and vegetables because of issues with incentive receipt, redemption, or outlet staff.
7Leung, C.W., Cluggish, S., Villamor, E., Catalano, P.J., Willett, W.C., & Rimm, E.B. (2014). Few changes in food security and dietary intake from short-term participation in the Supplemental Nutrition Assistance Program among low-income Massachusetts adults. Journal of Nutrition Education and Behavior, 46(1), 10.1016/j.jneb.2013.10.001. http://doi.org/10.1016/j.jneb.2013.10.001.
8Mabli, J., Godfrey, T., Wemmerus, N., Leftin, J., & Tordell, S. (2014). Dynamics of Supplemental Nutrition Assistance Program (SNAP): Participation from 2008-2012 Final Report. December 2014.
9 Many farmers markets participating in FINI may exclude these items.
10Bartlett et al., (2014).
11Office of Management and Budget Standards and Guidelines for Statistical Surveys. Retrieved from http://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf.