
SUPPORTING STATEMENT - PART B for


OMB Control Number 0584-NEW

Summer Meals Study

February 5, 2018



Alice Ann Gola

Social Science Research Analyst

Office of Policy Support

USDA, Food and Nutrition Service

3101 Park Center Drive, Room 1014

Alexandria, VA 22302




B.1 Respondent Universe and Sampling Methods


Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


Respondent Universe

The Summer Meals Study—a study of the Summer Food Service Program (SFSP) and Seamless Summer Option (SSO)—requires data collection from four sampling units: (1) States; (2) SFSP/SSO sites; (3) SFSP/SSO sponsors; and (4) participant and nonparticipant households in the SFSP/SSO programs. To reach these sampling units for data collection, we will use a multi-stage sample design. The first stage involves sampling the 50 States and the District of Columbia—we will refer to these as “States” collectively. The second stage involves sampling approved sites and their associated sponsors, and the third stage involves sampling households (participating and non-participating).



Sampling Methods

States


To enroll 20 States1 in the study, with an expected response rate of 83%,2 the study team will use systematic sampling to select an initial sample of 24 States after sorting the State list by FNS region (see Table B1 - 1) and a measure of size (MOS). For selecting a sample of sites for the site survey, the number of sites is the better MOS, whereas for the participant household survey, the number of meals served is better. Instead of using one or the other, we will define a new MOS by combining the two. Selecting a systematic sample of States from the sorted list yields a sample spread evenly over FNS region and the combined MOS. After the State sample has been selected, we will contact the sampled States to obtain State-specific lists of SFSP/SSO sponsors and sites, which will be used to develop a sample frame for the selection of SFSP/SSO sites.
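To make the selection concrete, the following is a minimal sketch of the sorted-list systematic selection. The field names and the min-max normalization used to combine the two size measures are illustrative assumptions; the text above does not specify the exact combination rule.

```python
import random

def combined_mos(n_sites, meals, max_sites, max_meals):
    # Hypothetical combination rule: equal-weight average of
    # min-max normalized site counts and meals served.
    return 0.5 * n_sites / max_sites + 0.5 * meals / max_meals

def select_states(states, n=24, seed=2018):
    """Systematic sample of n States from a list of dicts with keys
    'fns_region', 'n_sites', and 'meals_served' (assumed field names)."""
    max_sites = max(s["n_sites"] for s in states)
    max_meals = max(s["meals_served"] for s in states)
    # Sorting by region, then combined MOS, gives implicit stratification.
    frame = sorted(states, key=lambda s: (
        s["fns_region"],
        combined_mos(s["n_sites"], s["meals_served"], max_sites, max_meals)))
    step = len(frame) / n                 # 51 / 24 = 2.125 for this study
    start = random.Random(seed).uniform(0, step)
    return [frame[int(start + i * step)] for i in range(n)]
```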


Table B1 - 1. FNS Regions

FNS Region | States
Mid-Atlantic Regional Office (MARO) | Delaware, District of Columbia, Maryland, New Jersey, Pennsylvania, Virginia, West Virginia
Midwest Regional Office (MWRO) | Illinois, Indiana, Michigan, Minnesota, Ohio, Wisconsin
Mountain Plains Regional Office (MPRO) | Colorado, Iowa, Kansas, Missouri, Montana, Nebraska, North Dakota, South Dakota, Utah, Wyoming
Northeast Regional Office (NERO) | Connecticut, Maine, Massachusetts, New Hampshire, New York, Rhode Island, Vermont
Southeast Regional Office (SERO) | Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, Tennessee
Southwest Regional Office (SWRO) | Arkansas, Louisiana, New Mexico, Oklahoma, Texas
Western Regional Office (WRO) | Alaska, Arizona, California, Hawaii, Idaho, Nevada, Oregon, Washington


Sites and Sponsors

The aim of this study is to select a nationally representative sample of summer meals sites operating in the summer of 2018. In order to accommodate the data collection field schedule (see Table A.16-A of this Supporting Statement), we will request lists of approved sponsors and sites from States at two points in time. In early 2018, we will request the 2017 list of sponsors and sites, from which the majority of the 2018 site sample will be selected.3 Then, in June 2018, we will request the lists of new sites approved as of June 1st. The list of new sites will be incomplete because it will not contain sites approved by State agencies after June 1, 2018; however, because most States require sponsor applications to be submitted in April or May, we expect that the list will cover at least 75 percent of all new sites. Thus, we estimate the total coverage rate of the sample of continuing and new sites to be 91 percent of the 2018 site population.4

The precision requirements for the overall population and important subgroups are as follows: a 5 percent margin of error5 for overall estimates of a population proportion of 50 percent, and a 10 percent margin of error for important subgroups.6 Assuming a design effect of 1.5, we require a respondent sample of 150 sites for each subgroup and 600 for SFSP sites overall (see Table B2 - 6). SFSP is the primary group of interest, requiring precision of 5 percent, and SSO is a subgroup of interest, requiring precision of 10 percent.
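These sample sizes follow from the usual margin-of-error formula for an estimated proportion, inflated by the design effect. A sketch of the check:

```python
import math

def required_n(moe, p=0.5, deff=1.5, z=1.96):
    """Respondent sample size needed for a given margin of error."""
    return math.ceil(deff * z**2 * p * (1 - p) / moe**2)

print(required_n(0.10))  # 145, rounded up to the 150-site subgroup target
print(required_n(0.05))  # 577, rounded up to the 600-site SFSP target
```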

To select sites, we will first stratify by the sites' continuing status (i.e., continuing or new), then by program type (SFSP vs. SSO). Finally, we will further stratify SFSP sites only, by site type (open vs. closed) and other subgroup-defining variables. The target site sample size is 750: 600 SFSP sites plus 150 SSO sites. Table B1 - 2 shows the allocation of the site sample to the overarching strata defined by continuing status and program type, along with the estimated population sizes.7 Based on the response rate realized for a prior survey of the SFSP program,8 we assume a 75% response rate for both new and continuing sites. However, because we will contact sponsors first to confirm each site's current status, we expect to lose some sites whose sponsors do not cooperate, and we therefore lower the assumed site response rate to 70%. The overall response rate for continuing sites is then 45 percent, which accounts for both the percentage of sites expected to continue operating in 2018 (64 percent) and the assumed response rate of 70 percent.

To reach the target respondent sample size of 525 continuing sites (assuming an overall response rate of 45%), we will need an initial sample of 1,168 continuing sites. To obtain 225 respondent new sites, we will need an initial sample of 321 new sites, assuming a 70 percent response rate. The total field sample size is then 1,489.
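The field sample sizes are simply the target completes divided by the applicable rates; the small differences from the Table B1 - 2 totals reflect rounding at the stratum-cell level. A sketch of the arithmetic:

```python
# Continuing sites: 64% expected to operate in 2018 x 70% response = ~45%.
overall_continuing_rate = 0.64 * 0.70    # 0.448, used as 0.45 in the text
field_continuing = 525 / 0.45            # ~1,167; 1,168 after cell-level rounding
field_new = 225 / 0.70                   # ~321.4; 321 in Table B1 - 2
print(round(field_continuing) + round(field_new))  # 1,488 here; 1,489 in the table
```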

Table B1 - 2. Site Population Size and Sample Allocation by Program Type for Respondent States

Program Type | Pop Size | Target: Cont Site | Target: New Site | Target Total | Response Rate1 | Field: Cont Site | Field: New Site | Field Total
SFSP | 47,286 | 420 | 180 | 600 | 0.45/0.7 | 934 | 257 | 1,191
SSO | 11,574 | 105 | 45 | 150 | 0.45/0.7 | 234 | 64 | 298
Total | 58,860 | 525 | 225 | 750 | 0.45/0.7 | 1,168 | 321 | 1,489

1 The first rate is the eligibility and response combined rate for continuing sites, and the second rate is the response rate for new sites.


We will identify the sponsor sample based on their sponsorship of the sampled sites. Based on 2017 data from the National Hunger Clearinghouse,9 which provides a searchable list of organizations that participate in SFSP and other food assistance programs, we expect about 580 operational sponsors to have a sponsoring relationship with the initial sample of 1,489 sites.10 Assuming an 83 percent response rate,11 we expect 481 respondent sponsors, of which we expect 80 percent to be linked to the respondent site sample of 750, resulting in 385 sponsors for analysis.
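The sponsor yield is simple arithmetic on these rates, sketched below:

```python
sponsors_in_network = 580                          # sponsors linked to the 1,489 sampled sites
respondents = round(sponsors_in_network * 0.83)    # 481 responding sponsors
linked_for_analysis = round(respondents * 0.80)    # 385 linked to the 750 respondent sites
print(respondents, linked_for_analysis)            # 481 385
```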

The overall response rates for sites and sponsors are presented in Table B1 - 3. The site response rate is for eligible sites only. Therefore, it does not include the eligibility rate (64%) of the sample selected from the 2017 frame.

Table B1 - 3. Overall Response Rates for Sponsors and Sites

Unit Type | State Response Rate (%) | Within-State Sample Response Rate (%) | Overall Response Rate (%)
Sponsor | 83.3 | 83.0 | 69.1
Site | 83.3 | 70.0 | 58.3


Households

To select participant and nonparticipant household (caregiver) samples for open sites, we plan to define the catchment area for each site by drawing a circle around the site with a fixed radius (1 mile for urban sites and 5 miles for rural sites).12 Then, we will treat all households with children ages 18 or under as eligible for the study. We will use three sources of data to construct the household frame: (1) SNAP households with children living in the site catchment area; (2) Postal residential addresses in the site catchment area; and (3) On-site (SFSP/SSO) participant households attending the summer meal sites. SNAP household data allows us to identify low-income households with children in the catchment area; it does not capture those who do not participate in SNAP, but who are eligible for SFSP/SSO. The second source—postal data—can provide full coverage, but poses the highest burden on the public and is resource intensive. The third source—participants at SFSP/SSO sites—is most efficient to capture participants, but is limited in the coverage of participants and cannot be used to identify nonparticipants. Using these three sources combined, we seek to minimize burden on the public, while maximizing coverage.
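As an illustration of the catchment-area test used when assembling the household frame, the sketch below assumes geocoded latitude/longitude coordinates and a great-circle (haversine) distance; the 1-mile urban and 5-mile rural radii come from the frame definition above.

```python
import math

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def in_catchment(site_latlon, household_latlon, rural):
    """True if a household address falls inside the site's catchment area."""
    radius = 5.0 if rural else 1.0   # miles, per the frame definition above
    return haversine_miles(*site_latlon, *household_latlon) <= radius
```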

For closed sites, we will also use three sources: (1) enrollment lists, which provide the full list of participants; (2) SNAP lists, deduplicated of children enrolled in the closed site, to select nonparticipants; and (3) postal residential addresses to select nonparticipants. Source (1) does not require screening, whereas sources (2) and (3) require screening for eligible households with children. Closed sites restrict participation more than open sites do (e.g., an age restriction for sports camps or a gender restriction for gender-specific programs). We assume that 50 percent of households with children are eligible for closed sites. However, households sampled from the SNAP and postal frames cannot be screened against site-specific restrictions in advance; eligibility can be determined only after the caregiver survey, using the collected survey data.13 Therefore, the sample size for the nonparticipant survey for closed sites is doubled.

For the selection of households in the catchment areas of continuing sites, we will rely on all three data sources and deduplicate the frame lists before sampling. For households in the catchment areas of new 2018 sites, however, we will deduplicate after sampling rather than before. The reason for this different procedure is the limited time between when new site lists are received (mid-June 2018) and when data collection begins (June – August 2018). Frame-level deduplication is feasible only for continuing sites, because we will receive continuing site lists in early 2018 and will have enough time to complete this time-consuming operation; we will not receive new site lists until mid-June 2018 and will face a much more compressed processing schedule. Deduplication after sampling is less efficient because units in the overlapping frames have multiple chances of selection. Therefore, the new sites will require a larger starting household sample size than the continuing sites.

Household response rates largely depend on the frame type (whether screening is needed14) and the survey type (participant or nonparticipant). Table B1 - 4 presents sampling stage-level and overall response rates by these two types.


Table B1 - 4. Sampling Stage-level and Overall Response Rates of Caregiver Surveys by Frame and Survey Types

Site Group | Frame Type | Survey Type | Sample Size | Target Completes | Household Response Rate | Site Response Rate | State Response Rate | Overall Response Rate
SFSP Open and SSO | Onsite | Participant | 1,833 | 550 | 30.0% | 70.0% | 83.3% | 17.5%
SFSP Open and SSO | SNAP | Participant | 6,823 | 1,100 | 16.1% | 70.0% | 83.3% | 9.4%
SFSP Open and SSO | SNAP | Nonparticipant | 12,695 | 1,650 | 13.0% | 70.0% | 83.3% | 7.6%
SFSP Open and SSO | Postal | Participant | 3,168 | 550 | 17.4% | 70.0% | 83.3% | 10.1%
SFSP Open and SSO | Postal | Nonparticipant | 3,931 | 550 | 14.0% | 70.0% | 83.3% | 8.2%
SFSP Closed and SSO | Enrolled | Participant | 1,600 | 800 | 50.0% | 70.0% | 83.3% | 29.2%
SFSP Closed and SSO | SNAP | Nonparticipant | 7,480 | 1,200 | 16.0% | 70.0% | 83.3% | 9.4%
SFSP Closed and SSO | Postal | Nonparticipant | 3,096 | 400 | 12.9% | 70.0% | 83.3% | 7.5%
Total | All | Participant | 13,424 | 3,000 | 22.3% | 70.0% | 83.3% | 13.0%
Total | All | Nonparticipant | 27,202 | 3,800 | 14.0% | 70.0% | 83.3% | 8.1%
Overall | All | All | 40,626 | 6,800 | 16.7% | 70.0% | 83.3% | 9.8%


The sample sizes given in Table B1 - 4 are the estimated numbers of eligible households from the screener sample (after subsampling, in the case of nonparticipants). All eligible participants who respond to the screener will be asked to complete a survey. For nonparticipants, however, subsampling is necessary because the majority of households in a catchment area are nonparticipants, and too many would otherwise be screened in. Therefore, only a subsample of eligible nonparticipants living in the catchment area of a summer meals site will be invited to complete a survey.

Low response rates for the caregiver surveys are a concern because respondent samples with low response rates are more susceptible to nonresponse bias. This cannot be avoided, however, because we must rely on screening of address-based household samples, and it is very difficult to achieve a high response rate for household screener surveys. After the survey data are collected, we will perform nonresponse adjustment weighting to reduce nonresponse bias and will conduct a nonresponse bias study to assess whether the adjusted sample produces estimates with negligible bias. In this context, we interpret sample “representativeness” as the ability of the sample, with proper weighting, to produce unbiased estimates of the population parameters of interest.

Qualitative Interviews

Current sponsors, site supervisors, and caregivers may indicate a willingness to participate in the qualitative interviews at the end of their respective quantitative survey instruments. From the list of willing sponsors and site supervisors, we will select respondents for the qualitative interviews to ensure that we include respondents from all types of sponsors (i.e., SFA, government, nonprofit) and sites (open/closed, SSO/SFSP, school/other, etc.). We will randomly select 38 sponsor-site dyads within the categories of respondents of interest to complete 19 sponsor-site dyad interviews.

Former sponsors will represent both SFSP and SSO and different site types, identified using 2017 and 2018 site and sponsor lists obtained from State agencies; sponsors that appear on the 2017 list but not on the 2018 list will be considered former sponsors. We will randomly select 20 former sponsors within the categories of respondents of interest to complete 10 interviews.

From the list of interested caregivers, we will select respondents for the qualitative interviews to achieve diversity in the ages of children/teens who participate or are in the household; the types of sites they attend or that are closest to their residence (i.e., the sampled site); and levels of satisfaction with the program based on survey results. We will randomly select 96 caregivers of participants and nonparticipants (divided evenly) within the categories of respondents of interest to complete 48 interviews (divided evenly).

B.2 Procedures for the Collection of Information

Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Stratification and Sample Selection

For the selection of States, we use implicit stratification: the sample is selected by the systematic method from the list of States sorted by FNS region and combined MOS (see the discussion in B.1).

Stratification of sites within respondent States is driven by two factors: (1) the separate sampling of continuing and new sites – we refer to the indicator of whether a site is continuing or new as the site's “continuing status”; and (2) the precision requirements (5 percent margin of error for overall estimates and 10 percent margin of error for important subgroups). Assuming a design effect of 1.5 for the site sample, we will need 150 respondent sites for each subgroup and 600 for SFSP sites overall (see Table B2 - 6).

Stratification by continuing status is automatic because we will develop the sample frames separately and select the samples separately. Within each of the continuing and new site frames, we will first stratify the list of sites compiled from the respondent States by program type (SFSP vs. SSO). The SSO sites as a whole are a separate subgroup from the SFSP sites, and we will allocate a minimum sample size of 105 to the continuing SSO stratum and 45 to the new SSO stratum, 150 altogether, based on the 7:3 split of the site frame between continuing and new sites. We will further stratify SFSP sites to ensure adequate sample sizes for important subgroups, which are defined by site type (open vs. closed), site location (urban vs. rural,15 determined through geocoding of the site address), and school food authority (SFA) status. About 45 percent of sites are under SFAs, so we are not concerned about the sample sizes for the subgroups defined by SFA status: any reasonable allocation will ensure the minimum subgroup sample sizes for both. However, closed sites are much less frequent than open sites, by a one-to-four ratio, and rural sites account for only 15 percent of sites based on the 2017 National Hunger Clearinghouse data, so we need to oversample them (rural sites would need to account for at least 25 percent to reach the minimum sample size of 150 without oversampling). Therefore, we will stratify the SFSP site frame by the two subgroup-defining variables, site type and site location. We depict the entire stratification based on the four-dimensional cross-classes in Table B2 - 1.

Table B2 - 1. Study Stratification Groups

Site Program | Site Type | Site Location | Continuing Status
SFSP | Open | Urban | Continuing or New
SFSP | Open | Rural | Continuing or New
SFSP | Closed | Urban | Continuing or New
SFSP | Closed | Rural | Continuing or New
SSO | NA | NA | Continuing or New

There are eight stratum cells for SFSP sites (open/closed, urban/rural, continuing/new) and two stratum cells for SSO sites. We set the marginal sample sizes of the stratification-defining variables as follows (see Table B2 - 2):

Table B2 - 2. Marginal Respondent Sample Sizes of Stratification-Defining Variables for Sites

Stratification Variable | Site Category | Continuing | New | Total | Total Sample Size
Site Program Type | SFSP | 420 | 180 | 600 | 750
Site Program Type | SSO | 105 | 45 | 150 | 750
SFSP Site Type | Open | 280 | 120 | 400 | 600
SFSP Site Type | Closed | 140 | 60 | 200 | 600
SFSP Site Location | Urban | 315 | 135 | 450 | 600
SFSP Site Location | Rural | 105 | 45 | 150 | 600


For the SSO sites, we will use the allocated sample sizes shown in Table B2 - 2. For SFSP sites, we must determine the sample sizes for the eight SFSP stratum cells. We will use the raking technique,16 starting from the proportional allocation, to determine the cell sample sizes while maintaining the three-dimensional marginal cell sizes shown in Table B2 - 2. The proportional allocation is optimal in the sense that the entire SFSP sample is an equal-probability sample, but it does not guarantee the minimum sample sizes for important subgroups. The raking technique achieves this while keeping the deviation of the sample allocation from the proportional allocation minimal.
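A minimal sketch of the raking (iterative proportional fitting) step follows, using an illustrative two-dimensional case with a hypothetical starting allocation; the study applies the same idea to the three-dimensional SFSP margins in Table B2 - 2.

```python
import numpy as np

def rake(cells, row_targets, col_targets, max_iter=100, tol=1e-6):
    """Iterative proportional fitting: adjust a starting allocation so
    row and column sums match the target marginal sample sizes."""
    cells = cells.astype(float).copy()
    for _ in range(max_iter):
        cells *= (row_targets / cells.sum(axis=1))[:, None]   # fit row margins
        cells *= col_targets / cells.sum(axis=0)              # fit column margins
        if np.allclose(cells.sum(axis=1), row_targets, atol=tol):
            break
    return cells

# Illustrative 2 x 2 case: a hypothetical proportional allocation of the
# 600 SFSP sites by site type (rows: open, closed) and location
# (columns: urban, rural), raked to the Table B2 - 2 margins.
proportional = np.array([[408.0, 72.0], [102.0, 18.0]])
raked = rake(proportional,
             row_targets=np.array([400.0, 200.0]),
             col_targets=np.array([450.0, 150.0]))
print(raked.round())   # cell sizes honoring the 400/200 and 450/150 margins
```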

We will then inflate the allocated sample sizes by the combined eligibility-response rate of 45 percent (64 percent eligibility times 70 percent response) for continuing sites and by the 70 percent response rate for new sites. We will use simple random sampling to select the inflated number of sites from each stratum cell. Table B2 - 3 presents the inflated (field) sample sizes: 1,489 sites in total, of which 1,191 are SFSP sites.

Table B2 - 3. Marginal Field Sample Sizes of Stratification-Defining Variables for Sites

Stratification Variable | Site Category | Continuing | New | Total | Total Sample Size
Site Program Type | SFSP | 934 | 257 | 1,191 | 1,489
Site Program Type | SSO | 234 | 64 | 298 | 1,489
SFSP Site Type | Open | 623 | 171 | 794 | 1,191
SFSP Site Type | Closed | 311 | 86 | 397 | 1,191
SFSP Site Location | Urban | 700 | 193 | 893 | 1,191
SFSP Site Location | Rural | 234 | 64 | 298 | 1,191


We will not select a sponsor sample directly from the sponsor list; instead, we will identify the sample of sponsors associated with the sampled sites – a network sample. We expect 580 operational sponsors to be associated with the site sample, 83 percent of which will respond, resulting in 481 sponsors. Of these responding sponsors, we expect 80 percent to be associated with the 750 respondent sites, for a final sample of 385 respondent sponsors available for linked analysis with the site data.

To select participants and nonparticipants, we will use three frame sources: SNAP data, postal addresses, and onsite participants. After defining the catchment area for each site, we will prepare the three frame sources with deduplication before sampling for continuing sites, but use the multiple frame approach for new sites (deduplication after sampling). Except for the onsite participants, we will select a simple random sample of the designated size for each site, as shown in Table B2 - 4 (continuing sites) and Table B2 - 5 (new sites). These tables present household sample sizes by program type, site type, and frame source.

Based on our experience with postal surveys, we assume a 20 percent response rate for the household screeners. The percentage of households with children is assumed to be 40 percent based on Census data, and the summer meals participation rate of 15 percent is based on an SFSP study (Gordon et al., 2003).17

When we use the multiple frame approach for new sites, units in the overlapping frames have a higher chance of selection than others, which lowers sampling efficiency. Because we must deduplicate the selected samples, the screener samples will shrink. To make up for this loss, we will inflate the samples selected from the SNAP frame by a factor of 1.25, assuming a 20 percent overlap between the selected samples. (The SNAP frame is completely contained in the postal frame, but the overlap between the two samples will be much smaller; we have generously estimated it at 20 percent.) We implement this by reducing the SNAP screener response rates by 20 percent in the sample size calculations, which increases the required sample size by 25 percent.
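A sketch of the per-site yield arithmetic underlying Tables B2 - 4 and B2 - 5, using the SNAP nonparticipant row for new open sites as the worked example; the 0.16 screener rate is the 0.20 rate reduced by 20 percent as just described, and 1.0 stands in for the NA child-eligibility column because SNAP lists are pre-screened for children.

```python
def completes_per_site(initial_n, screen_rr, pct_hh_with_children,
                       status_rate, subsample_rate, survey_rr):
    """Expected completed surveys per site from one frame source."""
    return (initial_n * screen_rr * pct_hh_with_children
            * status_rate * subsample_rate * survey_rr)

# New SFSP open site, SNAP frame, nonparticipants (Table B2 - 5):
per_site = completes_per_site(96, 0.16, 1.0, 0.85, 0.329, 0.7)
print(round(per_site), round(per_site) * 120)   # 3 per site x 120 sites = 360
```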


Table B2 - 4. Sample Size Calculations for the Participant and Non-participant Household Surveys from Continuing Sites

Group | Frame | Initial Sample Size (per site) | Screen RR | % of HHs with Eligible Children | Participation or Nonparticipation Rate | Sub-sampling Rate | Survey RR | Completes per Site | Total Completes | Total Screens3

SFSP Open site (n=280):
Participant | Onsite | 4 | NA | NA | 1 | 1 | 0.3 | 1 | 280 | 0
Participant | SNAP | 77 | 0.25 | NA1 | 0.15 | 1 | 0.7 | 2 | 560 | 21,560
Nonparticipant | SNAP | (same screener) | 0.2 | NA1 | 0.85 | 0.328 | 0.7 | 3 | 840 | (same screener)
Participant | Postal | 96 | 0.25 | 0.4 | 0.15 | 1 | 0.7 | 1 | 280 | 26,880
Nonparticipant | Postal | (same screener) | 0.2 | 0.4 | 0.85 | 0.219 | 0.7 | 1 | 280 | (same screener)

SFSP Closed site (n=140):
Participant | Enrolled | 8 | NA2 | NA | 1 | 1 | 0.5 | 4 | 560 | NA2
Nonparticipant | SNAP | 35 | 0.25 | NA1 | 1 | 1 | 0.7 | 6 | 840 | 4,900
Nonparticipant | Postal | 36 | 0.2 | 0.4 | 1 | 1 | 0.7 | 2 | 280 | 5,040

SSO site (n=105):
Participant | Onsite | 4 | NA | NA1 | 1 | 1 | 0.3 | 1 | 105 | 0
Participant | SNAP | 77 | 0.25 | NA1 | 0.15 | 1 | 0.7 | 2 | 210 | 8,085
Nonparticipant | SNAP | (same screener) | 0.2 | NA1 | 0.85 | 0.328 | 0.7 | 3 | 315 | (same screener)
Participant | Postal | 96 | 0.25 | 0.4 | 0.15 | 1 | 0.7 | 1 | 105 | 10,080
Nonparticipant | Postal | (same screener) | 0.2 | 0.4 | 0.85 | 0.219 | 0.7 | 1 | 105 | (same screener)

Total | | | | | | | | | 4,760 | 76,545

Note: Nonparticipant rows marked “(same screener)” are identified from the same screener sample as the participant row above them.

1 Not applicable because the SNAP households have already been screened for children.

2 No need to screen as eligibility and participation are known from the administrative data.

3 The screener sample size is obtained by multiplying the initial sample size and the site sample size (e.g., 21,560 = 77*280).

Table B2 - 5. Sample Size Calculations for the Participant and Non-participant Household Surveys from New Sites

Group | Frame | Initial Sample Size (per site) | Screen RR3 | % of HHs with Eligible Children | Participation or Nonparticipation Rate | Sub-sampling Rate | Survey RR | Completes per Site | Total Completes | Total Screens4

SFSP Open site (n=120):
Participant | Onsite | 4 | NA | NA | 1 | 1 | 0.3 | 1 | 120 | 0
Participant | SNAP | 96 | 0.2 | NA1 | 0.15 | 1 | 0.7 | 2 | 240 | 11,520
Nonparticipant | SNAP | (same screener) | 0.16 | NA1 | 0.85 | 0.329 | 0.7 | 3 | 360 | (same screener)
Participant | Postal | 96 | 0.25 | 0.4 | 0.15 | 1 | 0.7 | 1 | 120 | 11,520
Nonparticipant | Postal | (same screener) | 0.2 | 0.4 | 0.85 | 0.219 | 0.7 | 1 | 120 | (same screener)

SFSP Closed site (n=60):
Participant | Enrolled | 8 | NA2 | NA | 1 | 1 | 0.5 | 4 | 240 | NA2
Nonparticipant | SNAP | 43 | 0.2 | NA1 | 1 | 1 | 0.7 | 6 | 360 | 2,580
Nonparticipant | Postal | 36 | 0.2 | 0.4 | 1 | 1 | 0.7 | 2 | 120 | 2,160

SSO site (n=45):
Participant | Onsite | 3 | NA | NA1 | 1 | 1 | 0.3 | 1 | 45 | 0
Participant | SNAP | 96 | 0.2 | NA1 | 0.15 | 1 | 0.7 | 2 | 90 | 4,320
Nonparticipant | SNAP | (same screener) | 0.16 | NA1 | 0.85 | 0.329 | 0.7 | 3 | 135 | (same screener)
Participant | Postal | 96 | 0.25 | 0.4 | 0.15 | 1 | 0.7 | 1 | 45 | 4,320
Nonparticipant | Postal | (same screener) | 0.2 | 0.4 | 0.85 | 0.219 | 0.7 | 1 | 45 | (same screener)

Total | | | | | | | | | 2,040 | 36,420

Note: Nonparticipant rows marked “(same screener)” are identified from the same screener sample as the participant row above them.

1 Not applicable because the SNAP households have already been screened for children.

2 No need to screen as eligibility and participation are known from the administrative data.

3 For the SNAP frame, the screener response rate is reduced by 20%.

4 The screener sample size is obtained by multiplying the initial sample size and the site sample size (e.g., 11,520 = 96*120).

Estimation Procedure

To analyze the data from the various surveys, we will first calculate the appropriate base weight, which is the inverse of the selection probability at each stage of sampling, and then adjust the base weight for nonresponse. We will use the weighting class method for the nonresponse adjustment, forming weighting classes from estimated response propensity scores. We will estimate the response propensity scores from available auxiliary variables and then convert them into an appropriate number of weighting classes (generally 5 to 10). The nonresponse-adjusted State weight will then be used to calculate the site weight, which will in turn be adjusted for site nonresponse. We will further adjust the nonresponse-adjusted site weight through post-stratification using the 2018 SFSP and SSO site lists from States, both to correct undercoverage (the site sample frame misses about 9 percent of the site population) and to enhance the efficiency of the analysis. This post-stratified site weight is then applied to the sampled participants and nonparticipants for their base weighting and nonresponse adjustment. In the base weighting of caregivers selected from the SNAP frame for new sites, we will incorporate an adjustment for multiple chances of selection under the multiple frame approach. For variance estimation, we will use the jackknife variance estimator for each survey in a manner that incorporates all weighting adjustments.
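A minimal sketch of the weighting-class nonresponse adjustment follows, assuming a logistic response-propensity model and five propensity classes (the study may use up to ten); `X` stands for the auxiliary variables available for both respondents and nonrespondents, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nonresponse_adjust(base_wt, X, responded, n_classes=5):
    """Weighting-class nonresponse adjustment: respondents in each
    propensity class absorb the weight of that class's nonrespondents."""
    # Estimate response propensities from auxiliary variables.
    p = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
    # Form classes from propensity quantiles.
    edges = np.quantile(p, np.linspace(0, 1, n_classes + 1))
    cls = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, n_classes - 1)
    adj_wt = np.where(responded, base_wt, 0.0)   # nonrespondents get zero weight
    for c in range(n_classes):
        m = cls == c
        total = base_wt[m].sum()                          # full-sample weight in class
        resp = base_wt[m & responded.astype(bool)].sum()  # respondent weight in class
        if resp > 0:
            adj_wt[m] *= total / resp                     # class adjustment factor
    return adj_wt
```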

For the sponsor survey, because sponsors will be sampled through sites, we will use a special method18 to calculate the base weight that accounts for the multiple chances of selection due to the multiplicity of sites.
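Footnote 18 gives the multiplicity adjustment directly; the sketch below assumes a frame count of total sites per sponsor is available.

```python
from collections import defaultdict

def sponsor_weights(sampled_sites, sites_per_sponsor):
    """Multiplicity-adjusted sponsor base weight (per footnote 18):
    sum of the sampled sites' weights for a sponsor, divided by the
    sponsor's total number of sites.

    sampled_sites: list of (sponsor_id, site_weight) pairs;
    sites_per_sponsor: dict of total sites each sponsor operates."""
    sums = defaultdict(float)
    for sponsor_id, site_wt in sampled_sites:
        sums[sponsor_id] += site_wt
    return {s: sums[s] / sites_per_sponsor[s] for s in sums}
```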

Degree of Accuracy

Table B2 - 6 presents the expected respondent sample size, precision, and minimum detectable difference (MDD) when comparing two subgroups under the assumed design effects (DEFFs). For example, an MDD of 7.8 percent for a subgroup with a sample size of 3,000 means that if the population proportion in that subgroup is 50 percent and the population proportion in a comparison group of the same sample size is at least 7.8 percentage points away (i.e., 57.8% or 42.2%), then the difference can be detected with a 5 percent type I error and a 10 percent type II error (i.e., 90% power), provided that the design effect is 3.5.
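A sketch reproducing figures in Table B2 - 6 from the standard formulas: the precision column matches when z is taken as 2 rather than 1.96 (an apparent rounding convention in the table), and the MDD uses z-values for a two-sided 5 percent type I error and 90 percent power.

```python
import math

def precision(n, deff, p=0.5, z=2.0):
    """Margin of error (half-width of the 95% CI); z = 2 reproduces the table."""
    return z * math.sqrt(deff * p * (1 - p) / n)

def mdd(n, deff, p=0.5, z_alpha=1.96, z_beta=1.282):
    """Minimum detectable difference between two groups of size n each."""
    return (z_alpha + z_beta) * math.sqrt(deff * 2 * p * (1 - p) / n)

print(round(100 * precision(750, 1.6), 1))   # 4.6, matching the overall site row
print(round(100 * mdd(3000, 3.5), 1))        # 7.8, matching the 3,000-caregiver row
```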

Table B2 - 6. Precision and Power Analysis for Various Overall and Subgroup Sample Sizes

Survey Type | Overall/Subgroup | Sample Size | DEFF | Precision (%) | MDD (%)
Caregiver and Child | Overall with SSO | 6,800 | 3.5 | ±2.3 | NA
Caregiver and Child | Subgroup within SFSP | 3,000 | 3.5 | ±3.4 | 7.8
Caregiver and Child | Subgroup within SFSP | 2,000 | 3.5 | ±4.2 | 9.5
Caregiver and Child | Subgroup within SFSP | 1,000 | 3.5 | ±5.9 | 13.3
Caregiver and Child | Subgroup within SFSP | 500 | 3.5 | ±8.4 | 18.5
Site | Overall with SSO | 750 | 1.6 | ±4.6 | NA
Site | Subgroup within SFSP | 600 | 1.5 | ±5.0 | NA
Site | Subgroup within SFSP | 300 | 1.5 | ±7.1 | 15.8
Site | Subgroup within SFSP | 150 | 1.5 | ±10.0 | 21.8
Site | SSO Only | 150 | 1.4 | ±9.7 | NA
Sponsor | Overall with SSO | 385 | 3 | ±8.8 | NA
Sponsor | SFSP Only | 308 | 3 | ±9.9 | NA


Unusual Problems Requiring Specialized Sampling Procedures

There are no unusual problems that require specialized sampling procedures. As noted in Question 1 of this supporting statement (Part B), the majority of the site sample will be drawn from State-provided lists of 2017 approved sites/sponsors. An additional round of sampling will be conducted in June 2018 to ensure that new sites approved as of June 1, 2018, are also included.

Any Use of Periodic (Less Frequent Than Annual) Data Collection Cycles to Reduce Burden


This is a one-time study; concern regarding the periodicity of data collection cycles is not applicable.


B.3 Methods to Maximize Response Rates and to Deal With Nonresponse


Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.


It is anticipated that we will achieve response rates of 83, 70, and 17 percent from sponsors, sites, and caregivers, respectively. The following procedures will be used to maximize response rates:

  • Send an introductory letter (Appendices D1, D2, D7, E1, G1, G3, G5), Study Brochure (Appendix D25) and Informational Study Recording for Sponsors and Site Supervisors (Appendices D8 and E2) stating the importance of the study and their participation;

    • For caregivers only:

      • Include a $2 incentive with the introductory letter and instructions to complete the screener.

      • Discuss a $10 incentive (to be mailed after survey is completed) in the introductory letter.

      • Provide two primary data collection modes (web or mail).

  • Send reminders (postcards and letters) to participants who have not completed the surveys (Appendices D3, D9, D12, E3, E6, F3, F5, F6, F9, G4);

  • Follow-up with nonresponders via telephone if the response rates are low (Appendices D4, D10, D14, E4, E7, F4, F7, F10, G6);

  • Make up to six call attempts to a telephone number before treating the case as “unable to contact”;

  • Provide a toll-free number for respondents to call to verify the study’s legitimacy or to ask other questions about the study; and

  • Implement standardized training for telephone data collectors.

Nonresponse Bias Analysis

Although efforts will be made to achieve as high a response rate as practicable with the available resources, nontrivial nonresponse losses are likely to occur, and a nonresponse bias analysis will be conducted to assess the impact of nonresponse on the survey estimates and the effectiveness of the weight adjustments to dampen potential nonresponse biases. Nonresponse bias analysis will be performed for variables that are available for both respondents and non-respondents. In addition, we will geocode sites, sponsors, and caregiver addresses and merge in community-level data at the census tract-level from the American Community Survey. The merged data will provide community-level information on geography (e.g., urbanicity), demographic characteristics (e.g., percent of households living at or below poverty, percent of households with children), and economic characteristics (e.g., percent of unemployed households).

We will compare responding and nonresponding sponsors, sites, and caregivers on basic characteristics from summer meal program data provided by States as well as relevant variables from the ACS. For sponsors, we will examine potential differences in sponsor type, number of sites operated by sponsors, and number of meals served. For sites and caregivers, we will examine potential differences in site type, length of program operations, types of meals served, number of meals served, and community-level geographic, demographic, and economic characteristics.

The types of analyses to be conducted to evaluate nonresponse for sponsors, sites, and caregivers will include:

  • Evaluating differences found between survey respondents and nonrespondents using data from extant outside sources (e.g., summer meal program data, American Community Survey data);

  • Comparing weighted estimates of characteristics available for both respondents and nonrespondents using unadjusted (base) weights versus nonresponse-adjusted weights;

  • Comparing characteristics of respondents providing completed data at different levels of data collection effort (e.g., cases completed with limited follow-up compared to those requiring considerable follow-up).

B.4 Test of Procedures or Methods to be Undertaken


Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


All data collection instruments were pre-tested under the FNS Generic Clearance for Pre-Testing, Pilot, and Field Test Studies, OMB Control Number 0584-0606 (expiration 03/31/2019). Pre-testing began on 08/04/2017 and was completed on 09/29/2017. Specific pretest objectives included identifying problems related to communicating the intent or meaning of questions and concepts; determining whether respondents could accurately provide the information requested; and assessing the adequacy of the range of responses. Pretests were conducted with respondents from the target population for each instrument (e.g., sponsors, site supervisors, caregivers of participants, teens, etc.). In total, 64 interviews were conducted across 52 respondents. The pretest interview protocols may be found in Appendix H.

Several revisions were made to draft instruments as a result of the pretests, including:

  • Adding reference to site address to improve recognition of the sampled site;

  • For sponsor and site materials, emphasizing summer meals since some organizations provide year-round meal service;

  • Defining summer program in caregiver materials;

  • Specifying that menu output from USDA-approved nutrient analysis software is acceptable;

  • Simplifying and standardizing response options, and reducing the number of responses offered in long lists;

  • Removing question and response option grids for household and child/teen surveys whenever possible;

  • Combining the participant caregiver and nonparticipant caregiver surveys to create a single caregiver survey with common questions for both respondents, and skip instructions for sections relevant only to participant caregivers or nonparticipant caregivers; and

  • Combining the teen participant and teen nonparticipant surveys into a single teen survey with skip patterns to direct respondents based on their participation status.


B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Table B5 - 1 presents a summary of individuals consulted on statistical aspects of the design. Westat staff will be responsible for the collection and analysis of the study’s data, in coordination with FNS.

Table B5 - 1. Individuals Consulted on Data Collection or Analysis

Staff | Title | Contact Information (phone or email)
Westat (contractor) | |
Hyunshik Lee, Ph.D. | Senior Statistician | 301-610-5112
Laurie May, Ph.D. | Vice President | 301-517-4076
Tracy Vericker, Ph.D. | Senior Study Director | 301-251-4242
Melissa Rothstein | Senior Study Director | 301-315-5975
Sujata Dixit-Joshi, Ph.D. | Senior Study Director | 240-314-2442
Thea Zimmerman | Senior Study Director | 240-314-2413
Peer Advisory Panel | |
Caroline Cooke | Connecticut Department of Education | [email protected]
Walt May | Utah Food Bank (sponsor) | [email protected]
Keven Vicknair | Equal Heart (TX) (sponsor) | [email protected]
Crystal FitzSimons | Food Research and Action Center (FRAC) | [email protected]
FNS Staff | |
Alice Ann Gola | Social Science Research Analyst | 703-517-3306
NASS Staff | |
Edwin Anderson | Section Head, Methodology Branch | 202-690-0270


1 The sample size of 20 States is large enough to achieve representativeness, yet manageable in processing time to accommodate the data collection protocol.

2 We expect an 83% response rate from States based on a similar State SNAP caseload data request for the Evaluation of Food Insecurity Nutrition Incentives, in which 26 States were recruited and 22 agreed to participate and provided data, resulting in an 85% response rate. For this study, we aim to recruit 20 States. An 85 percent response rate would yield an initial sample of 23.5 States. We round up to 24 States, resulting in an 83 percent response rate.

3 Based on 2015 and 2016 SFSP site lists from Iowa, Michigan, North Carolina, Pennsylvania, and Texas, we estimate that 64 percent of the total 2018 site population will consist of continuing sites (those that participated in SFSP/SSO in 2017) and the remaining 36 percent will be new sites.

4 We expect that the 2018 site frame consists of 64 percent continuing sites and 36 percent new sites, and that 75 percent of new sites will be included in the new site frame. Therefore, the survey coverage by the continuing and new sites in the frame is expected to be 91 percent (= 64+36*0.75).

5 The margin of error is defined as the half-length of the 95 percent confidence interval.

6 Important subgroups include subgroups defined by site type (open vs. closed), site location (urban vs. rural), SSO status (SSO vs. non-SSO), and School Food Authority (SFA) status (SFA vs. non-SFA).

7 The total number of SFSP sites in 2016 was obtained from the FNS National Databank. The number of SSO sites in 2016 was provided in a correspondence from FNS dated May 5, 2017.

8 A 75% response rate was achieved in Evaluation of the Summer Food Service Program Participant Characteristics (OMB Control No. 0584-0595, Expiration Date 08/31/2016).

10 The number of sponsors linked to the sampled sites was estimated by running a sampling experiment using 2017 data from the National Hunger Clearinghouse.

11 This assumption is based on the response rate achieved in Evaluation of the Summer Food Service Program Participant Characteristics (OMB Control No. 0584-0595, Expiration Date 08/31/2016).

12 The USDA Food Access Research Atlas uses 1 mile urban and 10 miles rural to define areas with low food access. Our investigation indicates the 10 miles creates rural catchment areas that are too large as they are more than seven times larger, on average, than urban catchment areas defined by 1 mile. Moreover, they substantially overlap with urban areas. We will use a 5-mile catchment area for rural sites, which makes rural catchment areas 1.4 times larger than urban ones in terms of the number of households.

13 An example of a closed site with restriction for participation is a sports camp, which admits only children of a certain age group.

14 Screening is not necessary for the onsite or enrolled groups. In all other groups, screening will be used to determine the eligibility of households.

15 We will use the U.S. Census Bureau’s definition of an urban area. Census defines an urban area as one that has at least 2,500 people. Rural includes all areas that are not included in an urban area.

16 The raking technique allocates the internal cells by ensuring the predetermined marginal cell sizes for one dimension at a time in iteration. The iteration stops when all dimensional marginal cells have the desired cell sizes.

17 Gordon, A., Briefel, R., and Allhouse, J. (2003). Feeding Low-Income Children When School Is Out: The Summer Food Service Program, Executive Summary. Food Assistance and Nutrition Research Report No. 30 (FANRR-30).

18 The multiplicity-adjusted weight of a sponsor is given by the ratio of the sum of weights of sites associated with the sponsor to the total number of sites under the sponsor.

