
National School Lunch Program (NSLP) Direct Certification Improvement Study

OMB: 0584-0529


Part B: Collection of Information Employing Statistical Methods

B1. Respondent Universe and Sampling Methods

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

This section discusses three data collection efforts: (1) the national survey of direct certification practices, (2) the in-depth interviews with State and local officials, and (3) the study of unmatched Supplemental Nutrition Assistance Program (SNAP) participant records. The universe for the survey of direct certification practices includes child nutrition (CN) staff in the 50 States; the District of Columbia (DC); five territories (Puerto Rico, U.S. Virgin Islands, American Samoa, Guam, and the Northern Mariana Islands); and all local education agencies (LEAs) (hereafter, districts)1 in States using district-level matching. For the in-depth interviews, the universe is officials in seven purposively selected States. For the study of unmatched records, the universe includes National School Lunch Program (NSLP) applications determined to be categorically eligible in those States selected for in-depth interviews.

National survey of direct certification practices. The national survey (see Appendix BA) will be a census of all entities directly responsible for conducting direct certification: the 50 States, DC, and the five territories, as well as the approximately 6,265 districts in the 19 States that currently conduct direct certification at the district level, plus one State, Ohio, which uses regional matching (regional technology centers match students to statewide data using district enrollment files, and the matches are then distributed to districts). In New York, we will survey only those districts that are participating in a pilot study of district-level matching. Table B.1.1 lists the 20 States in which districts will be surveyed. Districts will not be surveyed in the remaining 30 States because the State, not the district, is the entity responsible for conducting direct certification in those States.

Table B.1.1. District-Level Matching States in Which Direct Certification Is Conducted at the District Level

Alabama        Michigan       New York***
Colorado       Mississippi    Ohio**
Connecticut    Missouri       Pennsylvania
Florida        Montana        Tennessee
Kentucky       Nebraska       Virginia
Maine          Nevada         Wyoming
Maryland*      New Mexico

*Maryland employs a hybrid approach in which districts with more than 1,000 students perform matching at the district level. These districts will be included in the sampling procedures.

**Ohio is a region-level matching State. For purposes of this study, the regional information technology centers that perform the matches will be treated as the relevant districts in the sampling procedures.

***New York is conducting a pilot study in which a select number of districts are conducting district-level matching.


For districts in the States using district-level matching, we will ask a probability sample of districts to complete a long version of the survey and ask the remaining districts to complete a shortened version. The questions common to both versions address three key areas of direct certification processes: (1) student enrollment data characteristics, (2) LEA data matching process characteristics, and (3) methods of linking children in the same household. The long version includes additional questions in these areas as well as additional topics, such as planned changes to direct certification and challenges and barriers faced in the direct certification process. This approach is intended to capture key information from all districts conducting direct certification, along with more detailed descriptive information on direct certification processes for a representative sample of districts in States using district-level matching.

The samples will be large enough to yield approximately 2,000 completed long versions and 3,012 completed short versions of the district survey. This tiered approach will enable us to collect the detailed information required to address the study objectives while minimizing burden. The 2,000 completed long versions will provide sufficient precision to address the study's research questions; specifically, they will provide half-width 95 percent confidence intervals of less than 0.03 for outcomes specific to the long version that are expressed as proportions.

As shown in Table B.1.2, we expect a 90 percent response rate, or 50 completed surveys, among the State-level respondents, who will be CN staff; all States responded to the previously approved data collection. We expect a somewhat lower response rate—80 percent—for both the long version of the survey (2,000 completes) and the short version (3,012 completes) that districts in district-level matching States will complete, for a total of 5,062 completed surveys (50 from States and 5,012 from districts). We do not expect response rates to differ between districts that receive the long version and those that receive the short version. We will seek to minimize survey nonresponse through email and telephone follow-up with States and with districts selected for the long version of the district survey, and through email follow-up with districts asked to complete the short version.

Based on the previously approved data collection, we do not anticipate high levels of item nonresponse; in the previous data collection, all States responded to most key data items. In addition, the web survey will include tracking features that help users identify incomplete sections or items, further minimizing item nonresponse. Item nonresponse may be somewhat higher among districts receiving the long version than among those receiving the short version, however. We will investigate item nonresponse patterns and take appropriate steps if item nonresponse is common. These steps may include imputation methods, such as hot-deck or multiple imputation techniques. If nonresponse to entire sections is common, we will explore developing section-specific nonresponse weights.
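To illustrate one of the imputation methods mentioned above, the sketch below shows a simple within-class random hot-deck routine. It is a minimal illustration only, not the production procedure; the weighting-class and item column names (state_size_class, match_frequency) are hypothetical placeholders rather than fields from the actual survey.

```python
# Minimal sketch of within-class random hot-deck imputation (illustration only).
# Column names are hypothetical; actual survey variables and classes would differ.
import random
import pandas as pd

def hot_deck_impute(df, item_col, class_col, seed=20120203):
    """Fill missing values of item_col by drawing a random donor value
    from responding cases in the same weighting class."""
    rng = random.Random(seed)
    out = df.copy()
    for _, group in out.groupby(class_col):
        donors = group[item_col].dropna().tolist()
        if not donors:
            continue  # no donors in this class; leave the items missing
        missing_idx = group.index[group[item_col].isna()]
        out.loc[missing_idx, item_col] = [rng.choice(donors) for _ in missing_idx]
    return out

# Example with hypothetical data: two weighting classes, two missing items.
survey = pd.DataFrame({
    "state_size_class": ["A-large", "A-large", "A-large", "B-small", "B-small"],
    "match_frequency":  ["weekly", None, "monthly", None, "weekly"],
})
print(hot_deck_impute(survey, "match_frequency", "state_size_class"))
```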


Table B.1.2. Sampling and Response Rate—National Survey of Direct Certification Practices

Respondent Type          Number of Offices (Universe)   Sampling Method   Respondents Contacted/Attempted   Respondents Participating
State CN Staff           56                             Census            56                                50
District Staff
  Long survey            2,500                          Census            2,500                             2,000
  Short survey           3,765                          Census            3,765                             3,012
Total                                                                     6,321                             5,062
Expected Response Rate                                                                                      80%


In-depth, semistructured interviews. We will conduct case studies of the direct certification practices of seven States. These case studies will be based on interviews with program and technical staff responsible for direct certification at the State level and in two or three districts in each of the seven case study States. We will select the seven in-depth study States using an index designed to identify States whose direct certification processes best address the key research questions. The States and districts visited for the semistructured interviews constitute case studies, and data gathered from these interviews will be used for descriptive analysis only. The purpose of the interviews is to probe more deeply into how these States and districts perform the data matching for direct certification. Because FNS will recruit the seven States for the study, we expect a 100 percent response rate for the in-person interviews in each of the States (Table B.1.3).

Table B.1.3. Sampling and Response Rate—In-Depth, Semistructured Interviews

Respondent Type          Number of Offices (Universe)   Sampling Method        Respondents Contacted/Attempted   Respondents Participating
State CN Staff           7                              Convenience sampling   7                                 7
State Education Staff    7                              Convenience sampling   7                                 7
State SNAP Staff         7                              Convenience sampling   7                                 7
State Medicaid Staff     7                              Convenience sampling   7                                 7
State TANF Staff         7                              Convenience sampling   7                                 7
State IS Staff           7                              Convenience sampling   14                                14
District Staff           18                             Convenience sampling   18                                18
District IS Staff        18                             Convenience sampling   18                                18
Total                                                                          85                                85
Expected Response Rate                                                                                           100%


After FNS recruits the seven case study States, the contractor will provide each of these States with full details of the study and explain what will be expected of them in terms of completing the national survey, scheduling site visits, helping to identify the relevant State and local staff to conduct in-depth interviews, and obtaining the SNAP participant records and NSLP applications.

For State-level matching States, we will spend one day at State offices and two days visiting two local districts. For States that conduct district-level matching, we will spend half a day at State offices and two-and-a-half days visiting three districts. Table B.1.4 provides a summary of the on-site data collection activities.

Table B.1.4. Summary of On-Site Data Collection Activities per State

Activity                   Expected Number per State
Total Days on Site         3
Site Visit Trips           1
State Office Interviews    1
District Offices Visited   2 or 3


We will conduct in-depth interviews with the following entities in each of seven States: (1) State CN agency, (2) State education agency, (3) State SNAP agency, (4) State Medicaid agency, (5) State Temporary Assistance for Needy Families (TANF) agency, (6) State IS agency, (7) local districts, and (8) local IS agency.

All site visits will begin with discussions with the State CN agency director. We will interview key technical and policy staff from SNAP and any other involved programs, such as TANF and Medicaid, about their roles in the direct certification process. At the district sites, we will interview the district director and technical staff knowledgeable about the systems and data used in direct certification. At both levels, we will interview the staff member(s) with primary responsibility for developing, programming, and implementing the data-matching process at the site.

The semistructured interviews will begin with a discussion of the State’s survey responses—which we will receive before the site visit—and performance measures, along with open-ended, free-flowing conversations to allow for a complete picture of the processes that the State and districts use and their experiences with direct certification.

The interview protocol (see Appendix CB) is organized into six distinct sections with questions designed to address the objectives of the study: (1) introduction/overview; (2) current direct certification process; (3) grants; (4) changes to direct certification procedures; (5) successes, challenges, and lessons learned; and (6) concluding questions.

Unmatched SNAP participant records. The universe includes records in all of the districts within the seven States that are selected for the in-depth site visits. We will ask the seven States to send SNAP participant files used in the initial matching with student enrollment data. We will select a sample of 28 districts from the seven States, with the expectation that we will receive 2,100 to 2,150 NSLP applications in which a student was categorically eligible. That range, which represents a 100 percent response rate, is based on the average number of NSLP applications per district that have categorically eligible students (28 districts * 76 NSLP applications = 2,128 total NSLP applications). We will request all such applications from the sampled districts.

Before selecting districts, we will stratify by State and by district size (the number of categorically eligible students, whether certified by application or through direct certification). The selected States will include both State-level and district-level matching States.

Within each State, we will form up to three strata based on size (the number of categorically eligible students), as follows:

  1. Large districts will be those that, based on their numbers of categorically eligible students, are expected to contain more than 200 categorically eligible NSLP applications

  2. Medium districts will be those expected to have 50 to 199 such applications

  3. Small districts will be those expected to have fewer than 50 such applications

We will collect categorically eligible NSLP applications from 28 districts. Small districts will be sampled in only two of the seven States; those two States will be randomly selected from the seven. In State-level matching States, we will obtain applications from three districts. In district-level matching States, we will collect applications from four or five districts; the number allocated to each district-level matching State will depend on the number of State-level matching States. Because the number of districts will not be the same for each district-level matching State, we will randomly determine which district-level matching States will contribute four districts and which will contribute five.
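For illustration only, suppose (hypothetically) that three of the seven case study States use State-level matching and four use district-level matching; the allocation rules above would then work out as follows (the actual split will depend on the States selected):

\[
  3 \text{ States} \times 3 \text{ districts} = 9, \qquad
  28 - 9 = 19 \text{ districts across the 4 district-level matching States},
\]
\[
  19 = 3 \times 5 + 1 \times 4 \;\Rightarrow\; \text{three States contribute five districts and one contributes four}.
\]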

Within each State, the allocation to each size-based stratum will be determined by the number of districts in each stratum. To achieve a proportionate distribution, we will employ implicit rather than explicit stratification and use probability minimum replacement selection (also known as sequential selection or the Chromy method) available in SAS Proc Survey Select.2 The initial sample of districts will be twice as large as the number of districts we hope to recruit. To facilitate replacement of districts in case of nonresponse, we will form pairs of similarly sized districts among the districts initially selected in each State; one district in each pair will be randomly assigned as the “main” selection and the other will be the “alternate.” Alternate selections will be recruited only if the main selection from their pair does not participate.
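As a rough illustration of this selection approach, the sketch below sorts a State's districts by size (implicit stratification), takes an equal-probability systematic draw of twice the target number of districts, and pairs adjacent, similarly sized selections into main and alternate districts. It is a simplified stand-in for the probability minimum replacement (Chromy) selection that would actually be carried out in SAS Proc SurveySelect, and the district names and sizes shown are hypothetical.

```python
# Simplified sketch: implicit stratification by size plus systematic selection,
# with main/alternate pairing. A stand-in for Chromy (PMR) selection, not the
# production SAS procedure; all districts below are hypothetical.
import random

def select_district_pairs(districts, n_pairs, seed=2012):
    """districts: list of (name, size) tuples for one State.
    Returns (main, alternate) pairs of district names."""
    rng = random.Random(seed)
    # Sort by size so a systematic draw spreads selections across the size distribution.
    ordered = sorted(districts, key=lambda d: d[1])
    n_select = 2 * n_pairs                       # draw twice the target to allow replacements
    interval = len(ordered) / n_select
    start = rng.uniform(0, interval)
    picks = [ordered[int(start + k * interval)][0] for k in range(n_select)]
    pairs = []
    for i in range(0, n_select, 2):
        # Adjacent picks are similar in size; randomly label one main, one alternate.
        main, alt = (picks[i], picks[i + 1]) if rng.random() < 0.5 else (picks[i + 1], picks[i])
        pairs.append((main, alt))
    return pairs

# Example: 30 hypothetical districts; select 2 main districts plus 2 alternates.
example = [(f"District {i:02d}", size) for i, size in enumerate(range(40, 340, 10))]
print(select_district_pairs(example, n_pairs=2))
```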

Within each recruited district, we will select all categorically eligible NSLP applications. The total number to be selected will depend on the actual sample but we expect it to be approximately 2,128. Since the sample of records will be an equal probability (self-weighting) sample within each State, sampling weights will not be needed for State-level analysis. The use of nonresponse adjustment weights is discussed in Section B.3 below.

B2. Procedures for Collection of Information

Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection

  • Estimation procedure

  • Degree of accuracy needed for the purpose described in the justification

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

This study employs three primary data collection activities: (1) national survey of direct certification practices; (2) in-depth, semistructured interviews; and (3) collection of unmatched SNAP records and NSLP applications.

National survey of direct certification practices. The survey will be a census of all 56 State-level entities (the 50 States, DC, and the five territories), as well as the 6,265 districts performing district-level matching. We will employ a sampling procedure to select 2,500 districts that we will ask to complete a long version of the district survey; we will ask the other 3,765 districts to complete a short version. When selecting the sample for the long district survey, we will form strata of districts based on State, size (reported enrollment), and public or private status. We will not oversample based on State but will use stratification to ensure proportionate representation of the States within strata defined by size and public/private status. We will set the sample allocation based on size and public/private status after defining key subgroups for analysis. We recommend that any key subgroup be allocated enough sample to yield 200 completed surveys. We do not anticipate examining key subgroups that comprise less than 10 to 20 percent of the population; therefore, oversampling is unlikely to be needed, depending on the level of precision desired.

The precision of estimates made with data from the long district survey depends mainly on two factors: (1) the degree of oversampling and (2) the increase in variance due to weighting adjustments made to compensate for nonresponse. Although the proposed sample will make up a significant portion of the population, use of the finite population correction (FPC) factor is not appropriate for most, if not all, of the analyses to be conducted. If there is no oversampling, it is reasonable, based on our experience with similar surveys, to anticipate a design effect (DEFF) of between 1.25 and 1.75. We cannot predict the effect of oversampling at this point, but it might increase the DEFF to a range of 2.0 to 3.0. Not all cases would be subject to the increased DEFF; that would depend on which groups were oversampled. For example, if small States were oversampled, their estimates would be based on the lower range of DEFFs (1.25 to 1.75), but estimates based on the whole sample or on subgroups defined by size would be subject to the higher levels (2.0 to 3.0). Using a DEFF of 1.75, the sample of 2,000 district respondents for the long version will provide half-width 95 percent confidence intervals of 0.029 for proportions with a mean of 0.500, and 0.023 for proportions with a mean of 0.200 or 0.800.
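As a check on the figures above, the half-width of a 95 percent confidence interval for a proportion, inflated by the design effect, follows from the standard approximation shown below, evaluated at the DEFF of 1.75 and n = 2,000 cited in the text:

\[
  hw = z_{0.975}\sqrt{\mathrm{DEFF}\cdot\frac{p(1-p)}{n}}
\]
\[
  1.96\sqrt{1.75\cdot\frac{0.5 \times 0.5}{2{,}000}} \approx 0.029,
  \qquad
  1.96\sqrt{1.75\cdot\frac{0.2 \times 0.8}{2{,}000}} \approx 0.023.
\]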

We will administer the national survey via the web, which allows for easy access and efficient collection of data and helps ensure the privacy of respondents’ information. Results will be reported only at the State level, and the names of participating districts will not be revealed. We have designed the survey so that a respondent can save responses and then hand off sections to other administrators who have relevant knowledge. The survey design enables detailed data to be collected while minimizing burden on individual survey respondents. We will ask respondents only those questions relevant to the direct certification method they use and to their role as State or district staff.

In-depth, semistructured interviews. Within each of the seven case study States, we will work with the State CN directors to recruit them into this part of the study and ask them for recommendations of districts near the State offices for us to visit. We will send the States documents describing the content and structure of the on-site data collection activities. These documents will explain the purpose for and methodology of the study; outline the eligibility criteria for participation (that is, clearly describe the factors used to identify in-depth study States); and identify the responsibilities of participants. We will ask the State director to send an email to the recommended districts before our contact, to encourage them to cooperate. We will then send similar introductory materials to each selected district. We will work with State and local contacts to identify other relevant key staff to interview and to schedule the visit. In our conversations with State CN directors, we will determine to what extent the agencies that oversee programs such as TANF and Medicaid are involved in the direct certification process and whether these staff should be interviewed in that State.

We will conduct the in-depth, semistructured interviews with either individuals or small groups. In conducting interviews for the best practices section of the State Implementation of Direct Certification Report to Congress, we found that in some cases a single individual could answer our questions and provide valuable insight into State direct certification practices; in other States, we received input from several people. In our experience with other agencies conducting similar semistructured interviews, the interviews tend to be more dynamic and well-rounded when individuals from all relevant areas are included. Two researchers will participate in each site visit and conduct the interviews, which will have a 60-minute time limit. One researcher will lead the questioning while the other focuses on taking written notes, using modified versions of the interview guides. After completing the interviews for a State, the researchers will prepare a site visit report following a standard format.

Collection of unmatched SNAP participant records and NSLP applications. We will work closely with the State CN directors or their designees to obtain SNAP participant files used in the initial matching with student enrollment data. We will ask the States to supply an indicator in the files of the match results. Information collected from the SNAP participant file will include the key data elements used as primary identifiers in data matching with student enrollment data and key demographic data (date of birth, address, zip code, county, and so on).

We will also work closely with State and local staff to collect NSLP applications that are approved based on categorical eligibility from a sample of 28 districts in the seven case study States. Information collected from those applications would include the data elements used in the data-matching algorithms in the States and districts—to the extent that they are available—along with information used to determine categorical eligibility, for example SNAP, Food Distribution Program on Indian Reservations (FDPIR), or TANF case numbers.

States and some of the larger sampled districts will be able to transmit their SNAP participant files and NSLP applications electronically via our secure file transfer site. However, we recognize that the collection of the NSLP applications in particular might pose a burden on other selected districts that do not store these records electronically. To provide the applications, most of the sampled districts will have to go through their files, photocopy applications, and then mail them. We will closely monitor the data collection effort and take steps to reduce the burden on the districts.

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

It is critical to the success of the study to maximize the response rate in each of the data collection activities. It is also necessary that the final samples (after nonresponse) are distributed proportionately with respect to their populations (so that they are representative). The samples described above will either be used for qualitative data collection or be selected with equal probability within their domains of analysis. Thus, weights are not needed to correct for disproportionate sampling. However, the final samples of districts (for the survey and for unmatched records) will be weighted to reflect nonresponse. Weighting classes will be defined based on State and district size and, within these weighting classes, the nonresponse adjustment factor for the sampling weights will be the inverse of the response rate for the class. The final analysis weight is the product of the sampling weight and the nonresponse adjustment factor.
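In symbols, the weighting-class nonresponse adjustment described above is simply the inverse of the class response rate, where class c is defined by State and district size:

\[
  w^{\text{final}}_{i} = w^{\text{samp}}_{i} \times \frac{1}{r_{c(i)}},
  \qquad
  r_{c} = \frac{\text{number of responding districts in class } c}{\text{number of sampled districts in class } c}.
\]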

National survey of direct certification practices. We will provide all respondents with a set of introductory materials at the start of data collection and follow up with reminders via email and telephone as needed. All respondents will receive an introductory letter, which will introduce the study, ask for participation, and provide instructions for accessing the web survey. We will also provide a detailed project description and answers to frequently asked questions. Together, these introductory materials will help achieve two key goals: (1) to emphasize the study’s purpose and the importance of participation and (2) to encourage completion of the questionnaire on the web.

At various times throughout the survey period, we will send survey reminder emails to nonrespondents to ensure the highest rate of response (see Appendix HF). These emails will remind respondents about the study and provide the necessary log-in information, along with contact information if they have questions. We will also make reminder calls to respondents who have not yet completed a survey (see Appendix IG). During these calls, we will ask respondents whether they have any questions about the study or how to access the web survey. Upon request, we will mail a hard copy of the survey, along with a self-addressed business-reply envelope, to any respondents who prefer that mode.

Importantly, the design of the web-based survey itself is intended to maximize participation and minimize nonresponse. The survey allows a respondent to save responses and then hand off sections to other administrators who have relevant knowledge. In addition, a portable document format (PDF) version of the survey will be posted, which can be printed for easy reference by the respondent. Lastly, the web survey includes functions for tracking survey responses, enabling project staff to keep abreast of the status of survey respondents. The database will alert staff to past-due surveys so they can follow up with nonrespondents.

To further promote high response rates, respondents will be able to contact Mathematica through several avenues, including a toll-free telephone number, project email, and project staff telephone numbers. Mathematica’s trained help desk staff will monitor the toll-free number during business hours. A respondent may call in if he or she has difficulty accessing or completing the survey. Another function of the help desk will be to provide the log-in ID and password to respondents who want to complete the web survey but have misplaced that information. We will train help desk staff to identify each caller through a look-up file. We will also provide the survey director’s telephone number to staff in case help desk staff cannot answer all the questions a respondent poses.

As discussed above, we do not anticipate high levels of item nonresponse; in the previous data collection, all States responded to most key data items. In addition, the web survey will include tracking features that help users identify incomplete sections or items, further minimizing item nonresponse. However, we will investigate item nonresponse patterns and take appropriate steps if item nonresponse is common. These steps may include imputation methods, such as hot-deck or multiple imputation techniques. If nonresponse to entire sections is common, we will explore developing section-specific nonresponse weights.

In-depth, semistructured interviews. Because case study States will be recruited and give their consent to participation, we do not expect any difficulties in completing interviews with State and local staff who are involved with NSLP direct certification activities.

Collecting SNAP participant records and NSLP applications. We will contact each State and district point of contact before the in-depth site visits to describe the project and explain the need for the SNAP participant records and NSLP applications. A critical part of these discussions will be identifying the least burdensome means of providing the unmatched SNAP participant data and NSLP applications. We expect that, in most cases, States will provide their unmatched SNAP participant data through the contractor’s secure file transfer site. We will follow up with any States from which we have not received the required records and provide any necessary technical assistance.

To minimize burden and maximize response rates for the submission of the NSLP applications, we will accept the applications in whatever format (such as hard copy, Excel file, text file, PDF file, or other format) and through whatever delivery method (such as the secure file transfer site, hand delivery, or mailing of hard copies) are most convenient for States and districts. Throughout the data collection period, we will follow up via telephone and email with States and districts that have not submitted applications. Project staff directly involved with the collection of the NSLP applications will conduct the calls so they can assist with any technical issues or answer any questions about how to submit the NSLP applications most efficiently.

B4. Tests of Procedures

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

Mathematica pilot tested the national survey of direct certification practices in four States: two identified as employing State-level matching—Idaho and New Jersey—and two identified as employing district-level matching—Kansas and Wyoming. We sought input from a total of nine pretest respondents in these four States: four State-level respondents and, in the district-level matching States, three districts that received the full survey and two districts that received the brief survey. Each of these States, except New Jersey, has been a case study site for the best practices component of recent Reports to Congress; as such, we were able to assess the accuracy of the survey responses to ensure that the questions elicited true answers. We included New Jersey to ensure a fair balance of the early burden across FNS regions.

The pilot test followed the protocols developed for the survey instrument:

  • Sending each participant an email invitation along with a study description, the relevant survey instrument, and contact information

  • Calling participants to confirm participation and schedule a telephone debriefing interview

  • Providing technical assistance as needed

  • Conducting debriefing interview

After the pilot States returned the surveys, we conducted 30-minute interviews with each respondent to collect feedback on the survey. These debriefs followed a structured set of questions to ensure that we obtained comparable information from each of the pilot respondents on the flow of the survey and to collect any recommendations they might have. During the debriefing interview, we asked participants to identify any questions that they found difficult to answer or that seemed irrelevant, and any topics we may have missed. We also sought feedback on specific questions based on our own concerns about item difficulty or because participants’ responses required clarification.

Based on the findings from the debriefing, we have made minor revisions to the survey. In a number of questions, we are incorporating definitions and providing examples to minimize respondent confusion. We have added answer categories to some questions, as suggested by pilot test respondents. We have also added soft and hard checks to particular questions to minimize the possibility of misclassifying States as either State- or district-level matching. We limited one question to State-level respondents only, to minimize confusion, burden, and inaccurate data from districts. The pilot tests also showed that the burden estimates published in the Federal Register are largely accurate, except that State-level respondents did not take as long as expected; as a result, we reduced the burden estimate for State-level respondents from 75 minutes to 65 minutes.

We will program the national survey of direct certification practices as a web survey. Prior to going into the field, we will make test case IDs available to project staff members, who will develop scenarios in order to check programming logic paths, edit checks, question wording, and formatting. Testers will also ensure that partially completed cases route to the next unanswered question upon reentry to the survey.

The interview guides for the data collected in the in-depth study States are semistructured. We will tailor the instruments to the specific direct certification practices and data matching techniques employed by each study State; as such, the specific questions asked of each respondent category will vary greatly from State to State. Given this variability, the interview guides were not pilot tested. However, the protocol development—both substance and timing—was informed by the best practice interviews with States conducted as part of the annual report to Congress on NSLP direct certification implementation progress. Those best practice interviews lasted one hour, and we designed the protocols for the in-depth case study interviews to be similarly paced. Before each site visit, project staff will create individualized protocols for each respondent, tailored to the specific processes and procedures in place as indicated by the responses to the national survey. The intent of the interviews is to obtain a more in-depth understanding of the responses to the national survey questions, as well as additional information on direct certification processes in the State or district that could not be captured in the survey.

B5. Individuals Consulted

Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

Mathematica staff and the FNS project officer contributed to planning for the survey and other aspects of the collection (Table B.5.1). Comments from the public and from NASS were also considered.

Table B.5.1. Individuals Consulted on Data Collection or Analysis

Mathematica Staff (Contractor)
Kevin Conway, Project Director           609-750-4083
Nancy Cole, Senior Researcher            617-674-8353
John W. Hall, Senior Statistician        609-275-2357
Quinn Moore, Senior Researcher           919-240-4879
Lara Hulsey, Researcher                  609-936-2778
Brandon Kyler, Senior Program Analyst    609-716-4381

FNS Staff
Joe Robare, FNS Project Officer          703-305-2128




1 The Richard B. Russell National School Lunch Act (NSLA) and the Child Nutrition and WIC Reauthorization Act of 2004 use two different terms to refer to the local entities that enter into agreements with State agencies to operate the NSLP: LEAs and school food authorities (SFAs). In essence, LEAs are responsible for the application, certification, and verification functions of the school meal programs, while SFAs are responsible for other aspects of the NSLP, such as meal pattern requirements and meal counting and claiming of reimbursements. For consistency’s sake, we use the term “district” throughout the remainder of this document; however, it is important to note that the sampling frame consists of SFAs.

2 See “PPS Sequential Sampling” and “Sequential Random Sampling” in SAS Online Doc 9.1.3 at http://support.sas.com/onlinedoc/913/docMainpage.jsp

