Part B: Collections of Information Employing Statistical Methods

Supporting Statement for OMB Clearance for the Third National Survey of WIC Participants (NSWP-III)

OMB No. 0584-NEW

September 18, 2018

United States Department of Agriculture
Food and Nutrition Service
3101 Park Center Drive
Alexandria, Virginia 22302

Karen Castellanos-Brown
Office of Policy Support
Telephone: 703-305-2017
Contents

Part B. Collections of Information Employing Statistical Methods
B.1 Respondent Universe and Sampling Methods
B.2 Statistical Methods for Sample Selection and Degree of Accuracy Needed
B.3 Methods to Maximize the Response Rates and to Deal with Nonresponse
B.4 Test of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Tables
Table B1. Respondent Universe, Initial Sample Sizes, Expected Response Rates, Final Sample Sizes, and Design Features for Each Data Collection
Table B2. WIC Participants in States and Territories Excluded from the Respondent Universe
Table B3. Key Features of Each Sampling Stage for the Recently Certified WIC Participant Certification Survey
Table B4. Example Showing Method of Determining Measure of Size for the First Stage of the Sampling for Recently Certified WIC Participants
Table B5. Allocation of Primary Sampling Units for Sample of Recently Certified WIC Participants in the U.S.
Table B6. Allocation of the Population and Sample Local Agencies over FNS Region
Table B7. Research Team Contact Information
Appendix A. Study Background and Regulatory Information
Appendix A1. Improper Payment Elimination and Recovery Improvement Act of 2012
Appendix A2. 2009 Executive Order 13520—Reducing Improper Payments
Appendix A3. Office of Inspector General (OIG) FY 2014 Compliance With Improper Payment Requirements
Appendix A4. M-18-20 – Appendix C to Circular No. A-123, Requirements for Payment Integrity Improvement
Appendix A5. NSWP-III Research Questions and Objectives
Appendix A6. Code of Federal Regulations. §215.11 Special Responsibilities of State Agencies
Appendix A7. Code of Federal Regulations. §246.7 Certification of Participants
Appendix A8. Contractor Confidentiality Form
Appendix A9. Section 28 of the Richard B. Russell National School Lunch Act as amended by the Healthy, Hunger-Free Kids Act of 2010 (HHFKA)
Appendix B. Survey Instruments
Appendix B1.a State Agency Survey
Appendix B1.b State Agency Survey – Screenshots
Appendix B2.a Local Agency Survey
Appendix B2.b Local Agency Survey - Screenshots
Appendix B3.a Certification Survey Version A (Adult)-English
Appendix B3.b Certification Survey Version B (Infant/Child)-English
Appendix B3.c Certification Survey Version A (Adult)-Spanish
Appendix B3.d Certification Survey Version B (Infant/Child)-Spanish
Appendix B4.a Denied Applicant Survey Version A (Adult)-English
Appendix B4.b Denied Applicant Survey Version B (Infant/Child)-English
Appendix B4.c Denied Applicant Survey Version A (Adult)-Spanish
Appendix B4.d Denied Applicant Survey Version B (Infant/Child)-Spanish
Appendix B5.a Program Experiences Survey Version A (Adult)-English
Appendix B5.b Program Experiences Survey Version B (Infant/Child)-English
Appendix B5.c Program Experiences Survey Version A (Adult)-Spanish
Appendix B5.d Program Experiences Survey Version B (Infant/Child)-Spanish
Appendix B6.a Former WIC Participant Case Study Interview Guide-English
Appendix B6.b Former WIC Participant Case Study Interview Guide-Spanish
Appendix B7.a Denied Applicant Log Request Email
Appendix B7.b Denied Applicant Log
Appendix B7.c Denied Applicant Log Request Reminder Email
Appendix B7.d Denied Applicant Log Request Reminder Telephone Script
Appendix B8 State Agency Administrative Data Request Email
Appendix C. Recruitment Materials
Appendix C1. Notification Email to FNS Regional Offices
Appendix C2. Letter to State Agencies from Regional Offices
Appendix C3. State Agency Survey Invitation Email
Appendix C4. State Agency Survey Invitation Letter with Instrument
Appendix C5. State Agency Survey Reminder Email
Appendix C6. State Agency Survey Reminder Telephone Script
Appendix C7. Local Agency Survey Invitation Email
Appendix C8. Local Agency Survey Invitation Letter with Instrument
Appendix C9. Local Agency Survey Reminder Email
Appendix C10. Local Agency Survey Reminder Telephone Script
Appendix C11.a Certification Survey Recruitment Telephone Script-English
Appendix C11.b Certification Survey Recruitment Telephone Script-Spanish
Appendix C12.a Certification Survey Recruitment In-Person Script-English
Appendix C12.b Certification Survey Recruitment In-Person Script-Spanish
Appendix C13.a Text Message Reminder for Scheduled Certification Survey-English
Appendix C13.b Text Message Reminder for Scheduled Certification Survey-Spanish
Appendix C14.a Telephone Reminder for Scheduled Certification Survey-English
Appendix C14.b Telephone Reminder for Scheduled Certification Survey-Spanish
Appendix C15.a Denied WIC Applicant Survey Recruitment Telephone Script-English
Appendix C15.b Denied WIC Applicant Survey Recruitment Telephone Script-Spanish
Appendix C16.a Denied WIC Applicant Survey Recruitment In-Person Script-English
Appendix C16.b Denied WIC Applicant Survey Recruitment In-Person Script-Spanish
Appendix C17.a Text Message Reminder for Scheduled Denied Applicant Survey-English
Appendix C17.b Text Message Reminder for Scheduled Denied Applicant Survey-Spanish
Appendix C18.a Telephone Reminder for Scheduled Denied Applicant Survey-English
Appendix C18.b Telephone Reminder for Scheduled Denied Applicant Survey-Spanish
Appendix C19.a Program Experiences Survey Invitation Telephone Script-English
Appendix C19.b Program Experiences Survey Invitation Telephone Script-Spanish
Appendix C20.a Program Experiences Survey Invitation Letter-English
Appendix C20.b Program Experiences Survey Invitation Letter-Spanish
Appendix C21.a Program Experiences Survey Invitation Email-English
Appendix C21.b Program Experiences Survey Invitation Email-Spanish
Appendix C22.a Program Experiences Survey Invitation In-Person Script-English
Appendix C22.b Program Experiences Survey Invitation In-Person Script-Spanish
Appendix C23.a Former WIC Participant Case Study Interview Invitation Telephone Script-English
Appendix C23.b Former WIC Participant Case Study Interview Invitation Telephone Script-Spanish
Appendix C24 WIC Administrative Data Request to States from Regions
Appendix C25 State Agency Administrative Data Request Reminder Email
Appendix C26 State Agency Administrative Data Request Reminder Telephone Script
Appendix D. Communications Materials
Appendix D1. Study Description for State and Local WIC Agencies
Appendix D2. State Agency Survey Thank You Letter
Appendix D3. Certification End Date Verification Email
Appendix D4. Certification End Date Verification Reminder Telephone Script
Appendix D5. Local Agency Survey Thank You Letter
Appendix D6.a Certification Survey Information Letter from State Agencies-English
Appendix D6.b Certification Survey Information Letter from State Agencies-Spanish
Appendix D7.a Participant Consent Form-Certification Survey-English
Appendix D7.b Participant Consent Form-Certification Survey-Spanish
Appendix D8.a Denied WIC Applicant Information Letter from State Agencies-English
Appendix D8.b Denied WIC Applicant Information Letter from State Agencies-Spanish
Appendix D9.a Participant Consent Form-Denied Applicant Survey-English
Appendix D9.b Participant Consent Form-Denied Applicant Survey-Spanish
Appendix D10.a Program Experiences Survey Invitation Postcard-English
Appendix D10.b Program Experiences Survey Invitation Postcard-Spanish
Appendix D11.a Participant Information Brochure-English
Appendix D11.b Participant Information Brochure-Spanish
Appendix D12.a Program Experiences Survey Thank You Letter and Visa Debit Card-English
Appendix D12.b Program Experiences Survey Thank You Letter and Visa Debit Card-Spanish
Appendix D13.a Former WIC Participant Case Study Interview Thank You Letter and Visa Debit Card-English
Appendix D13.b Former WIC Participant Case Study Interview Thank You Letter and Visa Debit Card-Spanish
Appendix E. Requirements for Office of Management and Budget Review
Appendix E1.1 Federal Register 60-Day Notice Public Comment 1
Appendix E1.2 Federal Register 60-Day Notice Public Comment 2
Appendix E1.3 Federal Register 60-Day Notice Public Comment 3
Appendix E2.1 Federal Register 60-Day Notice: FNS’s Response to Public Comment 1
Appendix E2.2 Federal Register 60-Day Notice: FNS’s Response to Public Comment 2
Appendix E2.3 Federal Register 60-Day Notice: FNS’s Response to Public Comment 3
Appendix E3.1 National Agricultural Statistics Service (NASS) Comments
Appendix E3.2 Response to National Agricultural Statistics Service (NASS) Comments
Appendix E4. Estimates of Respondent Burden
Appendix E5. Annualized Cost to Respondents
Appendix F1. Pre-Test Memorandum
Appendix F2. Pre-Test Revisions Summary
Appendix G. NSWP-III General Data Collection Procedures
Appendix H. Institutional Review Board Approval Letter
The Third National Survey of WIC Participants (NSWP-III) will collect data through six distinct data collection activities and one pilot study:
State Agency Survey
Local Agency Survey
Certification Survey
Denied Applicant Survey
WIC Participant Program Experiences Survey
Former WIC Participants Case Study
Pilot of Alternative Methodology to Provide Annual Estimates of Improper Payments in WIC
The collective goal of these data collection efforts (other than the pilot) is to provide the U.S. Department of Agriculture (USDA) Food and Nutrition Service (FNS) with (a) representative estimates of the number and rate of erroneous certifications (i.e., of participants) and denials (of applicants) and corresponding improper payments in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), both excluding and including rebates for infant formula and other foods;1 (b) information on State and local WIC agency and participant characteristics that may be associated with improper payments; (c) information on State and local WIC agencies' certification policies and procedures, caseloads, and other operations; (d) nationally representative descriptions of WIC participants' experiences with the WIC program, including certification procedures, services, and agency staff; and (e) information about factors that facilitate or hinder the retention of eligible WIC participants in the program.

NSWP-III will also pilot a method to produce annual updates of the estimates of the number and rate of case errors and the amount and rate of associated dollar errors. Table B1 summarizes, for each data collection, the universe, initial sample size, anticipated response rate, expected final sample size for analysis, and key design features. The overall expected response rate for the study is 76 percent (Table B1). More detailed descriptions of this information are provided below for each data collection.
Table B1. Respondent Universe, Initial Sample Sizes, Expected Response Rates, Final Sample Sizes, and Design Features for Each Data Collection

Data Collection | Respondent Universe | Initial Sample Size | Response Rate | Final Sample Size | Design Features
State Agency Survey | 90 State Agencies | 90 | 100% | 90 | Census of all State Agencies
State Agency Administrative Data | 20 State Agencies | 20 | 100% | 20 | Four-Stage Sample Selection of PSUs, Local Agencies, Clinics, and Participants
Local Agency Survey | 1,825 Local Agencies | 965 | 80% | 772 | Stratified Systematic Sample of Local Agencies
Certification Survey* | WIC Participants, recently certified | 2,000 | 80% | 1,600 | Four-Stage Sample Selection of PSUs, Local Agencies, Clinics, and Participants
Denied Applicant Survey** | Denied WIC Applicants | 240 | 80% | 192 | Four-Stage Sample Selection of PSUs, Local Agencies, Clinics, and Participants
Program Experiences Survey | WIC Participants, current | 2,500 | 80% | 2,000 | Cross-Sectional Samples of Participants Who Completed the Certification Survey and Additional Sample of Current WIC Participants
Former Participant Case Study*** | WIC Participants, former/inactive | 520 | 24% | 125 | Mixed Methods Approach to Identifying Former WIC Participants
Totals | | 6,335 | 76% | 4,799 |

Notes: * The universe of "recently certified" WIC participants, defined as WIC participants who were approved (or reapproved, also known as "recertified") for WIC participation no more than 6 weeks prior to the date of survey administration, is unknown. Sampling and data collection plans will identify WIC participants with certification dates 0 to 3 weeks prior to sample selection to allow time for recruitment and data collection.
** The initial sample size of denied WIC applicants depends largely on the number of records of formally denied applications retained by SAs and/or LAs in the study sample.
*** The number of former WIC participants in a given period is unknown; national estimates of the rate at which WIC participants discontinue are unavailable. The initial sample of former participants and the expected response rate reflect uncertainty in the number of WIC participants who have suspended participation (i.e., not redeemed benefits); some agencies terminate the certification periods of such participants after 1 to 2 months of non-redemption.
The study will incorporate a variety of mechanisms to ensure high response rates. Periodic reminders, via telephone, email, and text as applicable, will be sent to nonrespondents several times after the initial invitation (see Appendix G and Section B.3 for more details regarding the data collection procedures). In addition, the web surveys (State Agency Survey and Local Agency Survey) can be completed in more than one sitting, since answers are saved automatically. The surveys also will be programmed with skip patterns and auto-filling where applicable.
State Agency Survey. The respondent universe for the State Agency Survey is 90 State agency (SA) directors from the 50 States, the District of Columbia, 34 Indian Tribal Organizations (ITOs), and 5 U.S. Territories. A sampling frame of SAs will be constructed by obtaining contact information for each WIC director in each State, ITO, and Territory. A census of all 90 SAs will be conducted for the survey to enable comparisons of the potential effects of the different policies that each SA requires, or allows its LAs to follow, on improper payments. The responses from this census survey and the data from the most recent WIC Program and Participant Characteristics (PC) study will be used to produce subgroup estimates of error rates by State. A 91 percent response rate was achieved for the NSWP-II State Agency Survey, and a 100 percent response rate is expected for the NSWP-III State Agency Survey based on prior experience working with these directors in data collection for other projects.2
Local Agency Survey. The respondent universe for the Local Agency Survey is the approximately 1,825 local agencies (LAs) currently located in the 50 States, the District of Columbia, 34 ITOs, and 5 U.S. Territories. The 2017 WIC Local Agency Directory (WICLAD) will serve as the sampling frame for the Local Agency Survey. A nationally representative sample of 890 LA directors will be selected from the population of LAs using a stratified systematic sampling design. An 80 percent response rate is assumed for NSWP-III, which is the rate needed to satisfy the requirement for a margin of error of ±4.0 percent at the national level, and ±7.5 percent at the regional level, at a 95 percent level of confidence. This will result in 712 completed Local Agency Surveys.3 The NSWP-II Local Agency Survey achieved an 86 percent response rate; because the estimated burden to complete the survey is unchanged, NSWP-III may well exceed the anticipated 80 percent.
WIC Participant Certification Survey. The respondent universe for the Certification Survey is all WIC participants from the 48 contiguous States, the District of Columbia, and ITOs in these States, who were certified within the 6 weeks prior to the date targeted for in-person administration of the Certification Survey.4
Rationale for Excluding Alaska, Hawaii, and the Territories
The study limits the respondent universe to the 48 contiguous States, the District of Columbia, and ITOs in these States. Excluding Alaska (AK), Hawaii (HI), and the Territories is necessary to maintain (a) the feasibility of in-person data collection for the study and (b) the comparability of 2019 estimates to the 2009 estimates established by the National Survey of WIC Participants-II (NSWP-II). With the exception of Puerto Rico, the geographies omitted from the respondent universe represent a small fraction of the WIC population (see Table B2 below). Recent natural disasters in Puerto Rico and Hawaii further complicate the study's ability to collect in-person data in these locations. The State of Hawaii is comparable in size to a mid-size PSU, while the State of Alaska is smaller than the typical PSUs formed (albeit larger than the smallest State among the contiguous U.S., Wyoming). The U.S. territories of American Samoa, Guam, the Northern Mariana Islands, and the Virgin Islands are so small that some categories of WIC participants would be at risk of failing to meet the requisite sample sizes.
Table B2. WIC Participants in States and Territories Excluded from the Respondent Universe

State or Territory | % of All WIC Participants Nationally (Table I.1) | N of WIC Participants (Table A.II.1)
AK | 0.23% | 21,590
HI | 0.42% | 38,820
American Samoa | 0.07% | 6,906
Guam | 0.09% | 8,451
Northern Mariana Islands | 0.04% | 4,032
Puerto Rico | 1.93% | 179,092
Virgin Islands | 0.05% | 4,942
Total | 2.84% | 263,833

U.S. total WIC participants, 2014 (Table A.II.1): 9,303,253

Source: Thorn, B., Tadler, C., Huret, N., Trippe, C., Ayo, E., Mendelson, M., Patlan, K. L., Schwartz, G., & Tran, V. (2015). WIC Participant and Program Characteristics 2014. Prepared by Insight Policy Research under Contract No. AG-3198-C11-0010. Alexandria, VA: U.S. Department of Agriculture, Food and Nutrition Service.
Based on the figures in Table B2 above, and assuming that the error rates in the excluded areas are the same as in the sampled areas, a non-coverage correction factor for the totals can be obtained as 9,303,253/9,039,420 = 1.0292, where the numerator is the total WICPC 2014 count and the denominator is the WICPC 2014 count for the covered WIC population.
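Written out, the correction factor is simply the ratio of the full WICPC 2014 count to the count net of the areas excluded in Table B2:

\[
f = \frac{9{,}303{,}253}{9{,}303{,}253 - 263{,}833} = \frac{9{,}303{,}253}{9{,}039{,}420} \approx 1.0292
\]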
To determine whether the exclusion of these areas affects the statistical validity of the sampling and estimation plans (i.e., for the Certification and Denied Applicant Surveys), FNS will use data from the WIC State Agency Survey and the Local Agency Survey. The State Agency Survey will be a census of all State WIC agencies (and thus will include AK, HI, and the U.S. Territories); the Local Agency Survey sampling frame also includes all local agencies. These surveys will help FNS understand WIC program operational practices, including those that may affect error rates, throughout the nation, including the excluded areas. The results of these surveys will help inform FNS whether the error rates and structures should be expected to be the same in AK, HI, and the U.S. Territories as in other States.
In accordance with the criteria OMB established in its June 2018 "M-18-20 – Appendix C to Circular No. A-123, Requirements for Payment Integrity Improvement," the sampling methodology for the Year 1 Certification and Denied Applicant Surveys will produce "statistically valid (SV)" estimates of improper payments that represent the population of interest, which is the WIC population within the 48 contiguous States, the District of Columbia, and the ITOs in these States.5 The most recent National Survey of WIC Participants-II (NSWP-II; OMB 0584-0484, expired 6/30/2012), conducted in 2009, provided estimates of improper payments that were representative of WIC participants in these same geographic locations.
Sampling frames for recently certified WIC participants will be constructed at each stage during a four-stage sampling process, which is described in detail in section B2. A sampling frame of geographically proximate clusters of LAs (the primary sampling units [PSUs]) will be constructed for the first stage of sampling; sampling frames of LAs will be constructed for each selected PSU for the second stage of LA sampling; sampling frames of clinics will be constructed for each selected LA for the third stage of clinic sampling; and sampling frames of recently certified WIC participants will be constructed for each selected clinic for the fourth and final stage of participant sampling.
A sample of 2,000 recently certified WIC participants, representing the 48 contiguous U.S. States, the District of Columbia, and the ITOs in these States, will be selected using the four-stage sampling design described in section B2. With an expected response rate of 80 percent, 1,600 interviews will be completed, with 320 in each of the 5 WIC certification categories (pregnant women; breastfeeding women; non-breastfeeding postpartum women; infants; and children).6
Denied Applicant Survey. The respondent universe for the Denied Applicant Survey is all denied WIC applicants in the 48 contiguous States, the District of Columbia, and ITOs in these States. As with the Certification Survey, the sampling methods for the Denied Applicant Survey will produce "statistically valid (SV)" estimates of improper payments that represent the population of interest, namely denied WIC applicants within the 48 contiguous States, the District of Columbia, and the ITOs in these States. Sampling frames for the Denied Applicant Survey will be constructed at each stage of the same four-stage sampling process used for the Certification Survey, with one exception: at the final stage, sampling frames of denied applicants (rather than of recently certified participants) will be constructed from each of the same clinics used for the Certification Survey.
Based on the expected average WIC participant population size per PSU and the limited data on denial rates from NSWP-II, the researchers project that approximately eight denied applicants per PSU will be available for the Denied Applicant Survey.7 The researchers will attempt to interview all denied applicants in the sampled clinics from all sampled PSUs. The researchers project that a total of 240 denied applicants will be available. Based on an expected response rate of 80 percent, 192 completed interviews with denied applicants will be obtained.8

WIC Participant Program Experiences Survey. The respondent universe for the Program Experiences Survey is a statistically valid, representative cross-sectional sample of WIC participants from the 48 contiguous States and the District of Columbia, and all ITOs in these States.9 The sampling frames for the Program Experiences Survey will be drawn from (1) recently certified participants who completed the Certification Survey, and (2) currently active participants who were certified more than 6 weeks prior to the Program Experiences Survey.
A sample of 1,000 WIC participants who complete the Certification Survey will be selected for the Program Experiences Survey. Assuming an 80 percent response rate, an estimated 800 of these participants will also complete the Program Experiences Survey.
A separate sample of 1,500 WIC participants who were certified more than 6 weeks prior to the survey will be selected from the same sampling frame of LA clusters. An estimated 750 of these WIC participants are expected to complete the survey by telephone, and an estimated 450 are expected to complete the survey in person during follow-up visits to current WIC participants who did not respond by telephone. Additionally, as noted above, 1,000 of the 2,000 recently certified WIC participants from the Certification Survey sample will be asked to complete both the Certification Survey and the Program Experiences Survey in person, and an estimated 800 of them will complete the Program Experiences Survey after the Certification Survey. Therefore, the total sample for the Program Experiences Survey is 2,500 WIC participants and the anticipated number of responses is 2,000, yielding an overall response rate of 80 percent. A 50 percent response rate is typical for telephone-only surveys with this population, and that was the response rate for NSWP-II. NSWP-II did not provide incentives for the telephone surveys, however, whereas the current study proposes to provide $25 for completing the Program Experiences Survey either in person or by telephone. It is therefore anticipated that NSWP-III will achieve a higher response rate than NSWP-II.
Former WIC Participants Case Study. The respondent universe for the Former WIC Participants Case Study is all former/inactive WIC participants from the 48 contiguous States and the District of Columbia, and all ITOs within these States.10 The population of former WIC participants is estimated to be 1,097,783, based on data from the WIC Program and Participant Characteristics (PC) 2014 report. This estimate was calculated by comparing April 2014 FNS administrative data (8,205,701 food benefits claimed) with April 2014 PC participant data (9,303,253 certified enrollees), which found that approximately 88.2 percent of WIC participants claimed their monthly benefits11 and, therefore, an estimated 11.8 percent of participants (or 1,097,783) are considered inactive, meaning that they have stopped picking up their WIC food benefits or reloading their WIC EBT cards.12 It should be noted that 1,097,783 is only an estimate of all former WIC participants; rates of picking up food benefits fluctuate monthly, and the WIC PC 2014 report only provided data for one month. Expert panel members, who serve as consultants to the NSWP-III study, confirmed that a 15 to 20 percent nonparticipation ratio is fairly standard across SAs.13
The sample for the Former WIC Participants Case Study will consist of 520 former WIC participants; approximately 30 percent of the sample is expected to respond, and approximately 80 percent of those responding are expected to qualify for the case study, as determined through screening questions. With a combined response and qualification rate of 24 percent, an estimated 125 former WIC participants will participate in the Former WIC Participants Case Study interviews.14 The steps utilized to maximize the numbers of respondents for the Former WIC Participants Case Study, including the use of incentives, are described in Section B.3.
Administrative Data. Administrative data will be collected from the 20 States and two ITOs in the PSUs selected for sampling WIC participants (see Section B2 for more details). The research team will request State administrative data for WIC participants certified within a given reference period, including their most recent certification dates and participant categories, along with address, telephone, and other contact information. These data elements will be used for final sample selection as well as for contacting selected participants. Consistent with the approach used in NSWP-II, the research team will provide estimates of both pre- and post-infant formula rebate certification and dollar error rates. State redemption and rebate data will be used for this purpose. The State Agency Survey will include a section that examines rebates for infant foods (such as infant cereal). Data collected from these sources will be used to calculate a national estimate of annual improper payments, including specific data on items that are not infant formula rebates. It is estimated that all 20 States will provide the requested data, representing a response rate of 100 percent.
Pilot of Alternative Methodology for Annual Updates. Clearance is requested for data collection in Year 2 to field test a new method to produce annually updated estimates of improper payments. We refer to this new method as an “annual rotating panel design.”
The annual rotating panel design will generate annual updates to the improper payment estimates for all years into the future, replacing the current combination of decennial "bookend" studies and model-based "aged" estimates for the intervening years. During the phase-in period of the new design, it will generate annual estimates each year from Year 2 through Year 10, by gradually replacing samples from which data were collected in Year 1 with more recent data. Once the phase-in is complete, in Year 11, annual estimates will no longer be based in any manner on the Year 1 data from NSWP-III, which would become the last bookend study. The new method will collect data through a new administration of the Certification and Denied Applicant Surveys to generate updated estimates for Year 2.
The major anticipated benefit of the new method is intra-decade sensitivity to systematic changes in error rates caused by changes in eligibility rules, policies of programs whose participants are adjunctively income eligible (such as Medicaid and SNAP), and/or economic trends affecting household income (for example, if Congress mandated work requirements as an eligibility criterion for WIC). Unlike the current reliance on "bookend" studies, the new method removes the risk of an abrupt change in estimated improper payment rates once every decade.
FNS will decide whether to continue this approach in future years by considering the following:
The expected costs of the annual rotating panel design (both short-term and expected costs over a ten-year period) relative to the expected costs of the current approach (i.e., estimates based on once-per-decade data collection, with estimates for intervening years produced by applying an aging model to update the decennial estimates);
The extent to which estimates predicted by the aging model for a given year are biased (that is, diverge from estimates derived from the data collected in 2009 (NSWP-II) and Year 1 (NSWP-III)); and
Logistical or operational advantages and disadvantages for FNS to implement an annual rotating panel design (via oversight of a contractor) relative to implementing the current approach.
The main objective of the pilot (which is perhaps more accurately described as a feasibility study) is to determine the feasibility of the new approach and its expected costs, including data collection, analysis, and project management. Maintaining a program of recurring data collection could simplify procurement, oversight, and obtaining data from States for sampling. Differences in expected costs between the current and alternative methods will depend on trade-offs between the additional cost of starting up data collection annually instead of decennially and any economies realized by conducting the data collection and analysis more frequently. Note that even if costs are higher with the new method, the fact that they would be nearly constant across the years in a decade would simplify FNS budgeting.
In addition to cost analyses, an important consideration in FNS's decision on whether to switch methods will be the results of the analysis of the bias of aging methods. This bias can be measured both by the difference between aged estimates for 2019 (using the current aging model based on NSWP-II data) and actual 2019 estimates (Year 1 of the currently proposed data collection), and by the difference between NSWP-II estimates and backward-aged estimates for 2009 based on the aging model resulting from the Year 1 effort. If these biases are large relative to the estimated variances of the unbiased estimates in the bookend years, that will be a powerful argument to transition to the new approach. While there is no way to measure the bias in the annual rotating panel design, the average of ten consecutive annual estimates (once the new design is completely phased in) will be an unbiased estimate of the decade-long average improper payment rates. It seems reasonable to prefer a method that unbiasedly estimates long-term improper payment rates over a method that is biased nine years out of ten. Additionally, in contrast to the aging method, unbiased variance estimation will be straightforward with the annual rotating panel design. FNS will submit a new clearance request if it decides to make this transition. Any such request would include a detailed report on biases in the aging methods and projected changes in decade-long costs.
The “Next Decade Update” sample for the annual rotating panel design will be a new set of 30 PSUs that would cover Years 2-10 if FNS decides to transition to the new design and OMB approves that decision. At this time, FNS is only requesting clearance to collect data in three of these 30 PSUs in each of Years 2 and 3, but the long-term proposal for the alternative method is described below. The sample for the new design will consist of 30 PSUs selected from the same stratified frame as the 30 PSUs in the sample for the Year 1 data collection. Both the 30 PSUs in the sample for Year 1 data collection and the Next Decade Update PSUs will be arranged in 10 panels of three PSUs each. Each year, one of the panels of the new set of PSUs will be rotated in, and one of the panels from the old set will be rotated out. Current data will be collected from recently certified WIC participants and denied applicants selected from the panel of three new PSUs (the selection of these respondents will follow the same sampling methods used for the primary data collection). The current data from the new PSUs and the old data from the other nine panels will be combined to produce partially updated national estimates, as shown in equation 1, where the first subscript indicates the year of data collection and the second subscript indicates which of the 10 panels is used to collect fresh data in the referenced year.
\[
\hat{Y}_{\text{Year 2}} = \frac{1}{10}\left(\hat{y}_{2,1} + \sum_{p=2}^{10}\hat{y}_{1,p}\right) \tag{1}
\]
Note that in Year 2, 90 percent of the data will be from Year 1 and 10 percent from Year 2, but that by Year 9, 90 percent of the data will be newer than Year 1, and that by Year 10, all of the Year 1 data will have been phased out. In each year of this rotating panel design, FNS will consider aging the Year 1 results (i.e., the results from panels that have not yet been rotated out) to account for any state-level changes in the profile of WIC participants and applicants before combining them with post-Year 1 updates (that is, updates derived from each year’s newly-rotated-in panel). FNS will also consider the quality of the aging estimates in deciding whether to weight more recent annual data more heavily than older annual data, or whether to assign equal weight to recent and older data. For 2030 and beyond, the annual estimates will be formed by averaging the most recent ten years of data collection with no aging adjustments. The averaging could be unequal with heavier weight for more recent data. By 2040, it may be possible to use time-series methods to further smooth annual updates.
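As an illustration of the combination rule, the following minimal sketch (in Python) computes the equal-weight estimate for a given update year. It assumes that panel p collects fresh data in year p + 1 (one reading of the rotation schedule described above, matching equation (1) for Year 2), and it applies no aging adjustment or unequal weighting; all inputs are hypothetical.

```python
def updated_estimate(update_year, panel_estimates):
    """Equal-weight combination of the 10 panel estimates for one update year.

    panel_estimates[(t, p)] is the estimate from panel p using data collected
    in year t. Sketch assumption: panel p is rotated in (collects fresh data)
    in year p + 1, so in Year 2 only panel 1 carries fresh data.
    """
    total = 0.0
    for p in range(1, 11):
        t = p + 1 if p + 1 <= update_year else 1  # fresh data if already rotated in
        total += panel_estimates[(t, p)]
    return total / 10.0

# Hypothetical Year 2 inputs: fresh data in panel 1, Year 1 data in panels 2-10
est = {(2, 1): 0.041, **{(1, p): 0.050 for p in range(2, 11)}}
print(updated_estimate(2, est))  # (0.041 + 9 * 0.050) / 10 = 0.0491
```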
For this “Next Decade Update,” the respondent universe will consist of recently certified WIC participants and recently denied WIC applicants. The initial sample size will be the same for each of the respondent types for each year: 200 recently certified WIC participants (i.e., one-tenth of the initial sample size of 2,000 in the primary data collection) and 24 recently denied WIC applicants (i.e., one-tenth of the anticipated census of 240 recently denied WIC applicants in the primary data collection).15 Assuming an 80 percent response rate, approximately 160 recently certified WIC participants and 19 recently denied WIC applicants are expected to complete the Certification Survey and Denied Applicant Survey, respectively.16
This method for providing annual updates to improper payment estimates does not cover the entire population each year, and its estimates are based on data collected over multiple years. Instead, each year, the annual update is calculated as a weighted estimate in which one-tenth of the original Year 1 sample is replaced with new units from a matched panel of PSUs.
B.2 Statistical Methods for Sample Selection and Degree of Accuracy Needed

This section addresses, for each data collection activity:

Statistical methodology for stratification and sample selection
Estimation procedure
Degree of accuracy needed for the purpose described in the justification
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
A brief overview of the sampling methods for each data collection activity is described in Section B1. This section provides details on the statistical methodology, where applicable, for carrying out the sampling procedures and selection of the initial sample for each data collection activity.
State Agency Survey. The sampling methodology for the State Agency Survey is described in Section B1. A census of all SAs from 50 States, the District of Columbia, the 34 ITOs, and the 5 U.S. Territories will be conducted.
Local Agency Survey. As described in Section B1, the Local Agency Survey will use the 2017 WICLAD as the sampling frame. LAs will be stratified by FNS region to assure proportionate representation of these regions and to facilitate analysis by FNS region. Within each FNS region, LAs will be selected systematically after sorting the LAs by the urban/rural classification of the county in which the LA is located. The Census categories used to define urban/rural are "completely rural," "mostly rural," and "mostly urban."17 Total population size from the most recent Census is also included as a selection variable to ensure a nationally representative sample of LAs, resulting in a total of 890 LAs selected for data collection.
Certification Survey. Table B3 summarizes the key features of each of the four stages of sampling for the Certification Survey. The LAs were grouped into 215 PSUs so that each PSU was nested within a State, included at least 1,200 WIC participants across all the LAs in that PSU, and was geographically compact, to keep interviewer travel time under 2 hours when possible. The PSUs were then stratified by FNS region, with substrata in the Southeast, Midwest, Southwest, and Western regions. The sub-stratification ensures that each stratum has only two or three PSUs selected and that the geographic representation of the sample is more balanced, avoiding the possibility that the PSUs selected from one region would all come from just one or two States in that region (e.g., the chance that the sample of PSUs in the Southeast region would include only PSUs from the Carolinas, or only from Florida). Using a greater number of strata also allows for additional precision gains, because it better controls for variability between States and yields a more predictable caseload (i.e., by reducing the total number of States represented in the sample).
Table B3. Key Features of Each Sampling Stage for the Recently Certified WIC Participant Certification Survey

Sampling Stage | Stratification | Sampling Units | Sampling Method | Measure of Size
1 | FNS region/subregion + 1 stratum of ITOs (a) | Clusters of LAs (c) (30 from States + 2 ITOs) | High-entropy random systematic Probability Proportional to Size (PPS) | Average of the nationwide proportion of WIC participants in each of the 5 WIC participation categories, calculated within each cluster of LAs
2 | None | LAs within clusters (1–2 LAs per cluster, up to 60 LAs in the contiguous States + 4 LAs in ITOs) | High-entropy random systematic PPS | Average of the proportion of participants in each of the 5 categories found in the agency within cluster or State
3 | None | Service delivery sites (clinics) within agencies (2 per LA, or 1 if only 1 exists within the LA) | High-entropy random systematic PPS | Average of the proportion of participants in each of the 5 categories found in the clinic within agency
4 | WIC participant certification categories (b) | WIC participants within service delivery sites (d) (2,000 in initial sample) (e) | Stratified simple random sample without replacement |

a. For sampling purposes, the three ITOs in the Northeast region and the two ITOs in the Southeast region will be considered geographical equivalents of LAs within the States where they are located. The ITOs within AZ, CO, ND, NE, NM, NV, OK, SD, and WY are included in a separate ITO stratum.
b. There are five certification categories: pregnant women, breastfeeding women, non-breastfeeding postpartum women, infants, and children.
c. Clusters of LAs are geographically adjacent LAs, where cluster boundaries respect State boundaries.
d. Replacement from the clinic list in case of unit nonresponse.
e. The initial sample assumes an 80 percent (or higher) response rate to produce a final sample size of 1,600: 1,500 from the non-ITO WIC population and 100 from the sampled ITOs.
In the first stage of sampling, 30 PSUs were selected using a high-entropy randomized PPS method;18 the measure of size (MOS) was based on the number of WIC participants in the LAs in a PSU relative to the number of WIC participants in the stratum. To illustrate this calculation, Table B4 shows the WIC population for the State of Illinois and the Midwest Region (in which Illinois is included), by participant category. The percentages in the third row sum to approximately 1.16, or 116 percent; this sum was multiplied by 20 (equivalently, averaged over the 5 categories and rescaled to a 0–100 range) so that the MOS of the region as a whole is 100. The normalized measure of size for the State of Illinois is then 23.16, the interpretation being that, averaged across the 5 categories, the WIC population of the State of Illinois is about 23.16 percent of the total WIC population of the Midwest Region.
Table B4. Example Showing Method of Determining Measure of Size for the First Stage of the Sampling for Recently Certified WIC Participants

State/Region | Pregnant Women | Breastfeeding Women | Postpartum Women | Infants | Children | Measure of Size
Illinois, Counts | 26,680.43 | 16,764.00 | 17,174.71 | 69,118 | 123,553 |
Midwest Region, Total Counts | 110,774.43 | 62,787.00 | 87,908 | 286,893.14 | 578,196 | 100
Illinois, % of Region | 24.09% | 26.70% | 19.54% | 24.09% | 21.37% | 23.16
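Using the Table B4 figures, the Illinois MOS is the average of the five category shares, expressed on a scale where the region totals 100:

\[
\mathrm{MOS}_{\mathrm{IL}} = \frac{100}{5}\left(\frac{26{,}680.43}{110{,}774.43} + \frac{16{,}764}{62{,}787} + \frac{17{,}174.71}{87{,}908} + \frac{69{,}118}{286{,}893.14} + \frac{123{,}553}{578{,}196}\right) \approx 20 \times 1.158 \approx 23.16
\]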
The results of the first stage of sampling are shown in Table B5. The number of PSUs within each State is determined so that the sizes of the PSUs are roughly similar across States, and each PSU covers a population of about 25,000–50,000 WIC participants (in the Table B4 example, the 98 LAs in the State of Illinois, with a total WIC population of 253,291, can be clustered into 6 to 8 PSUs for the first-stage sampling).19 Using these large clusters of LAs as PSUs (instead of simply using LAs as PSUs) makes the PSUs more similar in size than LAs and reduces the number of States that have at least one sample PSU drawn from within them.
A high-entropy design was used in drawing samples of PSUs with PPS from each stratum.20,21,22,23 One common implementation of this method is a random systematic sampling design, in which systematic PPS samples are taken from a randomly ordered list of PSUs, similar to the method used in NSWP-II. Using a simulation of this sampling approach, the number of times one or more PSUs were selected from a given State was estimated (Table B5). Although one would expect fewer than 20 States to have 1 or more PSUs selected under this design, it is estimated that a total of 23 or 24 States will be represented, which helps inform the plan for data collection.
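A minimal sketch (in Python) of the random systematic PPS draw described above; the PSU identifiers and measures of size are hypothetical:

```python
import random

def random_systematic_pps(frame, n_draws):
    """Systematic PPS sample taken from a randomly ordered list (high entropy).

    frame: list of (unit_id, measure_of_size) pairs.
    Returns the units whose cumulative-size intervals contain selection points.
    """
    units = frame[:]
    random.shuffle(units)                      # random ordering -> high entropy
    total = sum(mos for _, mos in units)
    step = total / n_draws
    start = random.uniform(0, step)
    points = [start + i * step for i in range(n_draws)]

    sample, cum, j = [], 0.0, 0
    for unit_id, mos in units:
        cum += mos
        while j < len(points) and points[j] < cum:
            sample.append(unit_id)             # a unit may repeat if mos > step
            j += 1
    return sample

# Hypothetical stratum: 23 PSUs with made-up sizes; draw 3 (cf. the Northeast row of Table B5)
psus = [(f"PSU-{i:02d}", random.uniform(0.5, 3.0)) for i in range(1, 24)]
print(random_systematic_pps(psus, 3))
```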
From the 30 PSUs that are selected in Stage 1, up to two LAs per PSU (excluding ITO PSUs) and up to two clinics per LA will be selected in Stages 2 and 3, respectively. The researchers will determine, based on the organization, size, and geographic characteristics of the ITOs selected in Stage 1, whether or not to conduct Stages 2 and 3 sampling within the two ITOs selected.
Once States, LAs, and clinics are recruited, an initial sample of 12 certified WIC participants will be selected per certification category, along with some reserve units (on average there will be 0.5 reserve units per category). With 5 certification categories and 30 PSUs, this yields an initial sample of 1,875 participants from the general population. From within each of the 2 ITOs serving as PSUs, an initial sample of 12 certified WIC participants will be selected per category, along with some reserve units (on average 0.5 reserve units per category) yielding an additional 125 participants served by ITOs. The total initial sample size of 2,000 is the sum of the initial sample from the general population (1,875) and the ITOs (125).
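The initial sample size follows directly from the per-category allocation of 12 selections plus an average of 0.5 reserve units:

\[
12.5 \times 5 \times 30 = 1{,}875, \qquad 12.5 \times 5 \times 2 = 125, \qquad 1{,}875 + 125 = 2{,}000.
\]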
Table B5. Allocation of Primary Sampling Units for Sample of Recently Certified WIC Participants in the U.S.

Region | Measure of Size within U.S. | # of PSUs Formed | # of PSUs Selected | Number of States Represented in the Selected PSUs
Northeast Region | 9.10% | 23 | 3 | 2
Mid-Atlantic Region | 9.92% | 22 | 3 | 2
Southeast Region | 21.51% | 42 | 6 | 5
Midwest Region | 14.58% | 35 | 5 | 3
Southwest Region | 17.20% | 28 | 5 | 3
Mountain Plains Region | 6.78% | 22 | 2 | 2
Western Region | 22.96% | 43 | 6 | 3
TOTAL | 100% | 215 | 30 | 20
Denied Applicant Survey. The sampling design of the Denied Applicant Survey mirrors the sampling design of the Certification Survey and is stratified at the first stage by FNS Region to assure proportionate representation of these regions. Within each of the five WIC participant categories, the proportion of the PSU population relative to the region’s covered population will be computed, and the five proportions added up to form the PSU MOS. Similar calculations will be done at the regional level to produce an MOS of the region within the nation. The sample of denied applicants will be drawn from the same LAs used to construct the sampling frame for the Certification Survey. Stages 1–3 of the sampling procedures for the Denied Applicant Survey are therefore identical to the Stages 1–3 of the sampling procedures for the Certification Survey.
To concentrate the field effort for collecting Denied Applicant Survey data within an efficient sample of LAs, the researchers will select the sample of denied applicants from within the sample of clinics selected in sampling Stage 3 for the Certification Survey, and will conduct the Denied Applicant Survey in the same field period. The sample for the Denied Applicant Survey will be the maximum feasible sample within the constraints of the sampling design for the Certification Survey. Based on the expected average WIC participant population size per PSU and the limited data on denial rates from NSWP-II, the researchers project that approximately eight denied applicants per PSU will be available for the Denied Applicant Survey. All denied applicants will be selected for the survey, and an initial sample of 240 applicants is expected.
Program Experiences Survey. The Program Experiences Survey will be a statistically valid sample of 2,500 WIC participants representative of WIC participants in the 48 contiguous States and the District of Columbia, and all ITOs in these States. The sample design will mirror the geographic stratification of the Certification Survey. The sample will then be stratified by certification date in relation to the date of the interview: (1) 1,000 of those certified within the past 6 weeks (interviewed in person after completing the Certification Survey), and (2) 1,500 of those certified at least 6 weeks prior to the start of data collection (interviewed by phone or in person). Additional details about the Program Experiences Survey sample are provided in Section B1.
Former Participants Case Study. The sample for the Former WIC Participants Case Study will consist of 520 former WIC participants, identified in two ways. First, two sequential periods of the redemption/certification data obtained for the Certification Survey will be compared: participants redeeming in one month who are eligible for benefits in the second month, but who are not associated with redemptions in that month, will be targeted for the survey.24 The second, and preferred, method will be used in those States whose management information system (MIS) can identify former participants who were terminated before the end of their certification period. The MIS in some of the selected States will be able to identify inactive participants who are eligible to receive benefits but who failed to pick up their food benefits and/or failed to redeem food benefits during specified periods.
Since this is a case study, a nationally representative sample of former participants is not needed. A case study was selected as the best way to collect in-depth information to better understand the barriers and facilitators to WIC program retention. A guided, qualitative interview approach is used to gain a more in-depth understanding of former participants' experiences; the interview will be designed to encourage elaboration and detailed responses. A mix of methods for identifying potential respondents is therefore acceptable, and this sample is best described as a case study sample. Although not representative, the sample will be drawn with an attempt to obtain a diverse mix of inactive participants in terms of race/ethnicity, rural and urban areas, and other demographic characteristics. Identification and surveying of former participants will proceed until the threshold of 125 respondents is reached. Assuming the sample exists in sufficient quantities, approximately 55–70 participants will be interviewed from those selected using each of the two identification methods.
The interview guide developed for the Former WIC Participant Case Study interviews will include screening questions to eliminate potential respondents who no longer meet the eligibility criteria to participate in the WIC program within the same State. Until FNS has the opportunity to examine the data, it is unknown how many participants could fall into the eligible, inactive participant designation using these criteria. However, it is estimated that, on average, 15 to 20 percent of a State's eligible clients are not participating at any given time.25 An assumption of 15 percent would require 3,467 records to produce 520 participants eligible for this survey. Between a relatively low response rate (the propensity of eligible non-participants to respond is assumed to be lower than that of participants who are receiving benefits) and the potential to be screened out as ineligible (20 percent), the 520 records are estimated to result in 125 responses.
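The record counts follow directly from the stated rates:

\[
520 / 0.15 \approx 3{,}467 \text{ records}, \qquad 520 \times 0.30 \times 0.80 \approx 125 \text{ completed interviews}.
\]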
Pilot of Alternative Methodology for Annual Updates. The sampling methodology for the Pilot of Alternative Methodology for Annual Updates will generally follow the procedures described in the Certification Survey and Denied Applicant Survey sections above. The sample for the pilot, referred to as the Next Decade Update sample, will have 30 sampled PSUs selected from the 48 contiguous States and the District of Columbia, the same as the sample for the Year 1 Certification Survey. These 30 Next Decade Update PSUs will have been drawn from the same stratified frame as the 30 PSUs for Year 1. The PSUs in each stratum will be randomly re-sorted before drawing the Next Decade Update systematic sample.

The Next Decade Update PSUs will be arranged in 10 panels of 3 PSUs each. The allocation of the PSUs into panels will depend on the particular sample drawn, with the purpose of reducing the burden on the States in terms of obtaining their administrative data. In other words, large States (e.g., California, Texas, Florida, New York) that are expected to be sampled more than once will be spaced out evenly throughout the 10-year period. The complementary PSUs from smaller States will be allocated in a way that assures that each year's sample covers three different FNS regions. This can be implemented by sorting the selected PSUs by region, State, and a PSU-level characteristic such as poverty rate, and drawing a highly regular systematic sample with a sampling interval of 10 units, with the starting points selected so that the States sampled more than once have the desired pattern of survey requests in the update decade.

Each year, one of the panels of the new set of PSUs will be rotated in, and one of the panels from the old set will be rotated out. LAs and clinics will be selected within the sample PSUs. Then the States in the selected PSUs (up to three) will be contacted to obtain participant lists (i.e., certification data), and the LAs or clinics will be contacted to obtain lists of denied applicants.
The Next Decade Update will sample the ITOs over the decade in the same way as the cross-sectional Year 1 sample. The few ITOs that are considered LAs are eligible for sampling if the PSU where they are located is selected, and they will remain in the sampling frame for the 48 contiguous States and the District of Columbia. All other ITOs in the Southwest, Mountain Plains, and Western regions that form the ITO stratum for the Year 1 sample will continue to be in that stratum. A Next Decade Update sample of one ITO will be drawn every 5 years if FNS needs to update the error estimates for ITOs, producing two additional PSUs over the decade.
This approach will spread the burden of participating in the Certification Survey both across States and over time. States and PSUs will have the same odds of being selected for the Next Decade Update sample as they have for the Year 1 sample. Each PSU selected for the Next Decade Update sample will have interviews conducted in only one of the 10 years. If any PSUs are selected for both the Next Decade Update sample and the Year 1 sample, a complementary sample of the LAs will be drawn to reduce the burden on the LAs that have already responded in the Year 1 survey.26 As a result, only the largest LAs will have a significant likelihood of being selected for both samples. Overlap of the PSUs is a random event, and its likelihood can be estimated as follows: with a PSU-level sampling rate of about 30:200, the probability that any given PSU is drawn in both samples is about 2.25 percent, and the probability that at least one PSU is drawn more than once among the total of 60 PSUs drawn is about 75 percent.
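A minimal sketch (in Python) reproducing the quoted overlap figures under an independence approximation:

```python
p_draw = 30 / 200                    # approximate PSU-level sampling rate
p_both = p_draw ** 2                 # a given PSU drawn in both samples: ~2.25%
p_any = 1 - (1 - p_both) ** 60       # at least one repeat among the 60 drawn PSUs
print(f"{p_both:.4f} {p_any:.3f}")   # 0.0225 0.745
```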
Estimation Procedures. Prior to the analysis to address the research objectives, data from samples intended to produce representative estimates must be properly weighted. NSWP-III will produce estimates representative of the 48 contiguous States, Washington DC, and the ITOs for the following:
Case error rates and associated dollar error amounts and rates (based on data collected from the sampled WIC participants and denied applicants who complete the Certification and Denied Applicant Surveys)
Key indicators of WIC participants’ satisfaction with WIC staff, certification procedures, rates of participation in other assistance programs, and rates of different levels of food security (based on data from the Program Experiences Survey)
Key aspects of LA policies, operations, and practices (from the Local Agency Survey)27
The other samples in the study will not be weighted. The State Agency Survey is a census of those agencies; weighting is, therefore, not appropriate. A nonresponse adjustment could be implemented if the researchers do not get the full census; however, a 100 percent response rate from SAs is anticipated (and there is no indication that a nonresponse adjustment was used in NSWP–II for the SA census). The interviews with former WIC participants for the case study are not a representative sample, and will not be weighted.
Published reports of the estimates of improper payments will include detailed descriptions of the procedures used to produce these estimates as well as their associated variance and precision level(s). Planned estimation procedures are described below.
Weighting and Estimation for the Certification, Denied Applicant, and Program Experiences Surveys. Estimates of improper payments for the 48 contiguous States, Washington, DC, and the ITOs will be calculated for Year 1 using data from the Certification and Denied Applicant Surveys. Prior to analysis of these surveys, baseline weights will be computed by multiplying the inverse probabilities of selection at each stage. The first-stage probability of selection is the ratio of the PSU's MOS to the sum of the MOSs of all PSUs within the FNS region; the second-stage probability is the ratio of the LA's MOS to the sum of the MOSs of all LAs within the PSU.
The Program Experiences Survey uses a stratified sampling design that draws a combined sample from two sources: a subsample of the participants in the Certification Survey, and a sample of participants who have been in the program for more than 6 weeks. Strata are defined by FNS region and eligibility for the Certification Survey. The probability of selection into the Program Experiences Survey from the Certification Survey is equal to the probability of selection into the Certification Survey, multiplied by the rate of subsampling into the Program Experiences Survey. The second-stage probability of selection into the Program Experiences Survey from the balance of the population will be computed as the ratio of the number of participants sampled from the PSU to the number of participants in the PSU certified more than 6 weeks before the start of the field period. Multiplying this by the PSU's probability of selection gives the overall probability of selection into the Program Experiences Survey for these participants. The base weights for the Program Experiences Survey are the inverses of these probabilities of selection.
Nonresponse adjustments will be applied within cells defined by participant categories and other frame data. Weights within each region and category of WIC participants will be scaled to ensure that they sum up to the population size of the region from which the PSU was drawn.
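The following sketch illustrates a standard weighting-class nonresponse adjustment of the kind described above; the cells and weights are hypothetical, not study data:

```python
# Sketch: weighting-class nonresponse adjustment. Within each cell,
# respondents' weights are inflated so that the respondent total matches
# the full-sample base-weighted total for the cell.
from collections import defaultdict

sample = [  # (cell, base_weight, responded)
    ("NE-pregnant", 120.0, True),
    ("NE-pregnant", 110.0, False),
    ("NE-pregnant", 130.0, True),
    ("NE-infant", 90.0, True),
]

total = defaultdict(float)
resp = defaultdict(float)
for cell, w, responded in sample:
    total[cell] += w
    if responded:
        resp[cell] += w

adjusted = [(cell, w * total[cell] / resp[cell])
            for cell, w, responded in sample if responded]
print(adjusted)  # NE-pregnant weights scaled by 360 / 250 = 1.44
```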
Estimation procedures must account for the complex sampling design.28,29 Variances will be estimated using Taylor linearization and jackknife variance estimation methods.30 In addition, using specially designed software,31,32 the researchers will create replicate weights that incorporate the required corrections for unequal probabilities of selection, the equivalents of the unequal finite population corrections (FPCs) discussed by Preston and Henderson (2007), and raking or post-stratification calibration to the known population margins.33 These methods are designed to produce the most accurate feasible estimates of the sampling variances of the sample estimates.
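A delete-one-PSU jackknife of the general kind cited above can be sketched as follows; this is a simplified illustration with made-up data, and the production replicate weights would additionally carry the nonresponse and calibration adjustments:

```python
# Sketch: stratified delete-one-PSU (JKn) jackknife for a weighted
# error-rate estimate. Data are illustrative, not study records.
import numpy as np

strata = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # stratum of each respondent
psus   = np.array([0, 0, 1, 1, 2, 2, 3, 3])   # PSU of each respondent
w      = np.array([10., 12., 9., 11., 8., 10., 12., 9.])
y      = np.array([0, 1, 0, 0, 1, 0, 0, 1])   # 0/1 error indicator

def est(weights):
    return np.sum(weights * y) / np.sum(weights)   # weighted error rate

theta = est(w)
var = 0.0
for h in np.unique(strata):
    psu_h = np.unique(psus[strata == h])
    n_h = len(psu_h)
    for j in psu_h:
        wr = w.copy()
        wr[psus == j] = 0.0                        # drop PSU j ...
        keep = (strata == h) & (psus != j)
        wr[keep] *= n_h / (n_h - 1)                # ... reweight its stratum
        var += (n_h - 1) / n_h * (est(wr) - theta) ** 2

print(f"estimate = {theta:.3f}, jackknife SE = {np.sqrt(var):.3f}")
```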
Weighting and Estimation for the Local Agency Survey. Two weights will be constructed for the sample of LAs. The sampling weight will be calculated as the inverse of the probability of selection. The final weight will adjust for nonresponse in each regional stratum. A bootstrap replicate variance estimation method,34 modified for high-entropy sampling procedures,35,36 will be used to estimate variances,37 as discussed in more detail above. As in NSWP–II, the weighted data will support estimates of (1) the percentage of LAs that provide each type of service, and (2) the percentage of participants who receive each category of benefits.
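For the LA survey, the Rao-Wu rescaling bootstrap referenced above can be sketched as follows; this is illustrative only (it assumes at least two sampled units per stratum), and the cited modifications for high-entropy PPS sampling are omitted:

```python
# Sketch: Rao-Wu rescaling bootstrap for a stratified sample. Within each
# stratum of n_h units, resample n_h - 1 units with replacement and scale
# each unit's weight by (n_h / (n_h - 1)) x (times selected), so replicate
# weights average back to the full-sample weights.
import numpy as np

rng = np.random.default_rng(12345)

def bootstrap_weights(stratum_ids, weights, n_replicates=500):
    reps = np.empty((n_replicates, len(weights)))
    for r in range(n_replicates):
        factor = np.zeros(len(weights))
        for h in np.unique(stratum_ids):
            idx = np.flatnonzero(stratum_ids == h)
            n_h = len(idx)
            draws = rng.choice(idx, size=n_h - 1, replace=True)
            counts = np.bincount(draws, minlength=len(weights))[idx]
            factor[idx] = n_h / (n_h - 1) * counts
        reps[r] = weights * factor
    # The variance of a statistic is the variance of that statistic
    # computed across the replicate weight sets.
    return reps
```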
Weighting and Estimation for the Alternative Methodology for Annual Updates. The researchers will compute estimates of improper payments for Year 2 using pooled survey data from Year 1 and Year 2, with the 2 years' data weighted according to the methodology described above. One option for estimation will be to simply rerun the Year 1 estimation procedures on the dataset obtained by adding interviewed post-Year 1 panels to the Year 1 sample and removing the matched Year 1 panels. Because the two datasets share the same structure, the same software and code could be run without custom modifications. Alternatively, FNS will consider aging the Year 1 sample using a model that incorporates Year 2 State participant counts by category and/or weighting the more recent samples slightly more heavily.
State Agency Survey. A census of SAs will be selected for the State Agency Survey and a response rate of 100 percent is expected, so no degree of accuracy calculation is required.
Local Agency Survey. Selecting a sample of 890 LAs (and obtaining 712 completes, assuming an 80 percent response rate) will achieve the precision targets for nationally representative estimates (95 percent confidence intervals of +/- 4 percentage points or less) and for region-level estimates (95 percent confidence intervals of +/- 7.5 percentage points or less). The precision calculations assume a conservative 50 percent response distribution for a binary outcome and apply a finite population correction (FPC) within strata; a worked check of the national target appears after Table B6. Table B6 shows the allocation of the LA population and sample over FNS region. To facilitate analysis by FNS region, and by location within a mostly urban, mostly rural, or completely rural county, the sample will be selected from strata defined by FNS region, with location used as a sorting variable to select units systematically within the strata.
Table B6. Allocation of the Population and Sample Local Agencies over FNS Region

Region | Population | Sample
Northeast Region | 108 | 85
Mid-Atlantic Region | 423 | 157
Southeast Region | 410 | 156
Midwest Region | 184 | 114
Southwest Region | 269 | 134
Mountain Plains | 189 | 115
Western Region | 242 | 129
TOTAL | 1,825 | 890
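As a check on the national precision target, the following minimal sketch applies the stated assumptions (712 completes, a 50 percent response distribution, and an FPC based on the 1,825-LA population); region-level checks would substitute the regional counts from Table B6:

```python
# National-level precision check for the Local Agency Survey under the
# stated assumptions.
import math

N, n, p, z = 1825, 712, 0.5, 1.96
fpc = 1 - n / N                                   # finite population correction
half_width = z * math.sqrt(p * (1 - p) / n * fpc)
print(f"95% CI half-width: +/-{100 * half_width:.1f} points")  # ~2.9 <= 4.0
```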
Assuming an 80 percent response rate, the survey will initially be distributed to 890 LAs. With permission from FNS, the researchers will draw this sample from the most recent WICLAD that FNS can provide.38 SA data on the monthly number of participants by category for 2014 will also be needed to calculate the measures of size required by the PPS sampling method discussed below.
Certification Survey. Sampling for the Certification Survey is designed to provide estimates of case and dollar error rates that meet the required level of precision, namely a 90 percent confidence interval of ±3.5 percentage points, based on an expected error rate of 10 percent for each of the 5 categories of WIC participants (pregnant women, breastfeeding women, non-breastfeeding postpartum women, infants, and children aged 1 to 5 years). To meet this requirement, the sample design provides an effective sample size of 216 in each of the 5 certification categories (using a t-distribution with 24 degrees of freedom to construct the confidence interval) in the WIC population from the non-ITOs (sample size calculations for ITOs follow below).39 To meet the M-18-20 IPERIA precision requirement for a statistically valid and rigorous plan, the total case error estimates, assuming an error rate of 10 percent, should have a margin of error of ±3.0 percentage points at the 95 percent confidence level. The proposed design achieves an effective sample size of 665, yielding a 95 percent confidence interval of ±2.3 percentage points for an estimate of 10 percent, a more precise estimate of improper payments than M-18-20 requires.
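These half-widths follow from the standard formula for the precision of a proportion; a minimal sketch (illustrative only, using the effective sample sizes and critical values stated above, with t(24, 0.95) of roughly 1.711):

```python
# Precision checks for the Certification Survey under the stated assumptions.
import math

p = 0.10                                          # assumed error rate

# Overall IPERIA check: effective n = 665 at 95 percent confidence.
half_95 = 1.96 * math.sqrt(p * (1 - p) / 665)
print(f"overall: +/-{100 * half_95:.1f} points")      # ~2.3 <= 3.0

# Per-category check: effective n = 216 at 90 percent confidence,
# using the t-distribution with 24 degrees of freedom (t ~ 1.711).
half_90 = 1.711 * math.sqrt(p * (1 - p) / 216)
print(f"per category: +/-{100 * half_90:.1f} points")  # ~3.5
```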
To determine the final sample sizes needed (i.e., for analysis) both within each category and overall to meet precision requirements, the total design effect will be determined and the effective sample size will be multiplied by the design effect. Shown below are the detailed calculations to determine the sample sizes needed.
Recently Certified WIC Participants from the 48 Contiguous States and the District of Columbia. To derive the number of PSUs needed for the sample of WIC participants from the 48 contiguous States plus the District of Columbia, the number of observations per PSU will be determined. From each of the 5 certification categories of WIC participants, interviews will be completed with 10 WIC participants per PSU, for 50 observations per PSU.40 The number of PSUs needed is determined by the relationship between the number of observations per PSU and the following considerations:
an assumed within PSU intra-class correlation (ICC) of 0.004;41,42
an expected FPC at the first stage of the sampling design of 0.85;43,44 and
the design effect at the first stage of sampling is defined as follows:

Design effect (first stage) = FPC × [1 + (m − 1) × ICC] = 0.85 × [1 + (50 − 1) × 0.004] ≈ 1.017,

where m = 50 is the number of completed interviews per PSU.

The total design effect is further increased by unequal weighting due to nonresponse adjustments (assumed to be 1.28 based on prior experience with similar studies). An additional unequal weighting design effect will be incurred due to the different sampling rates within WIC categories: postpartum and breastfeeding women have the highest rates, while children ages 1 to 5 have the lowest (samples of 320 each are taken from 550,000 to 580,000 postpartum and breastfeeding women, while a sample of 320 is taken from a much larger population of about 4 million children). The component of the design effect due to unequal weighting is 1.51. The total design effect is the product of the clustering design effect at the first stage and the two unequal weighting design effects: 1.017 × 1.28 × 1.51 ≈ 1.96. The required total sample size must then be at least 665 × 1.96 ≈ 1,307, which, when rounded up, aligns with the sample sizes of 30 PSUs and 50 final Certification Surveys per PSU (1,500 completed surveys).
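A short sketch reproducing this design-effect arithmetic (using only the values assumed in the text above):

```python
# Design-effect arithmetic for the Certification Survey, under the stated
# assumptions (ICC = 0.004, first-stage FPC = 0.85, 50 completes per PSU,
# unequal-weighting effects of 1.28 and 1.51).
icc, fpc, m = 0.004, 0.85, 50
deff_stage1 = fpc * (1 + (m - 1) * icc)           # ~1.017
deff_total = deff_stage1 * 1.28 * 1.51            # ~1.96
n_required = 665 * deff_total                     # ~1,307; 1,500 planned
print(f"{deff_stage1:.3f} {deff_total:.2f} {n_required:.0f}")
```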
For each WIC participant certification category, the effective sample size is determined by the clustering effect (based on participants per category per PSU) and the nonresponse adjustment unequal weighting effect:

Design effect (first stage) = 1 + (10 − 1) × 0.004 = 1.036.

Total design effect = 1.036 × 1.28 ≈ 1.32 (adjusting for unequal weighting due to nonresponse adjustments).

The effective sample size is then 300 / 1.32 ≈ 227 (based on 10 completes per category in each of 30 PSUs), exceeding the sample size of 212 required per category to achieve the required precision.
The resulting sample size needed for analysis from the general, non-ITO population is 1,500 recently certified participants, selected from within 30 PSUs (50 WIC participants per PSU) from the non-ITO WIC population. Thus, the sample sizes (number of PSUs sampled, number of participants sampled within the PSU), given the population properties (ICC) and the expected survey process properties (nonresponse rates), are aligned to achieve the precision required by IPERIA.
Assuming an 80 percent response rate, the initial sample size needed is 1,500 / 0.80 = 1,875. For the practical implementation of the method, 12 principal participants per certification category will be sampled from each PSU, and 1 reserve unit per category will be additionally sampled and placed on hold. Reserve units will only be released to the field when a non-interview final disposition is reached for the participants in the principal sample.45
Recently Certified WIC Participants from ITOs. As described above, a separate sampling stratum at the first stage will be created, from which 2 ITOs and 50 WIC participants per ITO (100 total) will be selected. This stratum will consist of the 29 ITOs combined across the States of AZ, CO, ND, NE, NM, NV, OK, SD, and WY, from which 2 ITOs will be selected using a PPS sampling method. At Stage 4, as described for the sampling from within the general (non-ITO) population, 10 principal WIC participants will be selected in each of the 5 certification categories (i.e., 50 per PSU), resulting in a final sample of 100 completed Certification Surveys from the ITOs. To ensure that the study yields these 100 completed surveys from the 2 ITOs, and assuming an 80 percent response rate, 5 reserve participants per ITO will be included in the initial sample (10 additional WIC participants).
Combining the 1,500 “recently certified” WIC participants from the 48 contiguous States plus the District of Columbia, and the 100 “recently certified” WIC participants from ITOs, the total final sample size will be 1,600 “recently certified” WIC participants. Assuming an 80 percent response rate, the initial sample size will be 2,000 “recently certified” WIC participants. As discussed above, assuming an expected error rate of 10 percent (across the 5 categories), this design will meet FNS’s required precision of estimates for error rates for each certification category (a confidence interval of ±3.5 percentage points, with a confidence level of 90 percent), and the necessary precision for the overall estimate (i.e., across all certification categories combined) required by IPERIA.
The sample of expired certifications will provide approximately the same precision as the sample of denied applicants. It is important to note that, compared to the over-certification rate reported in NSWP–II (3.05 percent overall), expired certifications were rare. An exploratory analysis in NSWP-II based on data from SAs’ certification dates identified an expired certification rate of 1.15 percent, most of which involved breastfeeding women. Note that IPERIA precision requirements apply to the total estimate of improper payments, including those from erroneous certifications, expired certifications, and erroneous denials. Therefore, while estimates of the expired certification rate will not by themselves meet the IPERIA requirements, the sample of expired certifications will contribute to the total estimate of improper payments, which will meet the IPERIA requirements.
Denied Applicant Survey. The proposed sample size of denied applicants reflects the expected number of denials in the sampled LAs that are observable during the field period (or within a 3-month reference period just prior to the field period). The researchers note that there is no separate precision requirement for underpayments (due to erroneous denials) alone; the precision requirements instead apply to the total payment error, which is the sum of overpayments and underpayments. The combined samples of the Certification and Denied Applicant Surveys will meet the IPERIA requirement (a 95 percent confidence interval of ± 3.0 percentage points) for the total certification error rate (the sum of the certification and denied applicant error rates), based on an assumed error rate of 10 percent. In fact, the Certification Survey sample of 1,600 completed surveys, by itself, will meet this requirement. The estimates of improper payments are the sum of the estimates of overpayments (to erroneously certified WIC participants) and underpayments (resulting from erroneous denials to WIC applicants). As a result, the combined samples of completed Certification Surveys and completed Denied Applicant Surveys will meet or exceed the precision requirements.
The researchers note that estimating a denial rate and an erroneous denial rate may be difficult for several reasons. First, it is likely infeasible to determine the number of potential applicants who do not formally apply for WIC benefits based on use of the online prescreening tool (http://wic.fns.usda.gov/wpps/pages/start.jsf). In addition, one of the key challenges cited in NSWP–II for sampling denied applicants was that WIC agency staff often conducted a prescreening conversation with potential applicants over the phone, and rarely kept records of these conversations. The NSWP–II reports also documented difficulties in obtaining data on denied applicants (i.e., those who actually file formally and are denied based on failure to meet one or more eligibility criteria): Among those who actually kept an appointment at a WIC clinic, “very few applicants … receive[d] a ‘notice of ineligibility,’ which may be documented in some scant form, if at all.”46 As a result of these difficulties, NSWP–II did not attempt to calculate a nationally representative denial rate, or the rate of erroneous denials. Across the 147 clinics asked to supply names of denied applicants, 410 names were obtained and 194 interviews were completed; 14 clinics did not deny any applicants during the month of data collection, and 4 other clinics did not provide their records of denials. This translates to an estimated three denials per clinic, on average. At this rate, the researchers would need to include between two and three clinics per PSU to get a sample of eight denied applicants per PSU. Requirements and practices in documenting denials likely vary from State to State, which necessitates adapting the procedures for constructing the sampling frame on a State-by-State basis.
In the absence of empirical information about denial rates and denial error rates for WIC applicants, the confidence intervals for the specified sample were estimated over a range of conditional error rates (i.e., error rates among denied applications), under the conservative assumption that the denial rate is 10 percent. The upper boundary for this analysis had 19 erroneous denials in the sample of 192, and thus a 10 percent conditional error rate and, under the study’s assumptions, a 1 percent unconditional error rate among all applicants. Even with these highly conservative assumptions, the 95 percent confidence interval for the erroneous denial rate (i.e., the proportion of all applicants who are erroneously denied) would extend from 0.85 to 2.13 percent, with a half width well within the 2 percent standard set by OMB for IPERIA. With a smaller denial rate or a smaller error rate for denials, the size of the 95 percent confidence interval in percentage points would be even smaller. While FNS intends to complete 192 interviews for this survey, the precision analysis suggests that a smaller sample may still meet the IPERIA requirements even if fewer cases are available in the sampled LAs.
Program Experiences Survey. For the Program Experiences Survey, a sample of 2,500 WIC participants will be selected to meet the required national and subgroup precision targets. The nominal sample size for each region is 234. The cluster design effect, incorporating the FPC, is calculated as

Design effect (first stage) = FPC × [1 + (m − 1) × ICC] ≈ 1.0018,

where m is the average cluster size; the sample of 2,000 is distributed across 60 clusters in 7 regions, for an average cluster size of 5. Multiplying the nominal sample size of 234 by the design effect of 1.0018, and then by a nonresponse adjustment of 1.28, yields required samples of 300 in each region (234 × 1.0018 × 1.28 ≈ 300), or a total sample of 2,100, to meet the requirements of a ±7.5 percent, 95 percent confidence interval in each of the 7 regions. This calculation may need to be updated when the researchers recalculate the results using actual counts of LAs for each region when they are available.
Sample sizes also need to be calculated to ensure a ±7.5 percent, 95 percent confidence interval for comparisons of mostly urban, mostly rural, and completely rural LAs. The size of the samples needed for the Program Experiences Survey will therefore need to be updated when the data on mostly urban, mostly rural, and completely rural areas are examined. NSWP-II achieved a response rate of 51.3 percent.47 NSWP-III expects to achieve an 80 percent overall response rate among the 2,500 sampled WIC participants, yielding 2,000 completed Program Experiences Surveys.
Sampling Denied Applicants. The sample of denied applicants will be selected from within the sample of clinics (up to 120 total) selected for the Certification Survey in Stage 3. Due to the limited empirical information about denial rates for WIC applicants, the sampling procedures for the Denied Applicant Survey may require adaptation on a State-by-State (or PSU-by-PSU) basis. The researchers will attempt to survey all denied applicants from these 120 sampled clinics who applied within a 3-month period. The number of such denied applicants is estimated to be 8 per PSU (i.e., 2.8 per month per clinic, based on the number of denied applicants reported in NSWP-II, over 3 months, or 8.4 denied applicants per clinic), or 240 across the 30 sampled PSUs. Similar procedures will be utilized for the pilot of the alternative methodology conducted in Year 2 and Year 3.
During communication with SAs covering sampled PSUs, the researchers will ask if the State tracks denied applicants in administrative data. If so, these data will be used to identify denied applicants to be interviewed. Otherwise, the researchers will request that these WIC LAs maintain and provide logs of denied applicants (Appendix B7.b) over a 5-month period. It is estimated that one third (n=20) of the 60 LAs will need to maintain a log of denied applicants. Logs will include denied applicants’ names, application categories and dates, contact information, language (if non-English), and reasons for denial.
Expired Certification Errors. For the data collection from SAs and LAs on expired certification errors, the sample will comprise WIC participants who redeem a benefit with a “first day of use” date that falls after the end of the participant’s certification period. This sample will be identified by matching State administrative certification and redemption data. The researchers will use State certification and redemption data for all participants in the States included in the Certification Survey sample (i.e., the 20 States and two ITOs resulting from selection of 30 PSUs composed of geographically proximate clusters of LAs) for 3 months (ensuring that results are not overly influenced by a single month, while making wise use of project resources). Because certification end dates are not always accurate, the researchers will confirm the end date with sampled LAs by telephone for a sample of up to 380 cases of apparent expired certification error, calling up to 60 LAs to inquire about an estimated average of 6 to 7 participants per LA.48
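The matching step described above can be illustrated with a brief sketch; the column names and dates are hypothetical and do not reflect the States’ actual data layouts:

```python
# Sketch: flag redemptions whose "first day of use" falls after the
# participant's certification end date (candidate expired-certification
# errors for telephone confirmation with the LA). Illustrative data only.
import pandas as pd

certs = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "cert_end_date": pd.to_datetime(["2018-06-30", "2018-08-31", "2018-07-15"]),
})
redemptions = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "first_day_of_use": pd.to_datetime(["2018-07-02", "2018-08-01", "2018-07-20"]),
})

merged = redemptions.merge(certs, on="participant_id")
expired = merged[merged["first_day_of_use"] > merged["cert_end_date"]]
print(expired)  # participants 1 and 3 would be flagged for follow-up
```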
This is a one-time study; concern regarding the periodicity of data collection cycles is not applicable.
Please see Appendix G, General Data Collection Procedures, for additional information regarding communication protocols, including follow-up attempts and reminders, for each of the surveys. Appendix G also contains the invitation emails and letters; reminder emails, texts, and phone calls; and recruitment scripts for the surveys (Appendices C2 through C18 and Appendices C20 and C21); the descriptions of the studies provided to the State and local WIC agencies, thank-you letters, invitation postcards, and information brochures (Appendices D1 and D2, D5, and D10 through D13); the State agency administrative data request reminder email and telephone script (Appendices C25 and C26); and the certification end date verification email and telephone script (Appendices D3 and D4).
Overall response rate projections for NSWP-III are presented above in Table B1; additional information regarding the data collection methods, including follow-up attempts and reminders, appears in Appendix G (General Data Collection Procedures); and the type and amount of information collected are described in Part A, Section A.3 and Appendix E4 (Estimates of Respondent Burden). Achieving the specified response rates involves identifying the sample members and securing their participation using the procedures described below (similar to those used in NSWP-II to reach the minimum acceptable response rates for its statistical surveys). It is estimated that 100 percent of the sampled SA directors will complete the State Agency Survey and 80 percent of the LA directors will complete the Local Agency Survey.49 An expected 80 percent of “recently certified” WIC participants, recently denied WIC applicants, and current WIC participants will complete their respective surveys.
The Former Participant Case Study information collection is categorized as qualitative and does not employ a sampling frame or other proven statistical methods. Information collected from case study interviews will be used to better understand the barriers and facilitators to WIC program retention. Approximately 24 percent of selected former WIC participants are expected to participate in the Former WIC Participant Case Study interviews.
Below is a description of recruitment procedures designed to maximize the number of sampled SA and LA directors who complete the State Agency Survey, the State Agency Administrative Data request, and the Local Agency Survey. Using these procedures, response rates of 80 to 100 percent are expected:
The letters inviting SA and LA directors to participate in the study were carefully developed to emphasize the importance of this study, how the information will help FNS identify policies and practices that each WIC SA and LA has established under its discretionary powers, and how it will enable comparisons of their potential effects.
SA and LA directors participating in programs authorized under the Healthy, Hunger-Free Kids Act of 2010 (HHFKA) will be reminded that they are required to cooperate in USDA studies.
Designated FNS regional staff will be kept closely informed of the project so they will be able to answer questions from SA and LA directors and encourage participation.
A toll-free number and study email address will be provided so SA and LA directors can receive assistance with the study.
Sampled SA and LA directors will have the option of completing the web-based State Agency Survey and Local Agency Survey as a hard-copy survey that can be returned to the research team via fax or scanned and emailed.50
Periodic email reminders will be sent to sampled SA and LA directors who have not yet completed the survey and to SAs that have not submitted the requested administrative data.
Follow-up attempts by telephone and email will be made with all sampled SA and LA directors who do not complete the survey within 2 weeks. The primary purpose of the call will be to urge them to complete the survey. At that point, if the directors prefer to complete the survey or remaining sections of the survey over the telephone, an interviewer will administer the full survey or any remaining parts of the survey over the telephone.51
The following procedures will be used to maximize the completion rates for respondents to the Certification Survey, Denied Applicant Survey, Program Experiences Survey, and Former WIC Participant Case Study52 that will be administered either by telephone or in person:
Selected SA directors will be asked to provide the contact information for current WIC participants, recently denied WIC applicants, and former WIC participants for the Certification Survey, Denied Applicant Survey, Program Experiences Survey, and Former WIC Participant Case Study. An informational letter signed by their respective SA will be provided to sampled respondents to the Certification Survey and Denied Applicant Survey.
The emailed letters inviting sampled respondents to participate in the Certification Survey, Denied Applicant Survey, Program Experiences Survey, or Former WIC Participant Case Study were carefully developed to emphasize the importance of this study and how the information will help FNS to meet the objective of calculating improper payment rates and learning about WIC participants’ program experiences.
A toll-free number and email address will be provided to respondents. They will be encouraged to call or email if they have questions about the study.
Follow-up attempts will be made by telephone with sampled respondents who do not respond. The primary purpose of this call will be to urge them to participate in their respective survey.
Call scheduling procedures that are designed to call numbers at different times of the day (between 8 a.m. and 6 p.m.) and days of the week (Monday through Friday) will be used to improve response rates.
A core set of interviewers with experience conducting telephone and in-person surveys, particularly interviewers who have proven their ability to obtain cooperation from a high proportion of sample members, will be employed.
A training for telephone and field interviewers (FIs) will be conducted. The training, specific to this study, will include an overview of the project, a review of the research questions the study will address, a primer on interviewing practices and procedures, and techniques for encouraging respondent candor.
Highly skilled interviewers, some of whom speak Spanish, will be hired and trained. During recruitment and interviewing, the interviewers will assure potential respondents of the privacy of their individual answers, conduct multiple callbacks to reach respondents, and vary the days and hours of contact attempts.
The respondents to the Certification Survey, Denied Applicant Survey, Program Experiences Survey, and Former WIC Participant Case Study will receive a $25 Visa debit card to increase response rates.
Many of the tactics outlined for maximizing participation will also be useful for reducing nonresponse. If the response rate is below 80 percent for any of the surveys, a nonresponse bias analysis will be conducted as required by OMB. This analysis will examine any known differences between respondents and non-respondents to illuminate any potential bias introduced by nonresponse. Results of this analysis will be included in the final report.
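If such an analysis is required, a basic version can be sketched as follows; the variables are illustrative, and the actual analysis would draw on frame data such as participant category, region, and certification date:

```python
# Sketch: simple nonresponse bias check comparing response rates across
# frame characteristics; large gaps flag potential bias that weighting
# adjustments would need to address. Illustrative data only.
import pandas as pd

frame = pd.DataFrame({
    "responded": [1, 0, 1, 1, 0, 1],
    "category":  ["infant", "child", "child", "pregnant", "infant", "child"],
    "region":    ["NE", "NE", "SE", "SE", "MW", "MW"],
})

print(frame.groupby("category")["responded"].mean())
print(frame.groupby("region")["responded"].mean())
```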
It is anticipated that all WIC State agency directors will complete the State Agency Survey. Prior experience working with these directors in data collection for other projects indicates that nonresponse is unlikely to be a problem.
We received generic clearance for pre-testing (OMB # 0584-0606, FNS Generic Clearance For Pre-Testing, Pilot, And Field Test Studies: Pre-testing for the Third National Survey of WIC Participants (NSWP-III), approved 9/22/16; expires 03/31/2019) with a package specifying pre-testing methods. Each of the data collection instruments was pre-tested with nine respondents.53 Pre-testing respondents evaluated their assigned instruments for understandability (confusing wording or layout, failure to comprehend the question, et cetera) and length of time to answer. Details regarding selection of respondents for the pre-test and complete pre-test results are presented in Appendix F, Pre-Test Memorandum, and are briefly described below.
The research team mailed a hard copy of the invitation letter and two hard copies of the State Agency Survey (one to keep and one to return) to nine of the selected SA directors. The research team followed up with an email and a telephone call within two days of expected delivery of the package to the SA director. Once the research team received a completed State Agency Survey, a trained interviewer contacted the SA director to schedule a 30-minute debriefing telephone interview. During the debriefing telephone interview, the interviewer asked the SA director to estimate how long it took them to complete the survey, and to identify any survey questions that were confusing or difficult to answer. Generally, feedback on the State Agency Survey was positive. Aside from suggestions to revise specific questions, comments included, “it was pretty easy to respond to,” “in the past we have done studies that take hours and are confusing but this one is pretty clear,” and “these questions [. . .] are pretty well-designed.” None of the respondents thought that the questions felt repetitive.
A similar pre-testing methodology was used to test the Local Agency Survey. Overall, feedback on the Local Agency Survey was positive. Aside from suggestions to revise specific questions, comments included, “pretty thorough,” “fairly straightforward,” and “easy to follow.” Full results can be found in Appendix F, Pre-Test Memorandum.
To pre-test the Certification Survey, a field interviewer visited the home of each of the nine respondents. For each respondent, the field interviewer read the survey questions aloud, marked the respondent’s answers on the paper copy, and followed any relevant skip patterns based on the respondent’s answers. The interviewer also noted directly on the survey any difficulty that a given respondent had understanding a question and kept track of the start and end times for key sections of the survey. Immediately following each survey, the interviewer administered a debriefing questionnaire, thanked the respondent, and gave each a $25 prepaid gift card to a national retailer. Results of the pre-test suggested that respondents to the Certification Survey may lack sufficient income documentation.
The pre-test methodology and results for the Denied Applicant Survey were similar to those for the Certification Survey. Results of the pre-test suggested that respondents to the Denied Applicant Survey may lack sufficient documentation of income and/or of participation in a program that would have conferred adjunctive income eligibility at the time of application.
For the Program Experiences Survey, trained interviewers administered the survey using paper copies that had the same questions, response options, and prompts as the questionnaires that will be programmed for use with the full study sample. After verbally obtaining informed consent by telephone, the interviewers administered the survey, reading each question aloud, following interviewer instructions, and writing down the responses indicated by the WIC participant. The interviewers conducted a debriefing interview with each respondent immediately following the completion of the Program Experiences Survey to identify any questions that were confusing or difficult to answer. After completing the survey and debriefing interview, each pre-test participant was mailed a $25 gift card. All nine respondents reported that all survey questions were easy to answer, none of the questions were confusing or difficult, and none of the questions included unfamiliar words. None of the nine respondents offered suggestions for improving the Program Experiences Survey. The research team, however, had several general and specific suggestions to improve the survey, which are described in Appendix F, Pre-Test Memorandum.
The Former Participant Case Study interviews were conducted by telephone, in English or Spanish as appropriate. The interviewers conducted a debriefing interview with each respondent immediately following the completion of the case study interview to identify any questions that were confusing or difficult to answer. After completing the interview and debriefing, a $25 gift card was mailed to each respondent. As with the Program Experiences Survey, none of the nine respondents offered suggestions for improving the Former Participant Case Study; however, the research team made suggestions to improve the interview guide (see Appendix F2, Pre-test Revisions Summary, for those suggestions).
The Contractor, Capital Consulting Corporation, and its Subcontractors, Abt Associates and 2M Research Services, will conduct this study. See Table B7 for contact information. The contact information for FNS staff in charge of the study and the reviewer from the National Agricultural Statistics Service (NASS) also is provided.
1 For estimates of the number and rate of erroneous certifications (i.e., of participants) and denials (of applicants) and corresponding improper payments in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), the study will produce estimates that are representative of the population of WIC participants within the 48 contiguous United States, the ITOs within these States, and the District of Columbia.
2 U.S. Department of Agriculture, Food and Nutrition Service, Office of Research and Analysis. (2012). National survey of WIC participants II: State and local agencies report, by Daniel M. Geller, et al. Project Officers: Karen Castellanos-Brown, Sheku G. Kamara, Alexandria, VA. Retrieved from: http://www.fns.usda.gov/sites/default/files/NSWP-II_Vol2.pdf
3 An additional 75 LAs selected to recruit recently certified WIC participants and denied applicants will also complete the Local Agency Survey, bringing the total sample to 965 LA directors and the estimated number of respondents to 772 LA directors (an overall response rate of 80 percent).
4 The population size of WIC participants who were certified within 6 weeks, or within a similar period, is not known. Because the survey will be administered in person, Alaska, Hawaii, and the U.S. Territories are excluded from the sampling frame due to logistical and financial constraints that make in-person data collection infeasible, including recent natural disasters in Puerto Rico and Hawaii. However, web surveys with the SA and LA directors of these States and territories are feasible; therefore, they will be conducted.
5 Based on WIC PC 2014 data, this respondent universe includes 97.2 percent of all WIC participants in the 50 United States, District of Columbia, and the U.S. Territories. See: Thorn, B., Tadler, C., Huret, N., Trippe, C., Ayo, E., Mendelson, M., Patlan, K. L., Schwartz, G., & Tran, V. (2015). WIC Participant and Program Characteristics 2014. Prepared by Insight Policy Research under Contract No. AG-3198-C11-0010. Alexandria, VA: U.S. Department of Agriculture, Food and Nutrition Service.
6 For infant and child WIC participants in the sample, the interview will be conducted with the participant’s relevant parent or legal guardian.
7 The estimated monthly average population of denied applicants is 28,000. Because no administrative data exists for the population of denied WIC applicants, this population was estimated based on information collected in 2009 in the NSWP-II study, which cited several difficulties obtaining records on denied applicants.
8 A low response rate of 47 percent for denied applicants in NSWP-II “appears to result from a 7- to 9-month lag time between the date of denial and the telephone survey and inability of data collectors to reach the denied new applicants.” (NSWP-II Final Report, Vol. 3, page 24). NSWP-III will target recent applicants, those whose date of denial is within one to three months of the targeted data collection. Given the shorter lag relative to NSWP-II, higher response rates are anticipated.
9 Note: The Program Experiences Survey does not inform IPERIA estimates. As with the Certification Survey, the sample will be representative of the population of interest, namely WIC participants within the 48 contiguous U.S. States, the District of Columbia, and the ITOs in these States.
10 Note: The Former Participant Case Study does not inform IPERIA estimates. As with the Certification Survey, the sample will be representative of the population of interest, namely former WIC participants within the 48 contiguous U.S. States, the District of Columbia, and the ITOs in these States.
11 Thorn, B., Tadler, C., Huret, N., Trippe, C., Ayo, E., Mendelson, M. . . .Tran, V. (2015). WIC participant and program characteristics 2014. Prepared by Insight Policy Research under Contract No. AG-3198-C11-0010. Alexandria, VA: U.S. Department of Agriculture, Food and Nutrition Service.
12 Numbers of local agencies and WIC participants are based on 2014 WIC PC data. No reliable source of information is available for the number of applicants for the WIC program who are determined to be ineligible and issued a formal denial, nor for the number of WIC participants who are inactive at a given point in time. Evidence from the NSWP-II study indicated that, as of 2009, only five SAs maintained data on formally denied applicants, and that potential “applicants” who inquire about eligibility criteria by telephone (or using FNS’s online prescreening tool) and choose not to submit an application are not considered to be applicants. In 2009, the NSWP-II study estimated that approximately 1,066,567 WIC participants received food vouchers in April, but not in May, of 2009; however, this number includes participants whose certification periods ended involuntarily as well as those who voluntarily stopped participating.
13 Third National Survey of WIC Participants (NSWP-III) Expert Panel Meeting. (2016). Alexandria, Virginia. 19 January 2016 [transcript].
14 Note: There were no case studies conducted in NSWP-II.
15 The initial sample size is 10 percent of the initial sample sizes (2,000 WIC participants and the 240 denied applicants) used in the primary data collection. Note that the census of recently denied applicants from the sampled local WIC agencies will be recruited for data collection.
16 The sampling methods used for the primary data collection will also be used for the pilot.
17https://www2.census.gov/geo/pdfs/reference/ua/Defining_Rural.pdf
18 High-entropy PPS sampling designs are preferred to traditionally used sampling designs when clustered sampling, which is more economically efficient for data collection, is needed. High-entropy PPS designs allow sufficiently straightforward calculation of sampling variances and standard errors and unbiased variance estimation. For methodology and applications of high-entropy PPS sampling designs in federal surveys, see Brewer & Hanif, 1983; Brewer & Donadio, 2003; Tillé, 2010; and Parsons et al., 2014. Brewer, K. R. W., & Donadio, M. E. (2003). The high-entropy variance of the Horvitz-Thompson estimator. Survey Methodology, 29(2), 189–196.
19 The U.S. Territories, the States of Alaska and Hawaii, and ITOs in the Southwest, Mountain Plains, and Western Regions are excluded from the sampling frame, and so their WIC populations are not counted in these measure-of-size calculations.
20 In PPS sampling where units have unequal probabilities of selection, variance estimation (which is needed to determine the precision of estimates derived from the sample) can be extraordinarily complex, because it depends on knowing the probabilities of selection of all possible pairs of PSUs. In the traditionally used systematic sampling, the variance cannot be estimated without bias. High-entropy sampling designs simplify variance estimation, as the variance of survey statistics can be estimated using only the selection probabilities of units, rather than pairs of units. To achieve high entropy, units must be selected approximately independently from one another. Special sampling algorithms need to be used to achieve this property, but it makes the variance estimation more tractable. High-entropy designs have been used in large-scale national surveys (e.g., the CDC’s National Health Interview Survey; Parsons et al., 2014). See discussions in Brewer & Hanif, 1983; Brewer & Donadio, 2003; and Tillé, 2006.
21 Parsons, V. L., Moriarity, C., Jonas, K., Moore, T. F., Davis, K.E., & Tompkins, L. (2014). Design and estimation for the National Health Interview Survey, 2006–2015. National Center for Health Statistics. Vital and Health Statistics, 2(165).
22 Brewer, K. R. W, & Hanif, M. (1983). Sampling with unequal probabilities. Lecture Notes in Statistics 15. Springer, New York.
23 Tillé, Yves. (2006). Sampling algorithms. New York: Springer.
24 Redemption data processing takes place 30 to 60 days after the end of the month when benefits expire, depending on the State’s procedures. Therefore, there will be a lag of 45 to 90 days between the point when a participant stops participating (the end of the first month of non-participation) and when that participant is identified.
25 Third National Survey of WIC Participants (NSWP-III) Expert Panel Meeting. (2016). Alexandria, Virginia. 19 January 2016 [transcript].
26 If a PSU consisting of fewer than four LAs is selected for both the Year 1 sample and the Next Decade Update sample, it will not be possible to select a complementary sample of LAs from that PSU. This situation could arise if, for example, a small ITO with fewer than four LAs is selected for both the main and Next Decade Update samples. If this situation occurs, the data collection in this sampled PSU can be delayed until later in the decade (e.g., no data collection until at least year 5 or later of the next decade) to lessen the burden on LAs that were already part of the Year 1 sample.
27 The Local Agency Survey estimates will be representative of the entire United States.
28 In finite population inference, the attributes of the observation units are treated as fixed, and the only variability is due to the fact that some units were randomly drawn into the sample, while other units were not. Variances are understood as the variances of hypothetical distribution of the sample statistic that would have been obtained should all possible samples be drawn. Some elements of survey designs, such as stratification and high sampling fractions (and finite population corrections associated with them), reduce sampling variances, while clustering and nonresponse typically increase them. Thus, the actual sampling variances may differ markedly from the textbook formulae that assume an independent and identically distributed sample from an infinite population. Variance estimation methods that properly account for the complex survey designs produce estimates that are as close to the true underlying variance as possible. See Wolter (2007) for an extended discussion.
29 Wolter, K. (2007). Introduction to variance estimation (2nd Ed). Springer Science & Business Media.
30 Efron, B. (1982). The jackknife, the bootstrap and other resampling plans. Montpelier, VT: The Capital City Press.
31 Kolenikov, S. (2010). Resampling variance estimation for complex survey data. The Stata Journal, 10(2), 165–199.
32 Kolenikov, S. (2014). Calibrating survey data using iterative proportional fitting (raking). The Stata Journal, 14(1), 22–59.
33 Preston, J., & Henderson, T. (2007). Replicate variance estimation and high entropy variance approximations. Papers presented at the ICES-III, June 18–21, 2007, Montreal, Quebec, Canada.
34 Kovar, J. G., Rao, J. N. K., & Wu, C. F. J. (1988). Bootstrap and other methods to measure errors in survey estimates. Canadian Journal of Statistics, 16(S1), 25–45.
35 Brewer, K. R. W., & Donadio, M. E. (2003). The high-entropy variance of the Horvitz-Thompson estimator. Survey Methodology 29(2), 189–196.
36 Preston, J., & Henderson, T. (2007). Replicate variance estimation and high entropy variance approximations. Papers presented at the ICES-III, June 18–21, 2007, Montreal, Quebec, Canada.
37 Kovar, J. G., Rao, J. N. K., & Wu, C. F. J. (1988). Bootstrap and other methods to measure errors in survey estimates. Canadian Journal of Statistics, 16(S1), 25–45.
38 The most recently available WICLAD will be consulted to identify LAs that have been added or dropped.
39 The effective sample size is the sample size that would have been required in a survey using simple random sampling to achieve the same level of precision. Because the sampling design is more complex than a simple random sample, the actual sample size needed to achieve this level of precision is determined by the design effect, which takes into account such factors as stratification, intra-class correlations between participants within PSUs, and weighting for nonresponse adjustment and unequal selection probabilities.
40 When an infant or child is sampled, the parent or person who applied on behalf of the participant will be interviewed.
41 Intra-class correlation (ICC) is the portion of the total population variance that is due to between-PSU variance. It characterizes the similarity of units within PSUs; the higher the similarity, the higher the ICC. Pedlow et al. (2005) discuss the typical ICCs in area surveys and report ICCs for a PSU size of 2,000 ranging from 0.0002 to 0.0175, with a median value of 0.003 and a tendency of ICCs to decrease as the PSU geographic size increases. The average PSU size will be about 16,000 square miles, so ICC = 0.004 is a conservative assumption.
42 Pedlow, S., Wang, Y., & O’Muircheartaigh, C. (2005). The impact of cluster (segment) size on effective sample size. Proceedings of the American Statistical Association, Survey Research Methods Section, 3952–3959: Alexandria, Va.
43 The FPC correction is used in simple random sampling when the sample to be selected (n) is comparable in size to the population (N) and is calculated as follows: FPC = 1 − n1/N1. Here, N1 is the number of PSUs per stratum in the first stage of selection, and n1 is the number of PSUs to be drawn. Analogies for PPS sampling are offered in Brewer and Donadio (2003).
44 Brewer, K. R. W., & Donadio, M. E. (2003). The high-entropy variance of the Horvitz-Thompson estimator. Survey Methodology 29(2), 189–196.
45 Because the surveys will be conducted in English or Spanish only, if information about language of WIC participants is available in State certification data, any sampled participants whose language is other than English or Spanish will be replaced before attempting interviews. These replaced participants will be counted as non-respondents.
46 U.S. Department of Agriculture, Food and Nutrition Service, Office of Research and Analysis. (2012). National survey of WIC participants II: Technical report, by Gary Huang, et al. Project Officers: Sheku G. Kamara and Karen Castellanos-Brown. Alexandria, VA. Retrieved from: http://www.fns.usda.gov/sites/default/files/NSWP-II_Vol4.pdf
47 U.S. Department of Agriculture, Food and Nutrition Service, Office of Research and Analysis. (2012). National survey of WIC participants II: Participant characteristics report, by Daniel M. Geller, Ph.D., et al. Project Officers: Sheku G. Kamara, Karen Castellanos-Brown, Alexandria, VA. Retrieved from: http://www.fns.usda.gov/sites/default/files/NSWP-II.pdf
48 This issue also applies to the pilot data collection in Year 2 and Year 3. For the pilot, the researchers will call up to 6 LAs to inquire about an estimated average of 6 to 7 participants per LA in Year 2 and Year 3.
49 Even though the LA survey is mandatory, there is no consequence if an LA does not respond to the survey. Based on prior research, an LA response rate of 80 percent is a reasonable estimate.
50 The burden associated with completing the survey via the web or hard-copy is estimated to be the same.
51 The burden associated with answering any remaining questions over the phone is estimated to be the same as completing online.
52 For the Former WIC Participant Case Study, the expected response rate is 24 percent, lower than for the other surveys. This estimate reflects the lower propensity of eligible former participants to respond, relative to participants who are currently receiving benefits, and the potential for sampled individuals to be screened out as ineligible for the interview. The same procedures used to maximize response to the other surveys will be used for the Former WIC Participant Case Study, balancing the ability to answer the research questions against respondent burden and the overall cost of the study.
53 All other study materials not tested with respondents were reviewed and tested internally by the research team: one team member reviewed each instrument, and a different team member then debriefed the reviewer.