Supporting Statement Part B
OMB No. 0584-[NEW]
Food Security Status and Well-Being of Nutrition Assistance Program (NAP) Participants in Puerto Rico
May 3, 2022
Project Officer: Kristen Corey
Office of Policy Support
Food and Nutrition Service
U.S. Department of Agriculture
1320 Braddock Place
Alexandria, VA 22314
703-305-2517
Contents
Part B. Collection of Information Employing Statistical Methods
B.1. Respondent Universe and Selection Methods
B.2. Procedures for the Collection of Information
B.3. Methods to Maximize Response Rates and the Issue of Nonresponse
Tables
Table B.1.1. Target Subgroups for In-Depth Interviews
Table B.1.2. Breakout of Respondents and Nonrespondents by Respondent Type
Table B.2.1. Key Features of the Two Components of the Sample Design for the Household Survey
Table B.2.2. Computation of the Compositing Factor
Table B.2.3. Population Prevalence of Key Subgroups
Table B.2.4. Expected Response Rates by Sampling Frame for Baseline Study
Table B.2.5. Expected Yield Rates and Number of Respondents for Each Sampling Frame for the Baseline Study
Table B.2.6. Expected Yield Rates and Number of Respondents for the 2‐Year Follow‐Up
Table B.2.7. Expected Margin of Error for Best, Worst, and Middle Yield‐Rate Scenarios for Estimates of a 50-Percent Characteristic: Baseline Study
Table B.2.8. Expected Margin of Error for Middle Yield‐Rate Scenario for Estimates of 20, 30, and 40 Percent Characteristics: Baseline Study
Table B.2.9. Expected Margin of Error for Best, Worst, and Middle Yield‐Rate Scenarios for Estimates of a 50-Percent Characteristic: 2-Year Follow-up Study
Table B.3.2. Timeline for Sampled NAP Participant Data Collection Activities
Appendices
B. Research Objectives and Questions by Data Source
C.1. Household Survey Instrument in English
C.2. Household Survey Instrument in Spanish
D.1. Web-based Household Survey Instrument in English
D.2. Web-based Household Survey Instrument in Spanish
E.1. In-depth Interview Protocol in English
E.2. In-depth Interview Protocol in Spanish
F.1. Concept Mapping Welcome and Scheduling Email
F.2. Reminder Email for First Concept Mapping Meeting
F.3. Advance Material for First Concept Mapping Meeting
F.4. First Concept Mapping Meeting Facilitator Guide
F.5. Additional Ideas Concept Mapping Email
F.6. Instructions for Sorting and Rating Concept Mapping Email
F.7. Second Concept Mapping Meeting Scheduling Email
F.8. Reminder Email for Second Concept Mapping Meeting
F.9. Advance Material for Second Concept Mapping Meeting
F.10. Second Concept Mapping Meeting Facilitator Guide
G.1. First Survey Invitation Letter for NAP Participant List Sample in English
G.2. Reminder Postcard for NAP Participant List Sample in English
G.3. Invitation Letter With Mail Survey for NAP Participant List Sample in English (2 mailings)
G.4. Recording for Inbound Calls to Schedule Survey in English
G.5. Return Call to Schedule Survey in English
G.6. Script for Telephone Nonresponse Follow-Up for NAP Participant List Sample in English
G.7. Invitation Letter for Area Probability Sample in English (In-Person Delivery)
G.8. Script for Data Collectors for Area Probability Sample in English (In-Person Delivery)
G.9. Text for Website (Home Page) in English
G.10. FAQ Document in English
G.11. Thank You Letter for Survey Participants in English
G.12. First Survey Invitation Letter for NAP Participant List Sample in Spanish
G.13. Reminder Postcard for NAP Participant List Sample in Spanish
G.14. Invitation Letter With Mail Survey for NAP Participant List Sample in Spanish (2 mailings)
G.15. Recording for Inbound Calls to Schedule Survey in Spanish
G.16. Return Call to Schedule Survey in Spanish
G.17. Script for Telephone Nonresponse Follow-Up for NAP Participant List Sample in Spanish
G.18. Invitation Letter for Area Probability Sample in Spanish (In-Person Delivery)
G.19. Script for Data Collectors for Area Probability Sample in Spanish (In-Person Delivery)
G.20. Text for Website (Home Page) in Spanish
G.21. FAQ Document in Spanish
G.22. Thank You Letter for Survey Participants in Spanish
H. Use of Incentives
I.1. Voicemail Script for In-Depth Interview Recruitment in English
I.2. Script for Answered Call in English
I.3. Study Announcement for Local Community Partners in English
I.4. Voicemail Script for In-Depth Interview Recruitment in Spanish
I.5. Script for Answered Call in Spanish
I.6. Study Announcement for Local Community Partners in Spanish
J.1. Email to ADSEF
J.2. Agenda for Meeting With ADSEF
J.3. Instructions for Using SFTP Site
K.1. Recruitment Email for Concept Map Stakeholder Groups
K.2. Concept Mapping Informed Consent Form
L. 60-day Federal Register Notice
M. Comments from TWG and FNS responses
N. NASS Comments and FNS Responses
O. Pretest Methods and Summary of Findings
P.1. System of Records Notices FNS-8 USDA Studies and Reports
P.2. System of Records Notices FNS-10 Persons Doing Business With the Food and Nutrition Service
Q. Privacy Office Comments and FNS Responses
R. Institutional Review Board Approval Letter
S. Insight Policy Research Information Security and Confidentiality Pledge
T. Total Public Burden Hours and Cost
Part B. Collection of Information Employing Statistical Methods
B.1. Respondent Universe and Selection Methods
Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
B.1.1. Respondent Universe
The objectives of this study are to collect representative survey data on household food security in Puerto Rico; conduct in-depth interviews with Nutrition Assistance Program (NAP) participants and low-income nonparticipants; and identify both the policies that influence the delivery and effectiveness of NAP in reducing food insecurity and the gaps in knowledge about how NAP protects against low food security. This study is powered for a 2-year follow-up study; accordingly, the sample size is larger than what would be required without follow-up. The 2-year follow-up study is not part of this data collection request.
Household survey: The respondent universe for the survey includes all households in Puerto Rico. The sample design will support a sample of respondents to be analyzed at the household level. It will include households enrolled in NAP, which is sponsored by the U.S. Department of Agriculture (USDA), and households not enrolled in NAP.
In-depth interviews: The respondent universe for the in-depth interviews includes individuals in low-income households who participated in the survey, including NAP participants and low-income individuals not participating in NAP.
Concept-mapping task: The respondent universe for the concept-mapping task includes stakeholders who have technical knowledge of the policies related to Puerto Rico’s food and nutrition system and represent the primary interest groups engaged in food security issues (e.g., representatives from public agencies that administer food security and human service programs, human service providers, advocacy organizations, private sector representatives, and academics or researchers).
B.1.2. Selection Methods
Household survey
A dual-frame sampling approach will be used to identify and sample NAP‐participating and low‐income nonparticipating households and collect representative survey data on household food security in Puerto Rico (see section B.2.1). The two sources to build the household survey sampling frames are an administrative list of NAP participants provided by Administración de Desarrollo Socioeconómico de la Familia (ADSEF) and an area probability sample. The classic multiple‐frame sampling approach1 facilitates selecting samples from overlapping frames—one with near‐complete population coverage and other(s) with potentially lower coverage—to be combined so that the total aggregated sample has the coverage of the more complete frame. The survey instrument will contain questions to identify the overlap across the two sample frames—specifically, to ascertain whether the household was participating in NAP at the time the NAP list was extracted; this information will be used to adjust the sample weights for the overlap.2
In-depth interviews
The study sample for the in-depth interviews will be drawn from survey respondents who agreed to be contacted for an interview and provided contact information for follow-up purposes. Every respondent who indicates interest will be classified into a subgroup based on their NAP participation status and income level, presence of children in the household, and food security status. Respondents who fall into 1 of the 12 subgroups shown in table B.1.1 will be recruited until the study team completes the desired number of interviews for each subgroup.
Table B.1.1. Target Subgroups for In-Depth Interviews

| Food Security Status | NAP Participant, Household With Children | NAP Participant, Household Without Children | Low-Income Nonparticipant, Household With Children | Low-Income Nonparticipant, Household Without Children |
|---|---|---|---|---|
| FS | 12 | 12 | 12 | 12 |
| LFS | 12 | 12 | 12 | 12 |
| VLFS | 12 | 12 | 12 | 12 |

Notes: Cell entries are the target number of completed interviews for each subgroup (144 total). FS = food secure; LFS = low food secure; VLFS = very low food secure
As recommended by a Technical Working Group member, the study team will allow for some flexibility regarding the number of completed interviews per subgroup. Interviewers will participate in weekly or biweekly meetings to discuss recruitment efforts, information gaps, and commonalities across interviewees and subgroups. These discussions can help inform decisions about how many interviews are sufficient for a particular subgroup. If data collectors are unable to recruit an acceptable number of interviewees from the list of survey respondents who express interest, the study team will seek assistance from local community partners that serve the target population. The team will provide a study announcement (see Appendix I.3 and I.6) for local partners to disseminate via social media or display in places where members of the target population are likely to see it. The announcement will include a brief description of the study and a toll-free number. Interviewers residing in Puerto Rico will answer calls to the toll-free number and use a screening script to determine whether a caller belongs to one of the subgroups being recruited (see Appendix I.2 and I.5). Screening questions will ask about NAP participation, presence of children in the household, and household income.
Concept mapping
The study team will assemble a list of stakeholders for the concept-mapping task by asking FNS staff and the study’s Technical Working Group (TWG) members for recommendations. The study team will send identified stakeholders a recruitment letter that explains the concept-mapping task and requests their participation (see Appendix K.1 and K.2). Invited stakeholders will be asked to recommend additional participants until six stakeholder groups representing different sectors (e.g., public agencies, human service providers) are formed. Each group will include about seven members. This number of groups and participating individuals will help ensure each group is small enough to engage in the data collection task, and that stakeholders collectively represent a broad array of knowledge and experience.
B.1.3. Estimated Number of Respondents
This study will gather data through a representative household survey, in-depth interviews with NAP participants and low-income nonparticipants, and concept mapping. This new information collection will have 12,504 respondents (19 Puerto Rico Government staff; 18 staff from businesses or other for-profit organizations; 18 staff from nonprofit organizations; and 12,449 individuals). It is anticipated that of the 12,504 contacted, 3,744 will be responsive and 8,760 will be nonresponsive. Table B.1.2 provides the breakout of respondents and nonrespondents by respondent type and the expected response rate; see section B.3 for details on efforts to ensure the intended response rates are achieved. Appendix T provides a full breakdown of respondents.
Table B.1.2. Breakout of Respondents and Nonrespondents by Respondent Type

| Respondent Type | Subgroup | Total Contacted | Number of Respondents | Number of Nonrespondents | Response Rate |
|---|---|---|---|---|---|
| Concept mapping | | | | | |
| Puerto Rico Government | Human services, education, and healthcare agency staff | 18 | 14 | 4 | 0.78 |
| Business or other for-profit organizations | Private business and academia staff | 18 | 14 | 4 | 0.78 |
| Nonprofit organizations | Advocacy organizations, human services provider staff | 18 | 14 | 4 | 0.78 |
| Survey | | | | | |
| Puerto Rico Government | ADSEF staff | 1 | 1 | 0 | 1.00 |
| Individuals | Pretest participants | 12 | 8 | 4 | 0.67 |
| Individuals | NAP sample | 3,170 | 922 | 2,248 | 0.33^a |
| Individuals | Area probability sample | 9,110 | 2,733 | 6,377 | 0.30 |
| In-depth interviews | | | | | |
| Individuals | Pretest participants | 12 | 9 | 3 | 0.75 |
| Individuals | NAP sample^b | 360 | 58 | 302 | 0.16 |
| Individuals | Local organization recruitment | 145 | 29 | 116 | 0.20 |
| Individuals | Area probability sample^c | 360 | 57 | 303 | 0.16 |
| Total (unique) | | 12,504 | 3,744 | 8,760 | 0.30 |

^a Because the nonrespondents are subsampled at a rate of 1/3, respondents to the telephone follow-up are included with a weight of 3.0 in the response rate computations. Ineligible sampled cases are excluded from both the numerator and denominator of the response rate computation.
^b Individuals in the NAP sample who are contacted for interviews are a subset of the individuals in the NAP sample who complete the survey; thus, they are not included in the totals of unique persons contacted, respondents, and nonrespondents.
^c Individuals in the area probability sample who are contacted for interviews are a subset of the individuals in the area probability sample who complete the survey; thus, they are not included in the totals of unique persons contacted, respondents, and nonrespondents.
B.2. Procedures for the Collection of Information
Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection
Estimation procedure
Degree of accuracy needed for the purpose described in the justification
Unusual problems requiring specialized sampling procedures
Any use of periodic (less frequent than annual) data collection cycles to reduce burden
The in-depth interviews and concept-mapping tasks do not use statistical sampling or estimation methodologies. The information collection procedures used for the in-depth interviews and concept-mapping tasks are described below, followed by a review of statistical methodologies used for the household survey data collection in sections B.2.1 through B.2.5.
In-depth interview procedures
Every 3 weeks during the survey data collection period, the study team will generate a list of survey respondents who volunteer to be contacted for an in-depth interview. The list will include the following data elements: respondent name and contact information, NAP participation status, household size and income, presence of children in the household, and food security status. These data elements will be used to classify each respondent into one of the 12 target subgroups (see table B.1.1).
A field supervisor will assign survey respondent volunteers to trained interviewers. Interviewers will contact individuals by phone to schedule the interviews, using the script shown in Appendix I.1 and I.4. Upon reaching a potential interviewee by phone, the interviewer will explain the purpose of the call and try to schedule the in-person interview within the next few days (see Appendix I.2 and I.5). Avoiding delays between scheduling and conducting the interview will help minimize the likelihood of no-shows or cancellations.
Concept-mapping procedures
The study team will primarily rely on guidance from FNS, TWG members, and a snowball sampling approach to select stakeholders to participate in the concept-mapping process. The team will first seek input from FNS staff and TWG members to identify and recruit stakeholders. Selection criteria will include expertise in policy issues related to NAP and food security in Puerto Rico, familiarity with one or more of the stages of Puerto Rico’s food and nutrition system, and ability to represent the perspectives of key stakeholders in that system. After contacting potential participants recommended by FNS and the TWG via email (see Appendix K.1), the team will solicit recommendations for additional stakeholders from those contacted and repeat the process until all stakeholder group members are selected. The team anticipates convening 6 stakeholder groups with 7 members each, for a total of 42 stakeholder group members.
B.2.1. Statistical Methodology for Stratification and Sample Selection for Household Survey
The study team will use two sources to build the household survey sampling frames: an administrative list of NAP participants provided by ADSEF and an area probability sample.3 This dual-frame approach is designed to balance the relative efficiencies and coverage of each sampling frame. The NAP administrative list provides the most efficient approach for sampling NAP participants, and the area probability sample provides coverage of the full population, including NAP nonparticipants.
The study team will use the NAP administrative list to identify and sample NAP participating households. The area probability sample will be used to identify and sample NAP participating households, low‐income nonparticipating households, and households ineligible for NAP. Table B.2.1 provides an overview of key features of the two sample design components, including the sample sizes and expected response rates for each sampling frame. Data collection will begin at the same time for the two survey sample frames (i.e., NAP participant list and area probability), but data collection procedures and field periods will vary by frame type. The field period is 10 weeks for the NAP participant list frame and 20 weeks for the area probability frame.4
Table B.2.1. Key Features of the Two Components of the Sample Design for the Household Survey

| Sample Design Features | NAP Participant List | Area Probability Sample |
|---|---|---|
| Original sample size (excluding reserve) | 3,170 | 9,110 |
| Eligibility rate (still residing in Puerto Rico) | 0.9 | 1.0 |
| Response rate to mailed web invitation and paper survey^a | 0.3 | N/A |
| Web/paper survey completes | 856 | N/A |
| Subsampling rate for phone follow‐up | 0.33 | N/A |
| Number of cases sent for phone follow‐up | 666 | N/A |
| Phone follow‐up response rate (conditional) | 0.1 | N/A |
| Phone completes | 66 | N/A |
| Overall response rate | 0.33 | 0.3 |
| Overall yield rate | 0.29 | 0.3 |
| Completed baseline interviews^b | 922 | 2,733 |

Note: Response rates experienced in similar studies of SNAP participants: (1) Food Insecurity Nutrition Incentive grant evaluation (63 percent), (2) Barriers That Constrain the Adequacy of Supplemental Nutrition Assistance Program (SNAP) Allotments (40 percent), and (3) Nutrition Assistance in Farmers Markets: Task 2‐Understanding the Shopping Patterns of SNAP Participants (46 percent)
^a Assumes $5 prepaid cash incentive, initial survey mailing, postcard, second survey mailing
^b Respondents from both the NAP participant list and the area probability sample have three response options: mail back the paper survey, complete the web-based survey online, or schedule a telephone interview
NAP participant list frame
The Insight team will work with ADSEF to obtain administrative case files of NAP participants, which will be used as the NAP participant list sampling frame. The NAP participant list includes the household’s address, phone number, and other variables related to NAP participation, which may include household size, income, NAP start date, and other demographics. This sampling frame will provide total coverage of NAP participants at the time the list is created. Depending on the time lag between the list creation date and data collection, there may be newly enrolled NAP participants who are not included in the NAP participant list. These households will not have a chance to be selected from the NAP participant list frame, but they will be included on the area probability sampling frame.
The NAP participant list frame may also include sampled households that no longer participate in NAP at the time of data collection. These households will be included in the study and analyzed as part of the non‐NAP subgroup. The study team will request the participant list from ADSEF (see Appendix J.1–J.2) as close to data collection as possible to reduce the number of households that have a change in NAP participation status between the time the list is created and the data collection period.
A systematic sample of households will be selected from the NAP participant list sampling frame. Variables available on the frame will be reviewed as possible stratification variables. Variables chosen for stratification will be those expected to be correlated with food security, including geographic variables, income, household size, and demographic variables. A total of 3,170 households will be sampled from the NAP participant list sampling frame.
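To illustrate the mechanics of this step, the sketch below draws an implicitly stratified systematic sample in Python; the sort keys shown are placeholders, since the actual stratification variables will be chosen only after the frame is reviewed.

```python
import numpy as np
import pandas as pd

def systematic_sample(frame: pd.DataFrame, n: int, sort_keys: list,
                      seed: int = 12345) -> pd.DataFrame:
    """Select an implicitly stratified systematic sample of n households.

    Sorting the frame by the stratification variables before applying a
    fixed sampling interval spreads the sample proportionately across
    the sorted (stratified) list.
    """
    rng = np.random.default_rng(seed)
    ordered = frame.sort_values(sort_keys).reset_index(drop=True)
    interval = len(ordered) / n              # fractional sampling interval
    start = rng.uniform(0, interval)         # random start within first interval
    positions = (start + interval * np.arange(n)).astype(int)
    return ordered.iloc[positions]

# Hypothetical usage; the sort keys below are illustrative stratifiers only:
# nap_sample = systematic_sample(nap_frame, n=3170,
#                                sort_keys=["municipio", "household_size", "income"])
```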
Using the mailing addresses provided by ADSEF, the study team will mail an initial invitation letter with a $5 prepaid cash incentive to households sampled from the NAP participant list. The letter will explain the study, offer a $40 postparticipation incentive (paid by gift card), and invite an adult member of the household to complete the survey via web (see Appendix G.1 and G.12).
One week after the initial mailing, the study team will send a postcard to nonrespondents, reminding them to complete the survey (see Appendix G.2 and G.13). Next, the study team will mail nonrespondents a paper survey packet, which will include a cover letter with a link to the web survey, a hardcopy version of the survey, and a postage-paid return envelope (see Appendix G.3 and G.14). Approximately 5 weeks after the initial mailing, remaining nonrespondents will receive a second paper survey packet. The response rate for the survey will depend on the reliability of addresses on the NAP list and cooperation of selected households. The study team will follow up with a one‐third subsample of nonrespondents by telephone, using the phone numbers provided on the NAP participant list (see Appendix G.6 and G.17). Telephone interviewers will reference prior communications (survey mailings and reminders) sent by the study team. See table B.3.2 for a brief overview of the timeline for data collection activities for households sampled from the NAP list.
Table B.3.2. Timeline for Sampled NAP Participant Data Collection Activities

| Time Period | Activity | Appendix |
|---|---|---|
| Week 1 | Initial survey packet mailing | G.1 and G.12 |
| Week 2 | Reminder postcard to nonrespondents | G.2 and G.13 |
| Week 4 | First paper survey packet mailing to nonrespondents | G.3 and G.14 |
| Week 6 | Second paper survey mailing to nonrespondents | G.3 and G.14 |
| Weeks 8 to 10 | Telephone follow-up with nonrespondents | G.6 and G.17 |
Area probability frame
The study team will use a stratified multistage sampling design to sample households from the area probability frame. Households will be sampled within geographic areas (segments). Segments (the primary sampling units) will be formed by grouping neighboring census blocks until the number of housing units in the census blocks (based on data from the 2020 Census) is large enough to support the sample to be selected within the segment. The study team will select a stratified probability proportionate to size (PPS) sample of segments with core‐based statistical areas as the strata. A total of 100 segments will be selected from the area probability frame.
The area probability sample will include a sample of 9,110 addresses selected within the 100 sampled segments with equal probabilities within segments. The target sampling rate to be applied to addresses within a given segment will be computed by dividing the overall target sampling rate (9,110 divided by the total number of housing units in Puerto Rico, based on the 2020 Census) by the probability of selection of the segment. The sampling interval for the segment will be obtained by rounding the reciprocal of the target sampling rate to the nearest integer, k. Data collectors will have maps of each segment and instructions on how to move about the segment, sampling every kth residential unit into the sample.
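The interval computation described above can be sketched as follows; all inputs in the example call are hypothetical (the 2020 Census housing-unit total and the segment selection probability would come from the actual frame).

```python
def segment_sampling_interval(total_sample: int, total_housing_units: int,
                              p_segment: float) -> int:
    """Compute the within-segment systematic sampling interval k.

    The overall target rate is divided by the segment's PPS selection
    probability so every address has (approximately) the same overall
    probability of selection across the two stages.
    """
    overall_rate = total_sample / total_housing_units
    within_segment_rate = overall_rate / p_segment
    return round(1 / within_segment_rate)    # reciprocal, rounded to nearest integer

# Hypothetical example: a segment of about 1,000 housing units selected with
# PPS probability 100 * 1,000 / 1,200,000 (both counts are placeholders):
# k = segment_sampling_interval(9110, 1_200_000, 100 * 1000 / 1_200_000)
# -> k is about 11, so the data collector samples every 11th residential unit
```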
To achieve cost efficiencies, the team plans to use local data collectors, who will knock on doors and attempt to talk with an adult member of each sampled household about their selection into the study and the purpose of the study. If a member of the household answers, the data collector will hand that person a packet including a cover letter, paper survey, postage-paid return envelope, and a $5 prepaid cash incentive (see Appendix G.7, G.8, G.18, and G.19). If no one answers the door at a sampled household, the data collector will leave the packet at the doorstep.
When speaking with an adult household member, the field data collector will explain the four possible options to complete the survey and receive the $40 postparticipation incentive (paid by gift card): (1) visit the study website and complete the web survey, (2) mail the completed survey to the study team’s home office, (3) schedule a time for the data collector to return and pick up the survey, or (4) schedule a time to complete the survey by phone.
Reserve sample
In addition to the primary sample described above, the study team will select a reserve sample that is 5 percent of the primary sample size and will be prepared to release it if the sample yield falls substantially short of expectations. For the area probability sample, the reserve sample will be an additional 5 segments that are expected to yield an additional 456 sampled addresses. For the NAP list sample, the reserve sample will be an additional 159 sampled households. The study team will monitor the sample and numbers of completed surveys to determine whether release of any reserve sample is needed during data collection for the baseline study.
Quality control procedures for survey data collection
The study team will incorporate quality control (QC) procedures across all survey data collection activities for the duration of the field period. The study team will develop a Study Management System to manage all survey data collection activities and track each case throughout its lifecycle.
The team responsible for hardcopy surveys will carefully inspect the contents of the survey packages before they are mailed. QC procedures for the field staff will involve tracking their location, verifying the amount of time spent on each block, and assessing recordings of contact attempts. QC procedures for returned paper surveys will involve tracking and comparing counts at all stages of processing to ensure every mailed survey has a status code. The study team will implement standard visual inspection protocols to verify the accuracy of the data-capturing process for the TeleForm paper surveys. QC procedures for the telephone surveys will involve listening to 10 percent of survey administrations (in real time or recorded) and providing feedback to the interviewers to improve interviewing techniques (if needed) for gaining cooperation, asking questions, or recording responses. The study team will develop a standardized QC checklist for team supervisors to review the telephone administration of the survey. Given the multimode data collection approach, the study team will harmonize the data across the modes prior to analyses. Once harmonized, the team will use descriptive statistics to identify and address outliers and to review response rates for key survey items.
B.2.2. Estimation Procedure for Household Survey
Overview
The study team will compute survey weights to account for the differential probabilities of selection of households from the two sampling frames, subsampling for nonresponse follow-up, differential nonresponse, and overlap between the two samples.
The team will use a classical design‐based approach, beginning with base weights constructed as inverses of the probabilities of selection. In the perfect data collection context (e.g., no nonresponse, no undercoverage), this scheme produces unbiased estimates and does not require any model assumptions. However, the base weights must be modified to account for undercoverage and nonresponse to reduce bias. The overlap in the two sampling frames also necessitates compositing of their weights to avoid overrepresentation of households in the overlap. The study team will calculate the household base weights and associated adjustments (e.g., unknown eligibility adjustments, nonresponse adjustments) separately for the NAP list sample and the area probability sample because the samples will be selected from separate frames with differential sampling rates, and differences in data collection procedures are expected to result in different response propensities among the two samples. The weighting steps are discussed below, followed by a review of the specifics of the weighting steps as they relate to each sampling frame.
The base weight will be adjusted for unknown eligibility status and nonresponse. The study team will use the weighting‐class method5 for the unknown eligibility and nonresponse adjustments. The team will form weighting classes by identifying variables correlated with the propensity to respond as determined via a classification tree algorithm. Variables used for creating weighting‐class cells must be available for all sampled cases regardless of their response status. Therefore, variables available on the sampling frame are candidates for forming weighting‐class cells.
Base weights. The base weight is the inverse of the probability of selection. For households sampled in the area probability frame, the base weight must account for the two stages of sampling, so the base weight is the reciprocal of the product of the probability of selection of the segment and the within‐segment sampling fraction. For households on the NAP list, the study team will adjust the base weight to take into account the subsampling of nonrespondents with available phone numbers for nonresponse follow-up by phone.
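A minimal sketch of the base-weight logic, assuming the selection probabilities are already known for each case (function and variable names are illustrative):

```python
def area_base_weight(p_segment: float, p_within: float) -> float:
    """Base weight for an area probability household: the inverse of the
    product of the segment selection probability and the within-segment
    sampling fraction."""
    return 1.0 / (p_segment * p_within)

def nap_base_weight(p_list: float, in_phone_subsample: bool = False,
                    subsampling_rate: float = 1 / 3) -> float:
    """Base weight for a NAP-list household; nonrespondents retained for
    phone follow-up were subsampled at 1/3, so those cases carry an
    extra factor of 3."""
    weight = 1.0 / p_list
    if in_phone_subsample:
        weight /= subsampling_rate
    return weight
```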
Adjustments for unknown eligibility and nonresponse. Next, the team will adjust the weights for unknown eligibility and nonresponse as follows:
Unknown eligibility. Each sampled household will be classified into four response categories: respondent, eligible nonrespondent, ineligible, and unknown eligibility (cases for which eligibility cannot be determined). To be considered eligible for this study, a household must reside in Puerto Rico. The determination of eligibility differs among sampling frames. For the unknown eligibility adjustment, the study team will adjust the weights for sampled households in which eligibility can be determined to account for households with unknown eligibility within the same weighting class.
Nonresponse. For the nonresponse adjustment, the study team will adjust weights for respondents to account for nonrespondents within the same weighting class. Within each weighting class, the team will calculate the weighting adjustment factor by the ratio of the sum of base weights of all sampled households (including respondents and nonrespondents but excluding ineligibles) in that weighting class to the sum of the household base weights of respondents in that weighting class. The study team will create weighting class cells with a minimum of 30 respondents in each cell. The team will collapse weighting class cells with larger adjustment factors, as determined by reviewing the data, with similar cells to reduce variations in weights. Extreme weights may be influential, and large weight variation will decrease the precision of the estimates.6 Collapsing weighting class cells with large adjustment factors will pool the sampled cases, smoothing the size of the adjustment factor, thereby limiting variation in the weights and the potential for very influential observations.
For the area probability sample, geographic variables, such as urban or rural status and aggregate census tract‐level estimates from the most current American Community Survey (ACS), can be used to adjust for unknown eligibility status and nonresponse. An example of a census‐tract level variable that might be considered in defining weighting classes is a categorized version of the percentage of households in the tract with incomes below the poverty threshold.
The study team will include variables available on the NAP participant list, such as demographics, the number of people in the household, NAP start date, and income, into the classification tree algorithm to form weighting classes for nonresponse adjustment.
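The weighting-class adjustment itself reduces to a ratio computed within each class; a sketch, assuming a case-level data frame with illustrative column names:

```python
import pandas as pd

def nonresponse_adjust(cases: pd.DataFrame) -> pd.DataFrame:
    """Weighting-class nonresponse adjustment.

    Within each class, respondent weights are inflated by the ratio of
    the summed base weights of all eligible sampled cases (respondents
    plus eligible nonrespondents) to the summed base weights of
    respondents. Assumes columns 'weight', 'weighting_class', and
    'status' in {'respondent', 'nonrespondent', 'ineligible'}.
    """
    eligible = cases[cases["status"] != "ineligible"]
    total_by_class = eligible.groupby("weighting_class")["weight"].sum()
    resp_by_class = (eligible[eligible["status"] == "respondent"]
                     .groupby("weighting_class")["weight"].sum())
    factor = total_by_class / resp_by_class        # adjustment factor per class
    respondents = cases[cases["status"] == "respondent"].copy()
    respondents["nr_adj_weight"] = (respondents["weight"]
                                    * respondents["weighting_class"].map(factor))
    return respondents
```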
Compositing adjustment
The study team will composite together the nonresponse adjusted weights of households with completed surveys to account for overlap in the sampling frames. Compositing is a weighting approach that essentially creates a new weight by applying a weighted average adjustment to the weights of two or more samples that represent the same population.7
The questionnaire contains questions to ascertain the overlap of the two sample frames. All households in Puerto Rico are represented on the area probability frame. Therefore, all households sampled from the NAP participant list will also be on the area probability frame. Questions included in the survey will determine whether those households sampled from the area probability frame are also on the NAP participant list frame (by ascertaining whether the households were receiving NAP benefits at the reference date corresponding to the timing of the NAP list extracted for sampling purposes). Based on the responses to these survey questions, responding households will be classified as shown in table B.2.2. The compositing factor (λ) for each group is shown in the last column. The composited weight is the product of the nonresponse adjusted weight and the compositing factor.
Table B.2.2. Computation of the Compositing Factor

| Group | Sampled From Area Probability Frame | Sampled From NAP List | Household Participates in NAP | Effective Sample Size* | Compositing Factor (λ) |
|---|---|---|---|---|---|
| a | X | | | n_a | 1 |
| b | X | | X | n_b | n_b/(n_b + n_c) |
| c | | X | N/A | n_c | n_c/(n_b + n_c) |

*Computed as the number of respondents divided by the unequal weighting design effect. The unequal weighting design effect is estimated as 1 + cv², where cv is the coefficient of variation of the nonresponse‐adjusted weights of the respondents in the group.
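The effective sample sizes and compositing factors in table B.2.2 could be computed along the following lines (a sketch, using the 1 + cv² design effect from the table footnote):

```python
import numpy as np

def effective_n(weights: np.ndarray) -> float:
    """Effective sample size: n divided by the unequal weighting design
    effect 1 + cv^2, where cv is the coefficient of variation of the
    nonresponse-adjusted weights."""
    cv_squared = np.var(weights) / np.mean(weights) ** 2
    return len(weights) / (1.0 + cv_squared)

def compositing_factors(weights_b: np.ndarray, weights_c: np.ndarray):
    """Compositing factors for groups b and c in table B.2.2 (group a
    always receives a factor of 1)."""
    n_b, n_c = effective_n(weights_b), effective_n(weights_c)
    return n_b / (n_b + n_c), n_c / (n_b + n_c)
```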
Calibration adjustment
To compute the final household weight, the study team will calibrate the composite weights to external household totals so that the sum of weights of the respondent sample matches the total number of households in Puerto Rico from external sources, overall and for categories of variables used in the calibration adjustment. To facilitate the use of multiple characteristics in this calibration adjustment, raking8 will be used as the calibration method. To the extent possible, the respondent sample will be calibrated to external totals based on the key subgroups of interest: (1) NAP participation, (2) presence of children in household, (3) presence of older individuals (60 and older) in household, (4) presence of adults with disabilities in household, and (5) the low‐income nonparticipant subgroup membership. Because of associations among these dimensions, it is expected that not all dimensions will be retained in the raking adjustment. The team will give priority to NAP participation status and collapse or eliminate other dimensions as necessary. For all dimensions other than NAP participation, the study team will obtain the external totals from the most recent 5‐year ACS. The team will obtain the NAP participation external total from administrative counts.
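Raking itself is an iterative proportional fitting loop; the sketch below shows the core algorithm under the assumption that external control totals are supplied per category (column names and margins are illustrative placeholders):

```python
import pandas as pd

def rake(df: pd.DataFrame, weight_col: str, margins: dict,
         max_iter: int = 50, tol: float = 1e-6) -> pd.Series:
    """Rake weights so weighted counts match external control totals.

    `margins` maps a column name (e.g., 'nap_status') to a dict of
    {category: external total}. The loop cycles over the dimensions,
    scaling weights toward each margin until the adjustments converge.
    """
    weights = df[weight_col].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for column, controls in margins.items():
            for category, target in controls.items():
                mask = df[column] == category
                current = weights.loc[mask].sum()
                if current > 0:
                    ratio = target / current
                    weights.loc[mask] *= ratio
                    max_change = max(max_change, abs(ratio - 1.0))
        if max_change < tol:
            break
    return weights
```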
Because of the inherent differences in the two sampling frames, there may be large variations in the weights. Extreme weights may be influential, and large weight variation will decrease the precision of the estimates.9 The team will consider trimming extreme outliers to mitigate their impact on analysis results and will again rake the trimmed weights to the external totals.
Variance estimation
The study team can estimate the precision of the survey estimates using replication methods, such as the jackknife method, or the linearization approach based on Taylor Series approximations.10 The replication approach is implemented by defining subsamples (replicates) and applying weighting adjustments for each replicate separately. In addition to the weights described above (the “full‐sample weight”), the study team will compute and provide replicate weights on the analysis files. With the replicate weights, standard software, such as R, Stata, SAS (Survey procs), and SUDAAN, can be used for analyses to appropriately reflect the precision, accounting for the complex sample design and estimation scheme. The team will also include variance stratum and variance unit identifiers on each record in the analysis files so the Taylor Series linearization approach can be used (as an alternative to replication) by analysts who prefer that method. The two methods often give very similar estimates of standard errors, although replication methods are more effective in accounting for the precision effects of several adjustments to the weights. The differences in the standard errors obtained by the two methods are likely to be more substantial for estimates of totals compared with estimates of means and proportions.11
The study team will base the formation of replicates and the definitions of variance strata on how the sample was selected and will vary the formation and definitions by sampling frame. The team will create replicates and variance strata within each sampling frame. Variance strata will be defined as sampling strata nested within a sampling frame. To create balance across the samples, the team will create 50 replicates for each sampling frame, for a total of 100 replicates to provide sufficient degrees of freedom and limit the number of weight variables, keeping the dataset manageable for data analysis.
For the NAP list sample, the sampling unit is the household, and households will serve as the variance units for this sample. For the area probability sample, there are two stages of selection, with households sampled within sampled segments. The variance strata definition will mirror the definition of the strata defined for sampling, with combining of strata as needed to obtain 50 replicates associated with each of the two samples. The study team will use the stratified jackknife replication method (JKn) to create replicates. For the NAP list sample, within each variance stratum, the team will form equal‐sized clusters (groups) of sampled households by randomly grouping them to create 50 replicates.
The study team will also use the adjustments that produce the full sample weight in computing the replicate weights, with the adjustment factors computed separately within each replicate.
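As an illustration of the replicate-formation step, the sketch below builds delete-a-group jackknife replicate weights from random groups; a production version would form groups within variance strata (and treat segments as the variance units for the area sample) and would recompute each weighting adjustment within every replicate, as described above.

```python
import numpy as np
import pandas as pd

def jackknife_replicate_weights(df: pd.DataFrame, weight_col: str,
                                n_reps: int = 50, seed: int = 2022) -> pd.DataFrame:
    """Form delete-a-group jackknife replicate weights.

    Households are randomly assigned to n_reps equal-sized groups;
    replicate r zeroes out group r and inflates the remaining groups by
    n_reps / (n_reps - 1).
    """
    rng = np.random.default_rng(seed)
    groups = rng.permutation(np.arange(len(df)) % n_reps)  # balanced random groups
    out = df.copy()
    inflation = n_reps / (n_reps - 1)
    for r in range(n_reps):
        out[f"repwt_{r + 1}"] = np.where(groups == r, 0.0,
                                         out[weight_col] * inflation)
    return out
```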
The final weighted sample of responding households will represent the population of households in Puerto Rico. The study team will attach the final full-sample weight and corresponding replicate weights to the respondents’ survey records (as variables in the dataset) for use in analyses. The dataset will contain one record for each respondent, with corresponding household weights.
B.2.3. Degree of Accuracy Needed for the Purpose Described in the Justification
The sample was designed to support estimates for the overall household population of Puerto Rico and key household subgroups for the baseline study and a 2‐year follow‐up with a margin of error of ±5 percent for a 95 percent confidence interval for a binary statistic with a response of 50 percent in each category. This means for a binary characteristic possessed by 50 percent of the population, the margin of error of a 95 percent confidence interval on the estimate would be expected to be 5 percentage points. For example, if 50 percent of households in Puerto Rico are food insecure, a 95 percent confidence interval for that estimate would be expected to go from 45 percent to 55 percent. To achieve the target level of precision, the required effective sample size (i.e., the nominal sample size divided by the expected design effect) can be computed by solving for n in the following:
1.96*sqrt(p*(1‐p)/n) ≤ 0.05
where p = 0.5 and n = the effective sample size. Based on this computation, at least 400 effective completed surveys are required for each key subgroup to meet the precision requirements.
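Worked out, the inequality gives the minimum effective sample size (an arithmetic check):

\[
n \;\ge\; \frac{1.96^{2}\,p(1-p)}{0.05^{2}} \;=\; \frac{3.8416 \times 0.25}{0.0025} \;\approx\; 384.2
\]

so the planning target of 400 effective completes per subgroup exceeds the mathematical minimum of 385.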
The key subgroups of interest and their expected prevalence in the target population appear in table B.2.3. Population percentages of NAP participants were derived from FNS key data reports from 201912 and the estimated population totals from ACS 5‐year (2014–2018) Public Use Microdata Sample (PUMS) datafile. All other proportions were estimated using the ACS 5‐year (2014–2018) PUMS.13
Table B.2.3. Population Prevalence of Key Subgroups

| Household-Level Subgroups | Percent of Population (Households) |
|---|---|
| NAP participating households | 46.7 |
| Households with children | 29.2 |
| Households with older members (60 and older) | 47.7 |
| Households with at least one person with a disability | 40.5 |

Note: Disability status is based on the American Community Survey definition.
Sources: FNS Key Data Reports (2019) for NAP participation estimates; 2014–2018 American Community Survey 5‐year Public Use Microdata Sample for all other estimates
The sample size was designed to be large enough to meet the precision requirement for the subgroup with the lowest expected population prevalence, households with children.
Design effects
This study has three potential sources of design effects (increases in variance): (1) the differential sampling of households across sampling frames and the need to composite the samples, (2) the effect of the clustering of the area‐probability sample within segments, and (3) the weighting adjustments to account for differential nonresponse and coverage. Accounting for these aspects, the sample design and weighting adjustments are expected to yield an overall design effect of 1.5.
Yield rates
The yield rate is the expected number of respondents achieved from a fielded sample divided by the number of cases in the fielded sample. The yield rate differs from the response rate because it retains ineligible cases in the denominator; for example, the NAP list sample's expected overall yield rate of 0.29 (922 respondents from 3,170 fielded cases) is lower than its expected response rate of 0.33, which excludes ineligibles (see table B.2.1). The yield rate provides information on the level of effort and sample sizes required for a study design, while the response rate provides information on the representativeness of the survey respondents. Yield rates will vary by sampling frame and mode.
The sample size and yield rate for the study are driven by the precision requirements for a potential 2-year follow-up study in addition to the current study. Eligibility for the 2‐year follow‐up is based on at least one member of the responding household being alive and living in Puerto Rico. Thus, the estimation of eligibility rates for the 2‐year follow‐up study requires consideration of the possibility of a natural disaster in that 2‐year period. Migration patterns following Hurricane Maria can be used to inform eligibility rate assumptions for the 2-year follow-up, though each natural disaster is unique. Acosta et al. estimated outmigration following Hurricane Maria to be between 4 and 17 percent.14 Eligibility for the 2‐year follow‐up study is not expected to vary by sampling frame.
The study team has conducted a sensitivity analysis to explore the variation in the yield rate under a best case scenario of 95 percent eligible, a worst case scenario of 80 percent eligible, and a middle scenario of 90 percent eligible. These estimates are based on past experiences of similar studies, including studies in Puerto Rico. Table B.2.4 provides expected response rates for each sampling frame for the three scenarios for the baseline study. Table B.2.5 provides the corresponding yield‐rate scenarios explored based on the expected response and eligibility rates and the resulting expected number of respondents for each scenario for the baseline study.
For the 2‐year follow‐up, the study team anticipates the following response rates: a best case scenario of 90 percent, a worst case scenario of 60 percent, and a middle scenario of 80 percent. Eligibility rates for the 2‐year follow‐up study should not vary by sampling frame. Assuming the middle response rate for the baseline study and the eligibility rates discussed previously, table B.2.6 provides expected yield rates for the 2‐year follow‐up survey.
Table B.2.4. Expected Response Rates by Sampling Frame for Baseline Study

| Sampling Frame | Best | Middle | Worst |
|---|---|---|---|
| NAP participant list (mail) | 40 | 30 | 20 |
| NAP participant list (phone follow‐up) | 15 | 10 | 5 |
| Area probability | 40 | 30 | 20 |

Note: Entries are expected response rates in percent under each scenario.
Table B.2.5. Expected Yield Rates and Number of Respondents for Each Sampling Frame for the Baseline Study

| Sampling Frame | Yield Rate: Best | Yield Rate: Middle | Yield Rate: Worst | Respondents: Best | Respondents: Middle | Respondents: Worst |
|---|---|---|---|---|---|---|
| NAP participant list (overall) | 39 | 29 | 19 | 1,227 | 922 | 609 |
| Area probability | 40 | 30 | 20 | 3,644 | 2,733 | 1,822 |
| Total | | | | 4,871 | 3,655 | 2,431 |

Note: Yield rates are in percent.
Table B.2.6. Expected Yield Rates and Number of Respondents for the 2‐Year Follow‐Up

| Overall Yield at 2-Year Follow-Up | Yield Rate: Best | Yield Rate: Middle | Yield Rate: Worst | Respondents: Best | Respondents: Middle | Respondents: Worst |
|---|---|---|---|---|---|---|
| Total | 86 | 72 | 48 | 3,125 | 2,632 | 1,755 |

Note: Yield rates are in percent.
Expected levels of precision
Based on the estimated design effect and yield rates, table B.2.7 provides the resulting estimates of the margin of error for the baseline study when estimating a binary characteristic with 50 percent having the characteristic. Even in the worst case scenario, there are enough respondents to the survey to meet the precision requirements for the household subgroups.
Table B.2.7. Expected Margin of Error for Best, Worst, and Middle Yield‐Rate Scenarios for Estimates of a 50-Percent Characteristic: Baseline Study

| Subgroup | Best | Middle | Worst |
|---|---|---|---|
| NAP participants | 2.25 | 2.59 | 3.18 |
| NAP nonparticipants | 2.62 | 3.09 | 3.78 |
| Households with children | 3.18 | 3.67 | 4.51 |
| Households with at least one older person (60 and older) | 2.49 | 2.87 | 3.52 |
| Households with at least one person with a disability | 2.70 | 3.12 | 3.83 |

Note: Entries are expected margins of error in percent by scenario.
Considering the middle response rate scenario and a design effect of 1.5, table B.2.8 provides the resulting estimates of the margin of error when estimating a binary characteristic with 20, 30, and 40 percent having the characteristic.
Table B.2.8. Expected Margin of Error for Middle Yield‐Rate Scenario for Estimates of 20, 30, and 40 Percent Characteristics: Baseline Study

| Subgroup | P = 20 Percent | P = 30 Percent | P = 40 Percent |
|---|---|---|---|
| NAP participants | 2.07 | 2.38 | 2.54 |
| NAP nonparticipants | 2.47 | 2.83 | 3.02 |
| Households with children | 2.94 | 3.37 | 3.60 |
| Households with at least one older person (60 and older) | 2.30 | 2.63 | 2.82 |
| Households with at least one person with a disability | 2.50 | 2.86 | 3.06 |

Note: P is the population prevalence of the characteristic of interest. Entries are expected margins of error in percent.
Based on the estimated design effect and yield rates, table B.2.9 provides the resulting estimates of the margin of error for the 2‐year follow‐up study when estimating a binary item with 50 percent having the characteristic. The sample was designed to meet the 5 percent precision requirement for the least prevalent household subgroup for the middle yield‐rate scenario at the 2‐year follow‐up. Thus, the expected margin of error for the households with children subgroup for the middle scenario is 5 percent. If the yield rates are low (as in the worst‐case scenario), the study will be unable to meet the precision requirements for most of the subgroups of interest at the 2‐year follow‐up; however, the margins of error are still expected to be under 10 percent for these subgroups under the worst case scenario.
Table B.2.9. Expected Margin of Error for Best, Worst, and Middle Yield‐Rate Scenarios for Estimates of a 50-Percent Characteristic: 2-Year Follow-up Study

| Subgroup | Best | Middle | Worst |
|---|---|---|---|
| NAP participants | 2.81 | 3.53 | 5.30 |
| NAP nonparticipants | 3.27 | 4.20 | 6.30 |
| Households with children | 3.97 | 5.00 | 7.51 |
| Households with at least one older person (60 and older) | 3.11 | 3.91 | 5.87 |
| Households with at least one person with a disability | 3.37 | 4.24 | 6.38 |

Note: Entries are expected margins of error in percent by scenario.
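The margins of error in tables B.2.7 through B.2.9 follow from the standard formula, inflating the simple-random-sampling variance by the design effect of 1.5; a sketch (the respondent count in the example call is a hypothetical placeholder):

```python
import math

def margin_of_error(p: float, n_respondents: float, deff: float = 1.5,
                    z: float = 1.96) -> float:
    """95-percent margin of error for an estimated proportion p,
    inflating the simple-random-sampling variance by the design effect."""
    return z * math.sqrt(deff * p * (1.0 - p) / n_respondents)

# Hypothetical check with a placeholder respondent count:
# margin_of_error(0.5, 2200)  # ~0.026, i.e., about 2.6 percentage points
```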
B.2.4. Unusual Problems Requiring Specialized Sampling Procedures
There are no unusual problems that require specialized sampling procedures.
B.2.5. Any Use of Periodic (Less Frequent than Annual) Data Collection Cycles to Reduce Burden
This is a one-time study. Concern about the periodicity of data collection cycles is not applicable.
B.3. Methods to Maximize Response Rates and the Issue of Nonresponse
Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
B.3.1. Methods to Maximize Response Rates
Household survey
The data collection plan for the household survey includes multiple strategies intended to maximize response rates and address issues of nonresponse. All sampled households will receive a $5 prepaid cash incentive to encourage participation. Previous studies, including the Farmers’ Market Client Survey incentive experiment, have shown that a small noncontingent incentive has a positive effect on response rates15, 16 (see Appendix H). All responding households will also receive a $40 postparticipation incentive (paid by gift card17) for completing the survey. Households will have the option of completing the survey by web, mail, or telephone. This multimode approach addresses the preferences and needs of the different respondents in the household sample.
The study team will make several attempts to reach households sampled from the NAP participant list. Specifically, these households will receive an initial invitation letter with a $5 prepaid cash incentive, a reminder postcard, and up to two mail surveys. Among nonrespondents with available telephone numbers, a one-third subsample will be randomly selected and contacted by certified telephone data collectors to complete a computer-assisted telephone interview (CATI).
To encourage participation among eligible households selected from the area probability frame, field data collectors will hand deliver a survey packet including a $5 prepaid cash incentive, a hardcopy version of the survey, and a postage‐paid return envelope. When speaking with an adult household member, field data collectors will offer several options for completing the survey: visiting the study website and completing the web survey, mailing the completed paper survey to the study team’s home office, scheduling a time for the data collector to return and pick up the survey, or scheduling a time to complete the survey by phone.
To ensure participants view the study as legitimate, the study team will develop and host a study website (see Appendix G.9, G.10, G.20, and G.21 for website text). The website will provide an overview of the study, responses to frequently asked questions, and contact information for the study team (USDA FNS, Insight, Westat, and Estudios Técnicos).
Based on the expected response rate of 33 percent, it is anticipated that among the 922 completed surveys from households sampled from the NAP list, approximately 856 households will complete the survey by web or mail prior to phone follow-up of nonrespondents and 66 will complete it by CATI after follow-up (see table B.2.1). The team anticipates having 2,733 completed surveys from eligible households from the area probability frame, with approximately 91 households completing the survey by phone and the remaining 2,642 households completing the survey by web, mail, or data collector pickup. Each sample member will be assigned a unique study identifier to track them across survey modes and ensure respondents complete the survey only once.
In-depth interviews
The last page of the household survey will invite respondents to provide their name and a primary and secondary telephone number if they are interested in participating in a one-on-one interview. The data collection plan for the in-depth interviews includes several strategies intended to maximize participation, starting with offering an incentive. If approved, the survey will inform respondents that in-depth interview participants will receive a $50 postparticipation incentive, distributed via gift card or cash app, as a thank you for their time. To minimize delays between survey completion and interviewer follow-up, the field manager will assign cases to interviewers on a rolling basis. All interviewers will be local Puerto Rico residents, and they will contact respondents using a local phone number. When trying to schedule an interview, they will ask potential participants to suggest a convenient time and location. If recruitment from the survey respondent sample does not yield 144 completed interviews, the study team will implement a backup recruitment plan. Specifically, the team will develop a study announcement for dissemination by the team’s data collection partner in Puerto Rico or by trusted local community partners that serve the target population (see Appendix I.3 and I.6). The announcement will provide general information about the interview topics and the participants the study is seeking, plus a toll-free number individuals can call if they are interested in participating. Interviewers will answer calls to the toll-free number and use a screening script to determine whether a caller belongs to one of the subgroups being recruited (see Appendix I.2 and I.5).
Concept-mapping task
The study team does not anticipate any challenges related to recruiting stakeholders for the concept-mapping task because the team plans to invite participants who are invested in the issues and have been recommended by FNS or members of the TWG. A $50 honorarium (paid by check) per meeting will be offered to concept-mapping participants.
B.3.2. Nonresponse Bias Analysis
The study team will conduct a nonresponse bias analysis to evaluate the potential impact of nonresponse on the survey estimates and the effectiveness of the weight adjustments in reducing potential nonresponse biases.
The team will benchmark the survey estimates to external totals for demographics and other key variables of interest, such as measures of poverty. The most recent ACS 5-year estimates will be used as the external source. The study team will compare weighted survey estimates (based on the final raked weights) with estimates from the ACS. For characteristics used in raking, the team will use the adjusted weights without the raking adjustment in this comparison. For this benchmarking analysis, the team will produce tables that include the weighted estimates, standard errors, 95 percent confidence intervals, and the estimates from the ACS.
Next, the study team will evaluate differences found in comparisons between weighted estimates (using the final survey weight) of responding NAP participants (from both samples, combined) and totals from the full NAP participant list for frame characteristics of interest, chosen in consultation with FNS (e.g., presence of an older person in household, highest educational attainment in household, whether anyone in household is employed, household income), depending on the availability of such characteristics on the NAP participant list. For each comparison, the study team will produce t-tests and resulting p-values.
Last, the team will compare estimates of survey outcomes of respondents (e.g., food insecurity) using the base weight with the compositing adjustment and fully weighted estimates of those same survey outcomes. This process provides an alternative way of assessing how nonresponse may have affected the distribution of the respondent sample and, therefore, may potentially affect the sample-based estimates. The team will produce t-test statistics and associated p-values for differences in estimates before and after weighting adjustments.
In addition to the nonresponse bias analysis, the study team will consider using imputation techniques to reduce potential nonresponse bias by filling in missing data for secondary variables. Missing data on outcome variables and key variables of interest will not be imputed. Imputation is widely used to address item nonresponse in survey datasets because it makes analysis more consistent, avoids large biases in estimates of totals, and, in some circumstances, can reduce nonresponse bias in other types of estimates. Eliminating missing data through imputation also enables analysts to include all cases in multivariate analyses. The general approach is to use the information available for a record to assign values for the record's missing items.
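As a concrete illustration of that general approach, the sketch below applies a random hot-deck within imputation classes, one common technique consistent with the description above. The class definitions, variable names, and data are hypothetical placeholders; the study team would define classes and select donors from the actual frame and survey variables.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "region": rng.choice(["metro", "north", "south"], 300),
    "owns_home": rng.choice([0, 1], 300),
    "veh_access": rng.choice([0.0, 1.0], 300),         # secondary variable (made up)
})
df.loc[rng.random(300) < 0.1, "veh_access"] = np.nan   # ~10% item nonresponse

def hot_deck(group):
    """Fill missing values with random donors from the same imputation class."""
    group = group.copy()
    donors = group["veh_access"].dropna()
    missing = group["veh_access"].isna()
    if len(donors) and missing.any():
        group.loc[missing, "veh_access"] = rng.choice(donors, missing.sum())
    return group

df = df.groupby(["region", "owns_home"], group_keys=False).apply(hot_deck)
print(df["veh_access"].isna().sum(), "missing values remain")
```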
B.4. Tests of Procedures or Methods to Be Undertaken
Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
The household survey was pretested with 8 low-income respondents in Puerto Rico (Appendix O, Table 2), and the in-depth interview guide, which contains entirely different questions, was pretested with 9 respondents (Appendix O, Table 3), consistent with OIRA guidance on pretesting. The overarching objective of the pretest was to ensure the instruments were clear and understandable to respondents. Specific pretest objectives for both instruments were to (1) identify problems related to communicating the intent or meaning of questions; (2) ensure the Spanish translation conveys the intended meaning; (3) determine whether respondents can accurately provide the information requested; and (4) evaluate the flow, question order, and respondent burden in terms of the number of questions and the recall of difficult events such as natural disasters. The survey pretest also sought to assess the relevance and range of the response options; identify problems with introductions, instructions, or explanations; and assess the cultural relevance of the questions.
As a result of the survey pretest, the team made several revisions to the survey, including adjusting examples and clarifying skip patterns and instructions. The survey pretest confirmed that initial burden estimates for the survey were accurate. As a result of the in-depth interview pretest, the team made minor revisions to the protocol and identified a few items to address in the data collector training. The interview pretest likewise confirmed that initial burden estimates were accurate. Appendix O details the pretest methods and findings.
B.5. Individuals Consulted on Statistical Aspects of the Design
Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
FNS consulted with a mathematical statistician from USDA's National Agricultural Statistics Service (NASS), who reviewed the study methodology and procedures (see Table B.5.1). The review from NASS and the study team's response to NASS's comments appear in Appendix N. FNS has contracted with Insight Policy Research to assist in conducting this study. Table B.5.1 lists the names and contact information of individuals consulted on statistical aspects of the design and the Insight team members responsible for the collection and analysis of the study data. The Project Officer for the contract providing funding for the evaluation, Dr. Kristen Corey, will be responsible for receiving and approving all contract deliverables.
Table B.5.1. Individuals Consulted on Statistical Aspects of the Design and Individuals Responsible for Data Collection and Analysis

Name | Title | Organizational Affiliation | Contact Information
Brent Farley | Mathematical Statistician | National Agricultural Statistics Service |
Kristen Corey | Social Science Research Analyst | USDA FNS |
Claire Wilson | Director, Human Services | Insight Policy Research |
Tracy Vericker | Associate Director, Social Policy and Economics Research | Westat |
Jill DeMatteis | Vice President | Westat |
Jennifer Kali | Statistician | Westat |
Anitza Cox | Director, Analysis and Social Policy | Estudios Técnicos |
1 Lohr, S. L. (2009). Multiple-frame surveys. Handbook of Statistics, 29, 71–88. Elsevier.
2 It is not possible to identify overlap prior to selection because the addresses in the area probability sample will be determined as interviewers are in the field collecting data.
3 Hansen, M. H., & Hauser, P. M. (1945). Area sampling—some principles of sample design. Public Opinion Quarterly, 9, 183–193.
4 A longer field period is required for the area probability frame to balance sample size with the number of required data collectors.
5 Kalton, G., & Flores-Cervantes, I. (2003). Weighting methods. Journal of Official Statistics, 19, 81–97.
6 Kalton, G., & Flores-Cervantes, I. (2003). Weighting methods. Journal of Official Statistics, 19, 81–97.
7 Lohr, S. (2014). When should a multiple frame survey be used? The Survey Statistician, 69, 17–21.
8 Deming, W. E., & Stephan, F. F. (1940). On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. Annals of Mathematical Statistics, 11, 427–444.
9 Potter, F., & Zheng, Y. (2015). Methods and issues in trimming extreme weights in sample surveys. Proceedings of the Joint Statistical Meetings. http://www.asasrms.org/Proceedings/y2015/files/234115.pdf
10 Wolter, K. (2007). Introduction to variance estimation. Springer.
11 Brick, J. M., Valliant, R., & Morganstein, D. (2000). Analysis of complex sample data using replication. https://www.researchgate.net/profile/David_Morganstein/publication/252297575_Analysis_of_Complex_Sample_Data_Using_Replication/links/55562a2e08ae6fd2d8235fbf/Analysis-of-Complex-Sample-Data-Using-Replication.pdf
12 FNS. (2020). SNAP web tables: Puerto Rico Nutrition Assistance Program. https://fns-prod.azureedge.net/sites/default/files/resource-files/PR%24Ben%23Part-7.pdf
13 Although the 2015–2019 ACS PUMS data are available as of the time of submission of this supporting statement, the 2014–2018 PUMS data were the most recent available at the time the sample design was finalized.
14 Acosta, R. J., Kishore, N., Irizarry, R. A., & Buckee, C. O. (2020). Quantifying the dynamics of migration after Hurricane Maria in Puerto Rico. Proceedings of the National Academy of Sciences of the United States of America, 117(51), 32772–32778.
15 Karakus, M., MacAllum, K., Milfort, R., & Hao, H. (2014). Nutrition assistance in farmers markets: Understanding the shopping patterns of SNAP participants. Westat. https://www.fns.usda.gov/sites/default/files/FarmersMarkets-Shopping-Patterns.pdf
16 Mercer, A., Caporaso, A., Cantor, D., & Townsend, R. (2015). How much gets you how much? Monetary incentives and response rates in household surveys. Public Opinion Quarterly, 79(1), 102–129.
17 The study team has not predetermined what type of gift card will be used. As has been approved for previous FNS studies, the team typically asks a local contact (e.g., Estudios Técnicos) during the planning phase about the most useful type of gift card for respondents in that particular community. For example, in some communities, it might be a local grocery store, whereas in other communities, it might be a national chain store. Generic credit card gift cards, such as Visa and MasterCard, include an activation fee above and beyond the value of the card, which means participants would not receive the full benefit of the token of appreciation.