ARRA II Study -- Part B June Update


Study to Assess the Effect of SNAP Participation on Food Security in the post-American Recovery and Reinvestment Act (ARRA) Environment

OMB: 0584-0563




Study to Assess the Effect
of Supplemental Nutrition Assistance Program Participation on Food Security in the
Post-American Recovery and Reinvestment Act Environment

Part B

March 8, 2011



Contract Number:

AG-3198-D-10-0051

Mathematica Reference Number:

06801.410

Submitted to:

Office of Research and Analysis

Supplemental Nutrition Assistance Program

Program Development Division, Certification Policy Branch

USDA Food and Nutrition Service 3101 Park Center Drive, Room 1014

Alexandria, VA 22302

Project Officer: Sarah Zapolsky

Submitted by:

Mathematica Policy Research

600 Maryland Avenue, SW

Suite 550

Washington, DC 20024-2512

Telephone: (202) 484-9220

Facsimile: (202) 863-1763
Project Director: Jim Ohls
Deputy Project Director: James Mabli
Survey Director: Dawn V. Nelson


Study to Assess the Effect
of Supplemental Nutrition Assistance Program Participation on Food Security in the
Post-American Recovery and Reinvestment Act Environment

Part B

March 8, 2011






CONTENTS

A JUSTIFICATION

A.1. Circumstances Making the Collection of Information Necessary

A.2. Purpose and Use of the Information

A.3. Use of Information Technology and Burden Reduction

A.4. Efforts to Identify Duplication and Use of Similar Information

A.5. Impacts on Small Businesses or Other Small Entities

A.6. Consequences of Collecting the Information Less Frequently

A.7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

A.8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

A.9. Explanation of Any Payment or Gift to Respondents

A.10. Assurance of Confidentiality Provided to Respondents

A.11. Justification for Sensitive Questions

A.12. Estimates of Hour Burden Including Annualized Hourly Costs

A.13. Estimates of Other Total Annual Cost Burden to Respondents or Record Keepers

A.14. Annualized Cost to Federal Government

A.15. Explanation for Program Changes or Adjustments

A.16. Plans for Tabulation and Publication and Project Time Schedule

1. Household Telephone Survey Data Analysis

2. Current Population Survey Analysis

3. In-Depth Interview Analysis

4. Project Schedule

A.17. Reason(s) Display of OMB Expiration Date Is Inappropriate

A.18. Exceptions to Certification for Paperwork Reduction Act Submissions

A.19. Customer Service Center

B COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

B.1. Respondent Universe and Sampling Methods

B.2. Procedures for the Collection of Information

B.3. Methods to Maximize Response Rates and to Deal with Nonresponse

B.4. Test of Procedures or Methods to be Undertaken

B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

References



Legislative Authority

APPENDIX A: Telephone Survey – English

APPENDIX B: Telephone Survey – Spanish

APPENDIX C: In-Depth Interview Guide

COPY OF FEDERAL REGISTER NOTICE

APPENDIX D: Comments Received in Response to Federal Register Notice

APPENDIX E: FNS Response to Federal Register Comments Received

APPENDIX F: Respondent Advance Letter—Baseline Telephone Survey (English & Spanish)

APPENDIX G: Respondent Reminder Post Card (English & Spanish)

APPENDIX H: Respondent Advance Letter—Follow-up Telephone Survey (English & Spanish)

APPENDIX I: Respondent Thank You Letter (English & Spanish)

APPENDIX J: In-Depth Interview Consent Form

APPENDIX K: NASS Comments

TABLES

A.1 Estimated Total Annual Hour Burden by Respondent Type

A.2 Percentage of Households That Are Food Insecure, by Length of SNAP Participation

A.3 Regression Coefficients of the Effects of SNAP Participation and Household Characteristics on a Household’s Likelihood of Being Food Insecure

A.4 Regression-Adjusted Percentages of Households That Are Food Insecure, by Length of SNAP Participation

A.5 Regression-Adjusted Percentages of Households That Are Food Insecure, by Length of SNAP Participation and by Whether Household Has Children

A.6 Percentage of SNAP Households That Are Food Insecure Before and After the 2009 ARRA Benefit Increase, by CPS-FSS Sample

A.7 Percentage of Households That Are Food Insecure, by SNAP Participation Status: Changes from 2008 to 2009

B.1 Data Collection Assumptions

B.2 Minimum Detectable Differences over Time in the Longitudinal Design

B.3 Minimum Detectable Differences over Time in the Cross-Sectional Design




B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

B.1. Respondent Universe and Sampling Methods

To meet the objectives outlined in Supporting Statement Part A, Section A.2, we will select two representative samples, one composed of newly certified SNAP households and the other of SNAP households that, in their current spell, have participated in the program for six to seven months. Interviews with both groups will be conducted in English and Spanish.

Outcomes for the newly certified SNAP households will be compared with outcomes for SNAP households that have participated for six to seven months (a cross-sectional analysis). In addition, outcomes for the newly certified households will be compared with outcomes for those same households six months later, among those that continue to participate (a longitudinal analysis).

Sampling methods. A key consideration in our approach to sampling is that the only practical way to obtain sample frames for a national sample of SNAP participants is through the state agencies that operate the program. The USDA’s Food and Nutrition Service does not have a national file with the information that is needed. In light of this, we will ensure efficiency in sampling for this project by drawing the sample of SNAP participants in a two-stage process. First, we will draw a sample of states, using probability proportional to size sampling. Second, we will draw samples of participant households from caseload files provided by participating states.

States will be selected from the 48 contiguous states and the District of Columbia. In sampling the states, the measure of size will be the number of SNAP households. At the first stage, we will select 30 states; all states with at least one-thirtieth of the national caseload (about 14 states) will be sampled with certainty, and the rest will be sampled with probabilities proportional to size.

In sampling individual households within states from state-supplied caseload files, the sampling rates will be calculated according to the following principles:

  • For each certainty state, the sample size will be set proportional to the size of the state’s caseload. For instance, if state A and state B are both certainty states, and if the caseload in state A is 50 percent larger than in state B, then state A will have 1.5 times the sample of state B.

  • For the sample states not chosen with certainty, equal-sized samples will be taken.

Given the total sample size across all states, which is specified below, these sampling rules define a unique number of cases to be selected from each state.
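To make these allocation rules concrete, the following sketch illustrates certainty-state identification and within-state allocation in Python. The caseload figures, function names, and the assumption that the total sample is split between the certainty and non-certainty groups in proportion to the caseload each group represents are all illustrative, not the project's production sampling system.

    import random

    def two_stage_allocation(caseloads, n_states=30, total_sample=11000):
        """Illustrative sketch of the two-stage sampling rules.

        caseloads maps each state (including DC) to its number of SNAP
        households.  The proportional split of the total sample between
        certainty and non-certainty states is an assumption made here
        for illustration.
        """
        national = sum(caseloads.values())
        threshold = national / n_states  # one-thirtieth of the national caseload

        # Stage 1a: states at or above the threshold are taken with certainty.
        certainty = [s for s, c in caseloads.items() if c >= threshold]

        # Stage 1b: draw the remaining states with probability proportional
        # to size (a simple draw-until-distinct loop for illustration; a
        # production design would use a systematic PPS selection).
        rest = [s for s in caseloads if s not in certainty]
        n_pps = n_states - len(certainty)
        pps = set()
        while len(pps) < n_pps:
            pps.add(random.choices(rest, weights=[caseloads[s] for s in rest])[0])

        # Stage 2: certainty states get samples proportional to caseload;
        # non-certainty states get equal-sized samples.
        cert_caseload = sum(caseloads[s] for s in certainty)
        cert_total = round(total_sample * cert_caseload / national)
        alloc = {s: round(cert_total * caseloads[s] / cert_caseload)
                 for s in certainty}
        per_pps_state = round((total_sample - cert_total) / n_pps)
        alloc.update({s: per_pps_state for s in pps})
        return alloc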

The new entrant households will be used in both the cross-sectional and longitudinal analyses. To allow for attrition in the longitudinal sample over time, the sample Mathematica draws of new entrant households, which are to be re-interviewed approximately six months after their initial interviews, will be larger than the sample of six-month participants, which are to be interviewed only once.

As discussed further in Section B.2, all sample sizes are selected to achieve sufficient statistical power in the planned analysis. Table B.1 outlines the data collection assumptions that underlie the proposed starting survey sample sizes of 11,000 newly certified households at baseline and 6,100 current SNAP households. After applying assumed contact rates, eligibility rates, and cooperation rates, these sample sizes result in about 7,600 interviews of newly certified households at baseline and 4,000 current SNAP households. This assumes we have correct contact information for 90 percent of households and are then able to contact 90 percent of those households. It also assumes that once contacted, 5 percent of newly certified households will deny currently receiving SNAP and 10 percent of those who have participated about six months will deny receiving SNAP. Finally, it assumes that 90 percent of the eligible contacted sample will cooperate and complete an interview.

Conducting 7,600 newly certified SNAP baseline interviews is necessary to eventually arrive at approximately 4,000 completed follow-up interviews six to seven months later (Table B.1). This assumes a 35 percent rate of attrition from SNAP in the first six months on the program, which is based on evidence from the most recent USDA-funded study of SNAP dynamics (Cody et al. 2007). It also assumes higher rates of having correct contact information and higher rates of contact, reported participation, and cooperation than for the baseline sample of new entrants, because these households will have been located and will have participated previously in the baseline interviews.

Table B.1. Data Collection Assumptions

New SNAP Entrants

Baseline Survey
  Sample Size   Estimated %   Data Collection Assumption
  11,000        --            Initial Sample Size of New Entrants
  9,900         90%           Correct contact info (10% incorrect contact info & cannot locate by end of data collection)
  8,910         90%           Contact rate
  8,465         95%           Reported eligibility rate (once contacted, 5% deny receiving SNAP or report a previous SNAP spell)
  7,618         90%           Cooperation rate (of eligible contacted sample)

In-depth Interview
  60            --            Initial Sample Size of In-depth Interview Sample of New Entrants
  45            75%           Response rate

Follow-up Survey
  7,558         --            Remaining respondents from initial survey
  4,913         65%           Starting Sample Size for Follow-up Survey (35% attrition from the program after 6 months)
  4,667         95%           Correct contact info (5% incorrect contact info & cannot locate by end of data collection)
  4,434         95%           Contact rate
  4,212         95%           Reported eligibility rate (once contacted, 5% deny 6 to 7 month SNAP spell)
  4,001         95%           Cooperation rate (of eligible contacted sample)

Current SNAP Households

Baseline Survey
  6,100         --            Initial Sample Size of Current SNAP Households
  5,490         90%           Correct contact info (10% incorrect contact info & cannot locate by end of data collection)
  4,941         90%           Contact rate
  4,447         90%           Reported eligibility rate (once contacted, 10% deny being a current SNAP household)
  4,002         90%           Cooperation rate (of eligible contacted sample)

In-depth Interview
  60            --            Initial Sample Size of In-depth Interview Sample of Current SNAP Households
  45            75%           Response rate

Note: The final figure in each sequence (7,618; 4,001; 4,002; and 45 for each in-depth interview sample) represents the number of completed cases for that sample; these appear as shaded cells in the original table.


Expected response rates. The resulting expected response rates for the newly certified households’ baseline interview and for that of the six-month participants are 73 percent and 72 percent, respectively, calculated using the American Association for Public Opinion Research response rate 3 (RR3) formula (AAPOR 2009). Primarily due to the rapport established during the baseline interview, along with updated contact information, the expected response rate for the six-month follow-up interview of the newly certified households is greater (86 percent), also calculated using AAPOR’s RR3 formula.
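As a check on these figures, the short sketch below reproduces the Table B.1 cascades by applying the assumed rates in sequence. It is an illustrative convenience calculation, not part of the study methodology; the rounding convention (carrying unrounded products between steps) is inferred from the table's counts.

    # Verify the Table B.1 cascades: apply successive rates to a starting
    # sample, carrying unrounded products and rounding only for display.
    def cascade(start, rates):
        size, out = float(start), [start]
        for r in rates:
            size *= r
            out.append(int(size + 0.5))  # round half up, matching Table B.1
        return out

    # New entrants at baseline: contact info, contact, eligibility, cooperation.
    print(cascade(11000, [0.90, 0.90, 0.95, 0.90]))
    # [11000, 9900, 8910, 8465, 7618] -- about 7,600 completed interviews

    # Follow-up: 7,618 baseline completes less the 60 in-depth interview
    # cases is consistent with the 7,558 "remaining respondents" shown in
    # the table; then 65% remain on SNAP, followed by four 95% rates.
    print(cascade(7558, [0.65, 0.95, 0.95, 0.95, 0.95]))
    # [7558, 4913, 4667, 4434, 4212, 4001] -- about 4,000 completes

    # Current SNAP households: four 90% rates.
    print(cascade(6100, [0.90, 0.90, 0.90, 0.90]))
    # [6100, 5490, 4941, 4447, 4002]

    # The products of the nonresponse-related rates are roughly consistent
    # with the RR3-based figures cited in the text: 0.90*0.90*0.90 = 0.729
    # (~73% at baseline) and 0.95*0.95*0.95 ~= 0.857 (~86% at follow-up).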

As discussed in greater detail in Section B.3, even if nonresponse is higher than desired in the current study, the SNAP sample frames will serve as an effective source of data for a nonresponse analysis and adjustment and will lay the basis for ensuring the national representativeness of the study population. Besides containing basic demographic information for both responders and nonresponders, including a measure of household resources, the household records on the sample frame contain address information, enabling them to be linked to an extensive set of population characteristics from the American Community Survey (ACS) that describe the local areas in which the households live. We are confident that, together, the sample frame and ACS variables will be substantially correlated with nonresponse and will help to construct adjustment factors that ensure national representativeness of the study population.

B.2. Procedures for the Collection of Information

Statistical methodology for stratification and sample selection. As noted in Section B.1, two groups of SNAP households are to be sampled, and they will be sampled from 30 states in a two-stage procedure. The states will be selected with probability proportional to size sampling, using procedures described in Section B.1, which take into account the states that are large enough to be sampled with certainty.

At the second stage, the participant households are to be selected using simple random sampling, based on case record files provided by the states. No stratification beyond identifying the two groups of SNAP participant households of interest (defined above) is planned.

Estimation procedure. As described in Supporting Statement Part A, Section A.16, the analysis will begin by comparing the percentages of households that are food insecure across different groups of households. The percentages of households that are food insecure with very low food security will also be examined. The first comparison will involve new entrant SNAP households and SNAP households that have been participating for about six months, interviewed during the same calendar period (a cross-sectional analysis). The second will compare the percentage of households that are food insecure (and food insecure with very low food security) in the initial sample of new entrant households at the time they were just entering SNAP with the percentage among those same households when re-interviewed approximately six months later (a longitudinal analysis).

Following these descriptive analyses, we will attempt to control statistically for differences in household demographic and economic characteristics using regression methods. In particular, we will regress measures of food insecurity on a set of household and state characteristics, including the gender, race and ethnicity, age, current employment status, and marital status of the head of the household; household income, composition, and region of residence; and a variable indicating whether the household has participated in SNAP previously. The state characteristics will include economic measures related to the state unemployment rate and wage distribution. We will also include state policy variables that may affect households’ continued participation in the program, such as re-certification periods and simplified income reporting. We expect this regression analysis to control at least partly for possible selection bias resulting from households with different characteristics having different propensities to be food insecure independent of SNAP participation. We also expect it to control at least partly for changes in economic and other household factors over time (in the longitudinal analysis) that may be associated with changes in the food security status of participant households. Additional details pertaining to analysis methods are provided in Section A.16.
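As an illustration of this specification, the sketch below fits a logistic regression of a food insecurity indicator on household and state characteristics using statsmodels. The analysis file and all variable names are hypothetical; a logit is one natural choice for the binary outcome, though the text does not commit to a specific functional form.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file with one record per household.
    df = pd.read_csv("survey_analysis_file.csv")

    # Food insecurity regressed on household and state characteristics;
    # C() marks categorical variables.
    model = smf.logit(
        "food_insecure ~ C(head_gender) + C(head_race_eth) + head_age"
        " + C(head_employed) + C(head_married) + hh_income + hh_size"
        " + C(region) + prior_snap_spell + state_unemp_rate"
        " + state_wage_p25 + recert_period_months + C(simplified_reporting)",
        data=df,
    ).fit()
    print(model.summary())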

Degree of accuracy needed for the purpose described in the justification. As described in Section B.1, for the longitudinal part of the survey a starting sample size of approximately 11,000 new entrants is required so that we can complete approximately 7,600 telephone survey interviews at baseline and 4,000 telephone survey interviews at the six-month follow-up. For the cross-sectional part of the survey, a starting sample of 6,100 current SNAP households is required in order to complete telephone survey interviews with approximately 4,000 participants. These sample sizes will provide adequate precision, expressed in terms of minimum detectable differences (MDD), to support the evaluation.

The analytical objectives of the study are both descriptive and multivariate in nature, and they focus on estimating the impact of SNAP participation on household food security. The main outcome measure in the descriptive analysis is the percentage of households that are food insecure with very low food security.1 Based on Nord et al. (2009), approximately 27.5 percent of households with income below 130 percent of the federal poverty level that had received SNAP in the past 12 months experienced very low levels of food security. In computing MDDs, we specified that the minimum difference should be detectable 80 percent of the time (80 percent power) with a 95 percent level of confidence for the underlying statistical test, and that both increases and decreases should be detectable (that is, a two-tailed test). Our recommended sample sizes at the household level (for the full sample) are based mainly on FNS’s desire to detect differences of 3 to 4 percentage points for a given level of evaluation resources.2

Based on these considerations, the MDDs for detecting differences over time between the sample of new entrants at time 1 and the same households at time 2 are presented in Table B.2. Four sets of MDDs are displayed: one for analysis based on the full sample and three others for hypothetical subgroup analyses based on three-quarters, one-half, and one-quarter of the sample, respectively. The sample size of new entrants in the longitudinal sample is 7,600 at time 1 and 4,000 at time 2, but in Table B.2 we have restricted the baseline and follow-up samples to be equal in size to maximize the comparability of the two groups, as the characteristics of participant households just entering the program have been shown to differ substantially from those of households that have participated for six months. That is, the analysis based on the longitudinal sample will consist only of those households interviewed at both points in time.3 Thus, overall precision is lower due to the smaller “effective” sample size at time 1, but the greater comparability across samples minimizes selection bias in the impact estimates.

For the households interviewed at both points in time, these sample sizes yield high levels of precision, with an MDD equal to 2.97 percentage points. MDDs are, of course, greater for the subgroup analyses shown, ranging from 3.20 to 4.65 percentage points.

Table B.2. Minimum Detectable Differences over Time in the Longitudinal Design

                  Baseline       Follow-Up
                  Sample Size    Sample Size    MDD
  Full Sample        4,000          4,000       2.97
  75% Subgroup       3,000          3,000       3.20
  50% Subgroup       2,000          2,000       3.62
  25% Subgroup       1,000          1,000       4.65

Note: MDDs are computed as 2.8*sqrt[var(p1) + var(p2) - 2*cov(p1,p2)], where p1 and p2 are the probabilities of being food insecure at times 1 and 2, var(p1) is the variance of the outcome measure at time 1, var(p2) is the variance of the outcome measure at time 2, and cov(p1,p2) is the covariance of the outcome measures across time. Var(p1) = (deff_c1)*(deff_w)*p1*(1-p1)/n1, where deff_c1 is the design effect from clustering at time 1, deff_w is the design effect from weighting, and n1 is the sample size at time 1. Var(p2) is similarly defined. Cov(p1,p2) = n2*(p12 - p1*p2)/(n1*n2), where p12 is the probability of being food insecure at both time 1 and time 2 and is assumed to be equal to 20 percent. A smaller probability of being food insecure at both time 1 and time 2 would produce slightly larger MDDs. These formulas are taken from equation 12.4.12 in Kish (1965) and adjusted for design effects.

For a two-stage sample the design effect of clustering (Deff_c) is Deff_c = 1 + ICC (b – 1), where ICC is the intracluster correlation—a measure of relative homogeneity within primary sampling units (PSUs), which are states in our survey—and b is the average number of interviews per PSU. We used data on food security from the CPS food security supplement to estimate the intracluster correlation (ICC) across state food security incidence numbers as approximately 0.009. We also included an expected design effect of 1.2 due to unequal weights to adjust for nonresponse. In addition, we incorporated a finite population correction (FPC) based on the fact that a large proportion of states will be sampled. The District of Columbia is counted as a state both in our sample and in the estimation of the finite population correction factor.
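For reference, the note's formulas can be written out directly; the 2.8 multiplier is the sum of the critical values for a two-tailed 5 percent test (1.96) and 80 percent power (0.84). The sketch below implements the stated variance, covariance, and design effect formulas but omits the finite population correction described in the text, so it will not reproduce the table values exactly.

    import math

    def mdd(n1, n2, p1=0.275, p2=0.275, p12=0.20,
            icc=0.009, deff_w=1.2, n_psu=30):
        """MDD (as a proportion) per the note: 2.8*sqrt(var1 + var2 - 2*cov).

        Omits the finite population correction mentioned in the text, so
        results are conservative relative to Tables B.2 and B.3.
        """
        def var(p, n):
            b = n / n_psu                    # average interviews per PSU (state)
            deff_c = 1 + icc * (b - 1)       # clustering design effect
            return deff_c * deff_w * p * (1 - p) / n

        cov = n2 * (p12 - p1 * p2) / (n1 * n2)   # longitudinal overlap term
        # For the cross-sectional design the covariance is assumed zero
        # (equivalently, set p12 = p1*p2).
        return 2.8 * math.sqrt(var(p1, n1) + var(p2, n2) - 2 * cov)

    # Full longitudinal sample (4,000 households at each wave), expressed
    # in percentage points:
    print(100 * mdd(4000, 4000))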

The MDDs for the cross-sectional design are presented in Table B.3. As in the longitudinal design calculations, we restrict the new entrant sample to those participant households that do not leave the program before their follow-up interview (six months later) to increase comparability between the new entrants and the six-month participant households at the time of baseline data collection. Thus, we use 4,000 participant households as the sample size of new entrants at baseline, despite 7,600 households being interviewed. For the full sample, these sample sizes yield high levels of precision, with an MDD equal to 3.72 percentage points. Once again, MDDs are greater for the subgroup analyses shown, ranging from 4.11 to 6.44 percentage points.



Table B.3. Minimum Detectable Differences over Time in the Cross-Sectional Design

                  Sample Size    MDD
  Full Sample        4,000       3.72
  75% Subgroup       3,000       4.11
  50% Subgroup       2,000       4.80
  25% Subgroup       1,000       6.44

Note: MDDs are computed as 2.8*sqrt[var(p1) + var(p2) - 2*cov(p1,p2)], where p1 and p2 are the probabilities of being food insecure in the two participant groups, var(p1) is the variance of the outcome measure for the first group, var(p2) is the variance of the outcome measure for the second group, and cov(p1,p2) is the covariance of the outcome measures across the two participant groups (assumed to be equal to zero in the cross-sectional design). Var(p1) = (deff_c1)*(deff_w)*p1*(1-p1)/n1, where deff_c1 is the design effect from clustering, deff_w is the design effect from weighting, and n1 is the sample size for the first group. Var(p2) is similarly defined. These formulas are taken from equation 12.4.12 in Kish (1965) and adjusted for design effects.

For a two-stage sample the design effect of clustering (Deff_c) is Deff_c = 1 + ICC (b – 1), where ICC is the intracluster correlation—a measure of relative homogeneity within primary sampling units (PSUs), which are states in our survey—and b is the average number of interviews per PSU. We used data on food security from the CPS food security supplement to estimate the intracluster correlation (ICC) across state food security incidence numbers as approximately 0.009. We also included an expected design effect of 1.2 due to unequal weights to adjust for nonresponse. In addition, we incorporated a finite population correction (FPC) based on the fact that a large proportion of states will be sampled. The District of Columbia is counted as a state both in our sample and in the estimation of the finite population correction factor.

The MDD analysis above is for the household survey samples. While information from the 90 households that will participate in the in-depth interviews after baseline data collection will be used for analytical purposes, it will not be used quantitatively to estimate program impacts. As described in Supporting Statement Part A, Section A.16, however, it will be very useful in generating insights about the relationships between food insecurity and SNAP participation that can then be tested more rigorously in future studies.

Who will collect the information and how it will be done. As FNS’s contractor, Mathematica Policy Research will conduct the household telephone survey and perform the in-depth, in-person interviews, with the latter being conducted under the guidance of senior consultant Kathy Edin of Harvard University.

Upon receipt of the sample files from the states, the entire file will be sent to a private locating company for address and telephone updates. Using the most current address information, the contractor will mail, by first-class mail, an advance letter signed by a USDA official that includes a $2 prepaid incentive and promises an additional $20 incentive upon completion of the telephone survey. Approximately three days after the advance letters are mailed to the sampled households, experienced, trained telephone interviewers will begin contacting the households and conducting interviews using the programmed computer-assisted telephone interview (CATI) instrument. An automated call scheduler will be used to manage the sample by controlling the delivery of cases to the interviewers.

Generally speaking, the goals of call scheduling are (1) to identify a time when someone will answer the telephone for an initial contact; (2) to schedule and track appointments to interview the respondent; and (3) to assign calls to interviewers with special skills (such as refusal conversion or language specialists) as appropriate. To achieve these goals, Mathematica will define time slots, create queues, and set rules for the initial call, as well as for rotating cases in and out of appropriate queues and through the time slots. Rules for the minimum and maximum number of calls will also be established, as will times when messages are to be left on answering machines and times when supervisors will review cases. A simplified illustration of these scheduling rules follows.
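The sketch below shows, in simplified form, how such scheduling rules can be represented. The queue names, call limit, and time slots are hypothetical, not the project's actual configuration.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical queues, call limit, and time slots for illustration only.
    QUEUES = {"initial", "appointment", "refusal_conversion", "spanish"}
    TIME_SLOTS = ("weekday_day", "weekday_evening", "weekend")
    MAX_CALLS = 20

    @dataclass
    class Case:
        case_id: str
        cert_date: date          # earliest-certified cases get highest priority
        queue: str = "initial"
        calls_made: int = 0

    def next_case(cases, interviewer_queues):
        """Deliver the highest-priority workable case to an interviewer
        whose skills match the case's queue."""
        workable = [c for c in cases
                    if c.calls_made < MAX_CALLS and c.queue in interviewer_queues]
        return min(workable, key=lambda c: c.cert_date) if workable else None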

Quality control. To ensure high-quality data collection, all interviewers assigned to the project, both experienced and new, will receive project-specific training. This will include a description of the study and instructions on how to respond to respondents’ questions about it. In addition, a “walk-through” of the instrument will explain the intent of each item and the appropriate probes specific to each question.

To ensure interviewers are performing as trained, telephone interviewers will be regularly monitored via a system that enables verbal and visual monitoring without either the interviewer’s or the respondent’s knowledge. Interviewers are informed that they will be monitored, but they do not know when observations will take place. Respondents are also informed, at the start of the interview, that the conversation may be recorded for quality control purposes.

To enable the close monitoring of production interviewing, the survey software will generate call disposition reports at the beginning and end of each shift for the shift supervisors. The outgoing shift supervisor will meet with the incoming manager to review the current disposition report, alerting the manager to any problems so that immediate corrective action can be taken at the beginning of the shift.

Mathematica undertakes an independent and expert quality assurance review of all major deliverables for every project. In line with these procedures, all analytical programs for this study will be reviewed by researchers working with the data. Specifically, Mathematica research staff will review a combination of debugging output and “data dumps” to ensure that program specifications are being executed correctly. A director of Mathematica’s Human Services Research Division will review all project analysis plans and all written deliverables submitted to the USDA. These quality assurance reviews will focus on whether the analytical approaches are appropriate and sufficient for answering the research questions, whether the analysis is thorough and sound, and whether the methods and results are described correctly and in ways that are accessible to a broad audience.

B.3. Methods to Maximize Response Rates and to Deal with Nonresponse

As noted earlier in Section B.1, the expected response rates are 73 percent for the newly certified households’ baseline interview, 86 percent for the newly certified households’ follow-up interview, and 72 percent for the six-month participant households interviewed at baseline. A variety of efforts will be undertaken to maximize response and minimize nonresponse bias, especially during the baseline interview. We are hopeful that these efforts, which we describe next, will enable us to achieve response rates higher than projected, particularly for the baseline interviews. If they do not, we will conduct nonresponse bias analyses, described in detail in the pages that follow.

Strategic use of technology tools, including CATI, an automated call scheduler, and a survey management system (SMS), will help to maximize contact with households. An aggressive calling strategy will be followed: in general, four calls per day, depending upon the outcome of the last call. All calls will be attempted according to optimal contact schedules, based on Mathematica’s extensive experience in similar studies and on a review of SNAP administrative record information, such as household size, age, and primary language spoken at home. In addition to identifying preferred time slots, the automated call scheduler will prioritize the sample based upon date of SNAP certification, with the earliest certified cases given highest priority.

Locating efforts will also be intense, especially at the onset of the baseline interview. Missing contact information will be obtained either from a private vendor (such as Accurint, a division of LexisNexis), or by Mathematica’s in-house locating staff, who are highly skilled in searching specialized online databases. To facilitate efficient contact during the follow-up interview, a series of questions to obtain additional contact information will be included at the end of the baseline interview. Further, the use of reminder postcards will provide updated addresses for those who have moved since the baseline interview.

As previously described, advance letters signed by a USDA official, including a $2 prepaid incentive and a promised $20 additional incentive upon completion of the telephone survey, will be sent by first-class mail to convince the sampled households of the value of the baseline survey and the importance of participation. This effort will be repeated for the six-month follow-up interview. Participants in the in-depth interviews will also receive a $30 post-pay incentive for completing the 90-minute interview.

Experienced and highly skilled interviewers will be assigned to this project following completion of study-specific training. They will provide responses to questions that explain why the study is worthwhile, and that neither respondents’ jobs nor their benefits will be affected by their participation in the survey. To minimize nonresponse due to language barriers, all study materials will be translated into Spanish. We will attempt to identify Spanish-speaking respondents in advance, via sample frame information on the language used to complete the SNAP application, and offer to conduct these interviews in Spanish.

Nonresponse analysis and adjustment. A nonresponse analysis will be based on the SNAP administrative data included in the state sampling frames and on data from the ACS. The administrative data from the sampling frame are an important data source for this analysis because they contain information for both households that respond to the survey and those that do not. For example, if nonresponse were due solely to the age of the household head (a variable on the sample frame), then one could adjust the survey weights in a straightforward manner to account for the differential response by age, using the percentage of sample frame households in each age category that responded to the survey. However, because the propensity to respond to the survey may differ according to a variety of characteristics, the ability to adjust for survey nonresponse is improved by including additional characteristics.4 In the current study, the sample frame will include several promising variables. In particular, the sample frames will contain the age and gender of the household head; the primary language spoken in the home; the date when the household’s SNAP case was most recently opened or certified; whether the household received expedited service to receive benefits quickly; and the household’s residential ZIP code and address information. We believe this is a valuable set of information for our planned nonresponse analysis.5

Some of these variables, such as the age of the household head, have been found to affect response rates in national surveys like the Survey of Income and Program Participation for households participating in SNAP (Bollinger and David 2001). Differential nonresponse by other variables, such as whether the household received expedited service, has not been explored, but we believe that this variable is an important measure of the level of need of the household, akin to having information on household income.

One of the most useful pieces of information on the sample frame will be the street address and ZIP code of each household’s residential location, available for all responders and nonresponders. Using Geographic Information System (GIS) software, we will geocode these addresses and assign local area population characteristics to each household.6 A nonresponse analysis can then be conducted based on environmental and local area characteristics such as population, income, race and ethnicity, education, household size, and poverty.

To perform the GIS-based nonresponse analysis, we will first assign the appropriate Census block group indicator to each household and then merge a file of block-group-level population characteristics from the ACS onto the household-level sample frame file. The ACS population data are currently available at the block group level, but not at a smaller level of geography, in an aggregate file that spans the five years from 2005 to 2009. Many population characteristics can be included. Among them are variables that are particularly likely to be correlated with both nonresponse and our food insecurity outcome measure, including the following (all are defined at the block group level):

  • The number of low-income households

  • The percentage of households in the population with aggregate income less than $20,000

  • The percentage of Hispanic individuals in the population

  • The percentage of households with female head with at least one child under 18

  • The percentage of individuals in the population with less than a high school education

  • The percentage of individuals in the population that are foreign born

  • The percentage of individuals in the population with income less than 200 percent of the federal poverty threshold

  • The percentage of housing units without a vehicle

  • The percentage of individuals in the population younger than 18.

Once block-group-level population characteristics are merged onto the household sample frame records, these characteristics will be treated in ways similar to the sample frame variables in analyzing nonresponse patterns and creating a nonresponse adjustment factor, as sketched below.
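A minimal sketch of this merge, assuming the geocoding step has already attached a Census block group identifier to each frame record (the file and column names are hypothetical):

    import pandas as pd

    # Household frame with geocoded block group IDs, and an ACS extract of
    # block-group-level characteristics (hypothetical files and columns).
    frame = pd.read_csv("sample_frame_geocoded.csv",
                        dtype={"block_group_id": str})
    acs = pd.read_csv("acs_2005_2009_block_groups.csv",
                      dtype={"block_group_id": str})

    # Many households share a block group, so validate a many-to-one merge.
    merged = frame.merge(acs, on="block_group_id", how="left",
                         validate="many_to_one")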

A nonresponse adjustment will be derived using a logistic regression model in which a binary variable indicating whether the household responded to the survey is regressed on a set of household-level characteristics from the sample frame and local-area population characteristics from the ACS. Household survey weights are then adjusted by the reciprocal of the estimated likelihood of responding for those households that responded to the survey. A potential problem with the adjusted weights is that the adjustment factors themselves may become large, yielding excessive weights for a small number of observations. Should this occur, we will either trim weights or form weighting classes. The latter method avoids basing the adjustment for each weight on a single predicted probability of responding and, instead, bases it on an averaged value within the weighting class. Both methods allow adjustment factors to minimize nonresponse bias while preventing excessively large adjustment factors from increasing the variance of the weights and thereby substantially reducing the precision of estimates. A sketch of this adjustment follows.
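A minimal sketch of the adjustment, continuing from the merged frame above and using hypothetical variable names; the weighting-class alternative mentioned in the text could replace the trimming step shown here.

    import numpy as np
    import statsmodels.formula.api as smf

    # Response propensity model on frame plus ACS covariates (names assumed).
    prop_model = smf.logit(
        "responded ~ head_age + C(head_gender) + C(language) + C(expedited)"
        " + pct_low_income + pct_hispanic + pct_female_head_with_child",
        data=merged,
    ).fit()
    merged["p_respond"] = prop_model.predict(merged)

    # Adjust respondents' weights by the reciprocal of the response propensity.
    resp = merged[merged["responded"] == 1].copy()
    resp["adj_weight"] = resp["base_weight"] / resp["p_respond"]

    # Guard against extreme adjustment factors by trimming at a cap
    # (here, an assumed 99th percentile).
    cap = resp["adj_weight"].quantile(0.99)
    resp["adj_weight"] = np.minimum(resp["adj_weight"], cap)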

In summary, we are confident that the direct sample frame variables and the ACS-based variables will be substantially correlated with nonresponse. In addition, linking the residential address information on the sample frame to ACS data will provide accurate measures of local area population characteristics that are also substantially correlated with both nonresponse and food insecurity. The strengths of the SNAP sample frames and ACS data, both in the information they contain and in the large numbers of low-income households included on the files, make them effective sources of data for the nonresponse adjustment. This will lay the basis for ensuring the national representativeness of the study population, even if nonresponse is higher than expected in the current study.

B.4. Test of Procedures or Methods to be Undertaken

To test survey length as well as respondents’ understanding of survey items, Mathematica and its consultants at Harvard University, Kathryn Edin and Sara Greene, conducted pretests of all survey instruments for which OMB clearance is being requested: (1) the English telephone survey; (2) the Spanish telephone survey; and (3) the in-depth interview guide.7 Each of the instruments was pretested on 9 or fewer respondents, consistent with our understanding of OMB guidelines regarding collection of information prior to receipt of OMB clearance.

English and Spanish Telephone Surveys

The pretest interviews for the telephone survey took place between February 16 and February 28, 2011, and were conducted by Dawn Nelson and Betsy Santos, with assistance from Valerie Williams at Mathematica. Four interviews were conducted using the English questionnaire and five using the Spanish questionnaire. Most of the pretest respondents were given $30 cash for participating in the one-hour in-person interview, which included extra time to respond to debriefing questions, during or after the interview, about their perceptions of the survey. Two pretest respondents were not able to attend in person, so they participated in a shorter pretest interview over the telephone, for which they received a $25 check in the mail.

Most questions were seen as clear, appropriate, and not burdensome. None was seen as too sensitive, and there were no suggestions of additional questions to ask. The most time-consuming section was one that required recall of recent food shopping purchases. Ease or difficulty of response, level of detail available, and burden depended on how much shopping had occurred over the past week as well as on overall household size. As a result, the time to complete the surveys was longer than planned: originally designed to take 25 minutes, the English and Spanish versions of the survey took just over 33 minutes, on average, to administer during the pretest. Adjustments were made to address the burden issue, including eliminating certain questions in other sections of the survey and rewording or reordering questions and response categories.

A second round of testing was undertaken on March 4, 2011, to confirm that the survey modifications had resolved the identified problems. Two pretest interviews were conducted with the revised English telephone survey and one with the revised Spanish telephone survey. To obtain timing estimates on the revised instruments, these interviews were conducted without interruptions; however, debriefing questions were asked at the end to explore respondents’ understanding of the revised survey items. The minor changes made to the survey were well received by the pretest respondents, and no additional problems were identified. The two English respondents each took approximately 28 minutes to complete the survey, while the Spanish respondent’s interview lasted 41 minutes. Additional adjustments were made to address interview length, including cutting several items from the survey instrument.

In-depth Interview Guide

The in-depth interview guide pretest was conducted on March 2 and 3, 2011, at two Head Start locations in the East Boston neighborhood of Boston, Massachusetts. Five English speakers and one Spanish speaker were interviewed over the two days. Interviewers included Kathryn Edin, Sara Greene, and Betsy Santos, who conducted the interview in Spanish. All interviews were observed by one or two other project staff members, and respondents were informed of the purpose of the observers’ presence.

Rapport between interviewers and respondents was excellent throughout the pretest. Respondents were eager to participate and moved easily from a set of more general questions about their monthly expenditures and sources of income or in-kind assistance to more specific questions about shopping habits, nutritional views, general health, food hardship, and a set of detailed questions about SNAP and WIC. Through experimentation with a variety of questions, the researchers were also able to identify questions that worked much better than others.

Following the pretest, Kathryn Edin, in conjunction with Sara Greene, revised the qualitative interview guide. Questions were added to help clarify how long respondents’ latest SNAP participation period had been and how many distinct SNAP spells respondents had had. Also added were questions about trigger events that had prompted the respondent to initially apply for SNAP, as well as questions about food storage capabilities and how they affect how much and what food respondents buy. New questions were also needed to clarify the household roster and to clarify respondents’ thoughts about SNAP versus WIC and how they use the two programs together.

B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The sampling procedures were developed by John Hall, Jim Ohls, and James Mabli of Mathematica Policy Research (609-799-3535). Analysis plans were developed by James Mabli, Lisa Dragoset, Jim Ohls, and Allen Schirm. Dawn V. Nelson, Betsy Santos, Kathy Edin, and Sara Greene will be closely involved in planning and overseeing the data collection for the structured telephone survey and the in-depth interviews, as well as in analyzing the data obtained from those interviews.

Mark Nord of USDA ERS (202-694-5433) reviewed all the instruments, and Sharyn Lavender of the Statistics Division, NASS/USDA (202-690-0901), reviewed the sampling and statistical methodologies for the National Agricultural Statistics Service.

References

Bollinger, Christopher, and Martin H. David. “Estimation with Response Error and Nonresponse: Food Stamp Participation in the SIPP.” Journal of Business and Economic Statistics, vol. 19, no. 2, April 2001, pp. 129-141.

Cody, Scott, Laura Castner, James Mabli, and Julie Sykes. “Dynamics of Food Stamp Program Participation: 2001-2003.” Final report submitted to the Food and Nutrition Service, USDA, Washington, DC, November 2007.

Cohen, Barbara, Mark Nord, Robert Lerner, James Parry, and Kenneth Yang. “Household Food Security in the United States, 1998 and 1999: Technical Report.” E-FAN-02-010, prepared by IQ Solutions and USDA, Economic Research Service, 2002. Available at http://www.ers.usda.gov/publications/efan02010/.

Hamilton, W.L., J.T. Cook, W.W. Thompson, L.F. Buron, E.A. Frongillo, Jr., C.M. Olson, and C.A. Wehler. “Household Food Security in the United States in 1995: Technical Report.” Prepared for USDA, Food and Consumer Service, 1997. Available at http://www.fns.usda.gov/oane/menu/published/foodsecurity/tech_rpt.pdf.

Kennedy, Eileen T., James C. Ohls, Steven Carlson, and Katherine Fleming. “The Healthy Eating Index: Design and Applications.” Journal of the American Dietetic Association, vol. 95, no. 10, October 1995, pp. 1103-1108.

Kish, Leslie. Survey Sampling. New York: John Wiley & Sons, Inc., 1965.

Kovac, Martha D., and Jason Markesich. “Pre-Paid vs. Promised Incentives: Which Works Better for a Telephone Survey of Low-Income Respondents?” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Nashville, TN, May 2003.

Nemeth, P. “Zero-Two-Five: Which Pre-Pay Amount Gets You More for Your Money?” Paper presented at the American Association for Public Opinion Research Annual Conference, Hollywood, FL, May 2009.

Nord, Mark, Alisha Coleman-Jensen, Margaret Andrews, and Steven Carlson. Household Food Security in the United States, 2009. ERR-108, U.S. Department of Agriculture, Economic Research Service, November 2010.

Nord, Mark, and Anne Maria Golla. Does SNAP Decrease Food Insecurity? Untangling the Self-Selection Effect. Economic Research Report No. 85, U.S. Department of Agriculture, Economic Research Service, October 2009.

Ohls, James, Larry Radbill, and Allen Schirm. “Household Food Security in the United States, 1995-1997: Technical Issues and Statistical Report.” Prepared by Mathematica Policy Research, Inc., for USDA, Food and Nutrition Service, 2001. Available at http://www.fns.usda.gov/oane/MENU/Published/FoodSecurity/FoodSecurityTech.pdf.

Singer, Eleanor, J. Van Hoewyk, and M.P. Maher. “Experiments with Incentives in Telephone Surveys.” Public Opinion Quarterly, vol. 64, 2000, pp. 171-188.

The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 6th edition. AAPOR, 2009.

U.S. Department of Agriculture, Food and Nutrition Service, Office of Research and Analysis. Characteristics of Supplemental Nutrition Assistance Program Households: Fiscal Year 2009, by Joshua Leftin, Andrew Gothro, and Esa Eslami. Project Officer: Jenny Genser. Alexandria, VA, 2010.







Improving public well-being by conducting high-quality, objective research and surveys

Princeton, NJ Ann Arbor, MI Cambridge, MA Chicago, IL Oakland, CA Washington, DC


Mathematica® is a registered trademark of Mathematica Policy Research

www.mathematica-mpr.com




1 In the multivariate analysis, it is the probability that a household is food insecure with very low food security. A multivariate analysis will improve precision levels in the tables in this section for a given sample size, but the degree of improvement depends on how much of the variation in the outcome measure (the likelihood of a household’s being food insecure with very low food security) is explained by household and state characteristics.

2 MDDs for subgroups are larger but appear to be adequate for FNS’s needs.

3 This decision is based on recent studies of SNAP program dynamics, but if the two groups of households have fewer differences than expected, we will utilize the full sample of new entrants in the analysis to increase precision. We will also use the full sample of 7,600 newly certified households at baseline to examine the food security status of households according to whether they subsequently leave the program prior to the follow-up interview.

4 Of course, it is possible for the propensity for nonresponse to be correlated with unobserved characteristics, in which case adjustments based on observable characteristics are potentially less valuable.

5 It should be noted that while the sample frames will provide a substantial set of information, they will not contain all of the data available from the state case record files. In particular, to minimize burden on states and gain their cooperation, we requested only the subset of the complete data needed to define and locate our sample.

6 Two Census boundaries are available for defining a geographic unit: Census tracts and block groups. We propose to use block groups, which are the smaller of the two. Census boundaries vary in geographic size but have similar populations. Census tracts in the United States, Puerto Rico, and the Virgin Islands of the United States generally have between 1,500 and 8,000 people, with an optimum size of 4,000 people. Block groups generally contain between 600 and 3,000 people, with an optimum size of 1,500 people.

7 The in-depth interviews with Spanish-speaking households will be administered by professional bilingual survey researchers and policy analysts who fully understand the research questions and are authorized to adapt the discussions as appropriate. For this reason, the in-depth interview guide is not being translated into Spanish.

