Crosswalk for OMB Passback


National Household Food Acquisition and Purchase Survey


OMB: 0536-0068

Passback for ICR Reference Number: 201112-0563-001:
National Household Food Acquisition and Purchase Survey
February 2012
OMB remains concerned about the quality of the data that this study will yield. We’d like to discuss the following
issues with ERS/MPR on our calls later this week.
I. Burden.

We continue to have concerns about the overlap in content across instruments and events, and
observed item nonresponse in the field test. Accordingly, we believe additional streamlining would be
desirable before main study launch. As we recommended at the TWG and during our recent conference call,
the best way to accomplish the necessary streamlining would be to prioritize the need for different types of
information. We have yet to see the priorities assigned to different types of information presented in Table
A1, and despite the improvements made in the protocol, the burden is still extensive and the need for the
specificity and precision in each of the types of information requested from the participant is not clear. We
present examples of our major concerns immediately below.
Appendix A has been revised to include Table A2 with supporting text. The table prioritizes data
elements into one of four levels (1-4). Information for a number of data elements is obtained from
multiple sources within the National Food Study; where this occurs the table ranks the sources (a-d)
in terms of their expected ability to provide the needed data.
This prioritization of data elements has been used to review the instruments, survey procedures, and
interviewer and respondent training to make sure that the highest priority elements receive the
greatest attention throughout the survey process.
We acknowledge that survey burden is significant, but we believe this burden is not due to content
overlap across instruments. The burden is similar to that experienced in the field test, and ERS
interprets the results of the field test as verification that this ambitious data collection is capable of
collecting valid and reliable data on food acquisition patterns from households of varying
characteristics.
Income: What level of precision is necessary for the hierarchy of uses you envision for this
study? Based on our understanding of how the data might be used, it seems that simplifying
the collection of household income (i.e., using a less complex measure of household income, collected
only once at the time of screening) might address some of our concerns about nonresponse and error in
detailed income reporting, especially in the context of ERS’ proposal to use a worksheet that has not yet
been field tested. (Please remind us of the source of the revised household income question in the
screener.)
Total household income is a key variable for the study (priority level 2 in revised Appendix A) and will
be used in several different ways.
The income information collected by the screener is used primarily to determine to which of the
three non-SNAP quota groups the sampled household should be assigned. Measuring income
accurately takes more than a couple of questions, and it would take too long for a screener to collect
income at this level of accuracy, especially as the household’s eligibility to participate in the survey is
not determined until after income information is collected. The source of the income categories in
the screener is Household Interview #2 from the field test. Those questions came from a review of
the SIPP and NHANES questionnaires.
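To make the quota-group assignment concrete, a minimal sketch in Python, assuming groups are
defined by income relative to the federal poverty guideline (the 100%/185% cut points and the 2012
guideline figures are illustrative assumptions, not the survey's actual specification):

    # Illustrative sketch only: assign a screened non-SNAP household to one of
    # three quota groups by income relative to the federal poverty guideline.
    # Cut points and guideline figures are assumptions for illustration.
    POVERTY_GUIDELINE_2012 = {1: 11170, 2: 15130, 3: 19090, 4: 23050}  # 48 states, USD/year

    def quota_group(annual_income: float, household_size: int) -> str:
        base = POVERTY_GUIDELINE_2012.get(
            household_size, 23050 + 3960 * (household_size - 4))  # +$3,960 per extra person
        ratio = annual_income / base
        if ratio < 1.00:
            return "non-SNAP, income below 100% of poverty"
        elif ratio <= 1.85:
            return "non-SNAP, income 100-185% of poverty"
        return "non-SNAP, income above 185% of poverty"

    print(quota_group(21000, 3))  # -> "non-SNAP, income 100-185% of poverty"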
More detailed household income information is collected in the Final Household Interview with
assistance of the income Worksheet. Collecting information about components of income is needed
because reported income is more accurate when collected by source. Although more item non-response
is observed when collecting income by source, the income sources that go unreported likely would have
been overlooked entirely in a single question about total income. In this way item non-response provides
additional information about the extent to which computed total income may underestimate actual income.
We also need earned income to be reported separately in order to estimate monthly SNAP benefits
for households deemed eligible for SNAP but not participating. Furthermore, TANF income is handled
differently in the calculation of SNAP benefits than is other income. Income from interest and
dividends provides a way to proxy the level of financial resources available to the household without
asking questions about assets.
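To illustrate why the components must be reported separately, a minimal sketch of a simplified monthly
SNAP benefit computation (the 20% earned-income deduction and 30% benefit-reduction rate follow
program rules; the standard deduction and maximum allotment below are approximate FY2012 figures
used as assumptions, and real determinations add shelter, dependent-care, and other deductions):

    # Illustrative sketch only: a simplified monthly SNAP benefit estimate for a
    # household of four. The $147 standard deduction and $668 maximum allotment
    # are approximate FY2012 values used as assumptions; actual rules include
    # additional deductions (e.g., excess shelter costs, one reason utility
    # expenses are also collected).
    def estimate_snap_benefit(earned, tanf, other_unearned,
                              standard_deduction=147.0, max_allotment=668.0):
        gross = earned + tanf + other_unearned
        net = max(gross - 0.20 * earned - standard_deduction, 0.0)
        return max(max_allotment - 0.30 * net, 0.0)

    # A household reporting $900 earned and $200 TANF income per month:
    print(round(estimate_snap_benefit(900.0, 200.0, 0.0), 2))  # -> 436.1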
Food receipts: Compliance with receipt collection was a challenge in the field test, and imputation from
food price lists appears to have been an adequate solution there. If food item lists from food books,
barcode scanning (and produce list scanning) could be sufficient to link to an external database for local
prices, rather than using participant receipts, why is it necessary to continue to have respondents
collect receipts?
 Receipts are the primary source for information on food prices, and they are the only
source of information on use of manufacturers’ coupons and store discount cards. See
page 8 of the revised Part A and the revised Appendix A for further discussion.
o We also note in the memorandum of 1.19.11 that a comparison of blue pages, scanner data, and
receipts was not possible before the TWG meeting. If some limited analysis on the rate of match is
available prior to full launch of the main study, it may indicate that one of these sources need not be
collected, given the marginal costs and benefits observed.
 As described in the revised Appendix A and in pages 7-9 of revised Part A, information
gained from scanned barcodes, blue pages, and receipts is complementary rather than
redundant. Every effort has been made to avoid collecting duplicate information.
If barcode scanning permits linkage to local price data, please remind us of the need for receipt
collection—is the latter meant more as a data supplement? What would be the estimated marginal
improvement in price completeness, based on field test results?
 Barcodes provide information on item descriptions and package size. It is possible to
match barcodes to extant databases containing price information, but those extant
databases generally contain data from multiple stores and multiple time periods. Thus,
the extant price data are averaged over space and time and represent an inferior
alternative to actual prices paid at a particular store on a particular date. The extant
databases also do not include information on price discounts arising from household
use of coupons or store cards, whereas receipts do reflect these discounts. See pages
7-9 of revised Part A and revised Appendix A for further discussion.
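A minimal sketch of the price-attribution logic this describes, preferring the receipt price and falling
back to a database average and then to imputation (function and field names are hypothetical):

    # Illustrative sketch only: attribute a price to a scanned item, preferring
    # the receipt (the actual price paid, net of coupon and store-card
    # discounts) over an extant database average (pooled across stores and
    # dates). Names are hypothetical.
    def attribute_price(upc, receipt_prices, db_avg_prices):
        if upc in receipt_prices:
            return receipt_prices[upc], "receipt"          # store- and date-specific
        if upc in db_avg_prices:
            return db_avg_prices[upc], "database average"  # averaged over space and time
        return None, "impute"                              # fall back to imputation

    price, source = attribute_price("012345678905", {"012345678905": 2.49}, {})
    print(price, source)  # -> 2.49 receipt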


Food amount: The 1.19.11 memorandum notes that the “size or amount” of food as reported by the
respondent can be imputed from published menus or NHANES data. If this is the case, will the agency
remove this question from the instruments for the main study?
 FAFH items are generally menu items for which “size or amount” is difficult to report.
We ask respondents to report “size or amount” only if they know it (for example, when
it is written on a package or menu). When size or amount is not provided, extant data
sources like menus and vendor websites will be used to fill in missing data. Otherwise,
information on size or amount will be imputed from within-sample or extant data. See
pages 8-9 of revised Part A for further discussion.
Scanned data: The 9.23.11 memorandum also indicated that the observed low rate of match between
scanned UPCs and Gladson data could be improved by incorporating alternate (additional) sources of UPC
data from large retailers or private-label manufacturers, with expected improvements in data entry
effort and data quality and, potentially, reductions in cost.
o Please confirm that the agency plans to adopt this approach.
The agency is investigating the use of alternative sources of UPC data.
o Regarding the use of the scanner, given its importance as an information collection tool, were
comments received as to its use in the field, by either participants or field workers?
Field interviewers reported that respondents liked the scanner. See page 12 of
revised Part A.
 Additionally, the 9.23.11 memorandum recommended that a web-based sample
management system be used to release cases in the field. This seems especially
important given the multiple phases and batches planned for this study to support
response rates (keep timely advance letters and initial contact; focus level of effort on
sample of non-respondents) and to improve the freshness of SNAP frames (see B.1) to
maintain efficiency of field operations. Please confirm that the agency plans to adopt
this approach.
MPR has developed a new web-based management system. See page 15 of
revised Part A for a description.
o We note that MPR recommended in a memorandum to the project officer (Rev. 9.23.11) that, as the
TWG suggested, questions on utility expenditures be eliminated and the survey be supplemented
with averages by geographic area. Please confirm this suggestion was adopted.
This recommendation was not adopted because we need more precise
information on utility expenditures to estimate monthly SNAP benefits for low-income households potentially eligible for SNAP. See new study objective 1-j
described in Table A1 of Appendix A. We have, however, eliminated all
expenditure questions in the Final Household Interview that are not needed in
estimating SNAP eligibility or benefit amount. See pages 25-26 of revised Part
A.

II. Training.
Interviewer training: We are concerned that the instruments and procedures have not been field-tested
since they were substantially revised as a result of the pilot study. Although ERS initially
rejected having field interviewers practice collecting on ‘test participants,’ we urge ERS to reconsider.
The field interviewers could administer the protocol on test participants after the interviewer completes a
week doing the data collection on their own and debriefs with MPR/ERS trainers. A non-substantive
change could quickly be processed by OMB to accommodate the tweaks that come out of both the
interviewers collecting their own data for a week and the practice sessions.
 We do not believe that the instruments and procedures have been “substantially
revised” since they were field tested. See pages 10-13 of revised Part A. Field
interviewers will practice survey procedures with one another during their planned four
days of training. We note that each “test participant” could be trained only by one
interviewer because—once introduced to the survey materials—further test training
would not be realistic. The burden and logistics of finding “test participants” for the
175 field interviewers being trained are too great to adopt this approach.
Please describe how field staff will maintain ongoing fidelity to the study protocol after initial training has
been conducted.
Field interviewers will have written scripts to follow when training respondents and they
will use a pre-taped video during this training. Monitoring of training procedures is
planned as well. See page 9 of the revised Part A.
Please provide more detail on what the field managers will actually be doing, given that ERS states that
field interviewers will not be monitored. Will experienced interviewers/field managers join new
interviewers on the first few, or random, field visits to provide guidance on study choreography?
New interviewers will receive the same training as the original 175 interviewers; no joint
visits are planned. As described on page 9 of revised Part A, interviewers’ adherence to
training protocols will be monitored.
Field managers will review case productivity and case outcomes through MPR’s web-based sample management system. Field managers will also conduct refusal
conversions for households determined to be survey eligible but reluctant to
participate.

Respondent training: Do you think that “calling elderly and less educated households mid-week to
provide technical assistance” will sufficiently improve the quality of information provided by these groups,
which you documented in the pilot study to be less likely to complete the protocol? Since many of your
respondents are likely to fall in this group, given the focus of the study on SNAP and low income
households, we are concerned that by mid-week you will have lost the data from the beginning of the
week. Again, we encourage significant simplification of the protocol.
After the Initial Household Interview and training is completed on “Day 0,” the household will
begin recording its food acquisitions the next day (Day 1). On Day 2 a telephone interviewer will
talk with the primary respondent to gather information on FAFH acquired on Days 1 and 2.
The telephone interviewers will be trained to ask respondents about potential problems and
respond with additional training, if needed. The sample management system also enables field
and phone interviewers to post messages to one another detailing problems a respondent may
be having with the protocol.

III. Incentives.
Has MPR conducted the analyses requested by the TWG to examine whether the change in the incentive
from $50 to $100 had any impact on nonresponse bias?




The analyses have been completed and are discussed in revised Appendix
V (Field Test Nonresponse Bias Analysis). The key finding is that
household characteristics that have a statistically significant relationship
with response, at various stages, are either not significant among the high
incentive group, or they are significantly moderated by the higher
incentive. The implications are that the higher incentive is likely to
reduce bias due to nonresponse by reducing differential response rates
among sample subgroups.
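A minimal sketch of the type of model that supports such a finding, using a response-propensity logit
with incentive-by-characteristic interactions (the variable names, input file, and use of statsmodels are
assumptions; the actual analysis appears in revised Appendix V):

    # Illustrative sketch only: test whether the higher incentive moderates the
    # relationship between household characteristics and response. Variable and
    # file names are hypothetical; this is not the Appendix V analysis itself.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("field_test_cases.csv")  # hypothetical field-test case file
    # responded: 0/1; high_incentive: 0/1 ($100 vs. $50); hh_size, low_income: covariates
    model = smf.logit("responded ~ high_incentive * (hh_size + low_income)", data=df).fit()
    print(model.summary())  # significant interaction terms indicate moderation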

Describe the type of gift cards to be used for household members ages 11-14 and 15 and over.
 Wal-Mart or Target gift cards will be provided to respondents, depending on which store is
closer to the SSU where they reside. See page 21 of revised Part A.
Please describe the rationale for the difference in amounts in the proposed incentive experiment for the
main study. It is not clear that we would expect much, if any, noticeable effect from these small differences
in incentives, so it is not clear that this experiment is worth doing given the complications it will add to
the already complex field procedures.
 ERS has decided to drop this experiment.
IV. Sampling.
Although there are four potential frames described in B.1, it appears that ERS and MPR plan to select or
integrate the applicable frames on an SSU-by-SSU basis; is this correct? How these multiple frames are
integrated from a field perspective (see also below) as well as an estimation perspective needs to be more
clearly described.
 See revised section B1.
In B.2, you provided confidence intervals for the estimates of percentages, but these should also be
provided for other key outcomes, such as cost or quantity or nutrition.
 New Table B.2 in revised Part B includes confidence intervals for weekly food
expenditures—a priority 1 data element (see Table A2 in revised Appendix A).
Furthermore, it would seem that a key goal of the study is to make comparisons among the four different
groups of households receiving or not receiving SNAP benefits identified in the first paragraph of B.1. For
these comparisons, what are the minimum effects that you will be able to detect with 80% power and
alpha =.05 for different key outcomes?
 As noted earlier, we are interested in three groups of households and three
intergroup comparisons: SNAP vs. low-income non-SNAP, SNAP vs. SNAP-eligible
nonparticipants, and low-income vs. higher-income. New Table B.3 in revised Part B
presents the minimum detectable differences (MDDs) for these comparisons. As we do
not know the number of nonparticipating households who may be eligible for SNAP,
two different assumptions are made and MDDs provided for each.
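For reference, a minimal sketch of the standard two-sample MDD calculation at 80% power and
alpha = .05 under simple random sampling (the standard deviation and sample sizes below are made-up
inputs, and a clustered design would inflate the result by the square root of the design effect):

    # Illustrative sketch only: minimum detectable difference in means between
    # two independent groups (two-sided alpha = .05, power = .80) under simple
    # random sampling; multiply by sqrt(design effect) for a clustered sample.
    from math import sqrt
    from scipy.stats import norm

    def mdd(sd, n1, n2, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # 1.96 + 0.84
        return z * sd * sqrt(1 / n1 + 1 / n2)

    # e.g., weekly food expenditures with SD = $60, comparing 1,200 SNAP
    # households with 1,600 low-income non-SNAP households:
    print(round(mdd(60, 1200, 1600), 2))  # -> 6.42 (dollars per week)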
What are the field implications of the use of different sampling frames?
o We understand that SNAP agencies in several states will be unable to provide timely lists of SNAP
participant addresses due to budgetary constraints, and that in those states, the National Food
Survey will use only a commercial list for all addresses in the sampled SSUs. How will this additional
listing/screening activity affect field operations?
 As noted at the top of page 4 in revised Part B, more screening effort can be
expected in these states. We expect to be able to meet the same production rates
in these states as elsewhere.


o What steps have been taken to improve the likelihood of receiving timely SNAP address lists for field
work? What steps are in the works now? We understand that this is of particular concern given the
comparatively fast “aging” of lists per your field test experience.
Given the burden on State agencies of providing multiple updates of SNAP
addresses, ERS has decided not to ask for updates. This means that as the
survey period progresses, a larger fraction of SNAP households are likely to
come from the non-SNAP list and vice-versa. We do not plan to target
SNAP households at the beginning because that would create seasonal
differences between our SNAP and non-SNAP groups.
o Regarding assessing the quality of the SNAP frame, the supporting statement language in B.1 seems
to suggest that this quality could not be assessed. However, it seems that other documents from the
agency have described the observed quality of the SNAP frame from field test data, and implications
for the main study; this content was described in B.4. Please incorporate here.
See footnote 8 in revised Part B.
o We note that MPR recommended in a memorandum to the project officer (Rev. 9.23.11) that, as
the TWG suggested, additional SNAP administrative data be processed for sampling for the
second half of the field period to improve the efficiency of the SNAP lists and field operations. Please
confirm that the agency plans to adopt this approach.
As noted above, we are not adopting this approach due to the extra
burden it would impose on State agencies.
Please provide the rationale for the anticipated response rate of 75 percent among those who complete
the screener and are eligible for the study.
Page 11 adds, “This estimate is based on a 65% participation rate in the
field test among the high incentive group, in two PSUs purposively selected
to provide challenging survey conditions.”
What is the basis of the estimated 90% food reporting completion rate? We assume that this figure
does not mean that 90% complete every component of the study accurately.
See footnote 22 in the revised Part B. This does not mean that we expect 90% to
complete every component of the study accurately, but we do expect to obtain valuable
and usable data from 90%. With our planned improvements in instruments, training
procedures and sample management, the percentage may be greater than 90%.
Two-phase sample:
o It is not clear how the data collection procedures are being changed in the second phase. Simply
exerting more effort with the same protocol typically has much less benefit than making fundamental
changes in the second phase to bring in people who were not recruited in the first phase. For
example, the NSFG adds an incentive for screening in the second phase.
 See pages 6-7 of revised Part A.
o It is not clear whether the second phase is geared simply toward screening households that were not
screened in the first stage (and then attempting recruitment) or whether screened households that
refused to participate in the main study were also being targeted. If so, how are these data collection
procedures different than phase 1?
 The two-phase sampling is just for households that could not be contacted.


o Sampling for phase 2 was simply described as “random,” but it would seem that you would want to at
least stratify by SNAP status (when available). Are you training interviewers to gather paradata to
help inform sampling of cases targeted at reducing bias (as is done in the NSFG) rather than just
boosting response rates? Has MPR implemented responsive designs in any other surveys?

 The Phase Two sample will be selected from the pool of households that could not
be contacted, using implicit stratification (sorting) by list source (SNAP vs. ABS) and
SSU. This assures that household groups that are under-represented in the
households contacted in Phase One will be over-represented in the Phase Two
sample and thus have a better chance of being proportionately represented in the
final pool of contacted households.
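A minimal sketch of implicit stratification as described, sorting the uncontacted pool by list source and
SSU and then drawing a systematic sample (a sketch under those stated assumptions, not MPR's actual
procedure):

    # Illustrative sketch only: systematic sampling after sorting by implicit
    # strata (list source, then SSU) spreads the Phase Two sample
    # proportionately across sources and SSUs. Not MPR's actual procedure.
    import random

    def phase_two_sample(uncontacted, n):
        pool = sorted(uncontacted, key=lambda hh: (hh["list_source"], hh["ssu"]))
        interval = len(pool) / n
        start = random.uniform(0, interval)
        return [pool[int(start + i * interval)] for i in range(n)]

    pool = [{"id": i, "list_source": random.choice(["SNAP", "ABS"]),
             "ssu": random.randint(1, 50)} for i in range(5000)]
    print(len(phase_two_sample(pool, 400)))  # -> 400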

V. Public Use Data File:
What data elements will be in the public use data file, versus being used only to generate other
variables? In the “Data Processing” section of Part A, ERS discusses the steps to develop the analytic
files. Is it only the analytic files that will be made available to the public (i.e., attributed price and
nutrient variables and household descriptors, e.g., expense and demographic data)?
 ERS wants the public use files to be as complete and useful for outside researchers
as possible. To this end we expect the public use files to include both constructed
analytic variables (e.g., total household income) and the individual variables used to
construct the analytic variables. See the discussion on page 32 of Part A.
The list of items to be included in the “memorandum assessing data quality” seems to be more
focused on sample characteristics than on such things as inconsistencies in the data, missing data,
and the relative confidence ERS has in the different types of data that are being made available. ERS
should be able to link the expected and actual confidence it has in the data with its initial
prioritization of these types of data (see the prioritization of items in Table A1 requested above).
 See highlighted text on pages 31 and 32 of revised Part A.

VI. Miscellaneous:
Regarding “Higher-income households were not included in the field test of survey protocols
because the task of reporting food acquisitions over a seven-day period is not considered a
substantial cognitive burden for that population,” consider rephrasing as this may be misinterpreted
as indicating that higher-income households, rather than more highly educated households, would be
experiencing less cognitive load.
 See page 2 of revised Part B.
Please remind us if height and weight will be directly assessed by the interviewer, or self-reported by
the participant(s).
 Height and weight will be self-reported; see page 11 of revised Part A.
Please describe how the information collected in the self-administered respondent feedback form
will be used for this information collection. Will recommendations be implemented once the study
has launched?
 The feedback form is not designed to elicit recommendations. Instead, responses
will provide contextual information for interpreting observed behaviors. See
page 11 of revised Part A.


Please remind us how culturally diverse foods may be captured in the standardized scanner books for
items that cannot be scanned.
 Variable weight products that cannot be scanned are pictured in the Primary
Respondent Book and include many culturally diverse foods (e.g., soy nuts,
pastrami, bean sprouts, bok choy, cactus leaves, figs, dandelion greens, specialty
mushrooms, and many others). For items not in the book, respondents are asked
to list the food items on their blue pages. See page 8 in revised Part A.
A.10 Assurance of Confidentiality to Participants: Please confirm that only IDs (and not respondent
names or contact information) will be on the exterior of the food books
 (Per the IRB, food books include a consent box with the member’s signature—see
page 7 of revised Part A.)
and that mailing envelopes do not cite the name of the study, the respondent name or contact
information.
 Mailing envelopes will list only ID numbers; see page 11 of revised Part A.
Please confirm the reading level of these materials is appropriate for this population. This may be
especially important to ensure high quality data receipt, given the complexity and burden of the
information to be collected from vulnerable populations.
 The field test demonstrated that households could follow the survey protocols.
Responses to the respondent feedback form indicated that 70 percent of
respondents found the survey easy or very easy, with another 19 percent reporting
it was neither easy nor difficult. Field interviewers provided anecdotal reports that
respondents liked the scanner. See page 12 of revised Part A.
A.12. Estimates of Annualized Burden Hours and Costs: [MS/BHK: Please advise if the agency needs to
update ROCIS so that burden hours agree—an increase of 10,000 burden hours, about 25 percent
higher.]
In addition to field interviewers providing potential participants with information regarding the
importance of the survey, requirements and incentive, please also provide the authority of the
agency to collect the information, the purpose and use of the information collected, the voluntary
nature of the data collection, the estimated length of participation, and any privacy or confidentiality
protections.
 This information has been added to the consent form and the study brochure.
A.16. Plans for Tabulation and Publication, and Project Time Schedule: Please confirm that variance
adjustment/creation of replicate weights is also a planned data processing activity. Please reference
plans for disclosure review in preparation of a public use file.
 See page 32 of revised Part A.
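For context, a minimal sketch of variance estimation with replicate weights, shown here as a
delete-one-group jackknife (one common scheme; whether the study will use jackknife, BRR, or another
method is not stated in this passback):

    # Illustrative sketch only: delete-one-group jackknife (JK1) replicate
    # weights and the resulting variance of a weighted mean. The replication
    # scheme the study will actually adopt is not specified here.
    import numpy as np

    def jk1_variance(y, w, groups):
        full = np.average(y, weights=w)
        ids = np.unique(groups)
        G = len(ids)
        reps = []
        for g in ids:
            wr = w.copy()
            wr[groups == g] = 0.0             # zero out weights for group g
            wr[groups != g] *= G / (G - 1.0)  # reweight the remaining groups
            reps.append(np.average(y, weights=wr))
        return (G - 1.0) / G * sum((r - full) ** 2 for r in reps)

    rng = np.random.default_rng(0)
    y, w = rng.normal(100, 25, 800), rng.uniform(0.5, 2.0, 800)
    groups = rng.integers(0, 40, 800)
    print(jk1_variance(y, w, groups) ** 0.5)  # standard error of the weighted mean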
The participant training videos must be uploaded into ROCIS.
 We will upload a document that provides a website where the videos may be
viewed.
Have translated copies of all study materials been uploaded into ROCIS? If not, we can provide
conditional approval to the English language versions, and give final approval for the study to be
administered in other languages as those materials become available.
 We will upload to ROCIS translated copies of all study materials as they become
available.


Brochure
Under “You are Invited,” we wonder if this is a complete and current list of information collection activities
proposed. As it stands, there will also be three telephone calls, correct?
This has been updated.
Consider adding the authority of the agency to conduct this information collection to be consistent with the
Paperwork Reduction Act.
Done.
The length of the information collection cited in the brochure seems inconsistent with other study material, such
as the consent materials. Please check.
Information about length of information collection is now consistent across all study materials.
Consent
Please indicate whether assent forms for children ages 11-17 (or otherwise under the age of majority) will be used,
as well as parental consent forms for participation of their children in this data collection.
See page 7 of revised Part A.
Please add authority of the agency to conduct the collection to be consistent with the Paperwork Reduction Act.
Done.
How will different incentive amounts (per phase in the main study) be recorded on consent materials?
The incentive experiment has been dropped.
Consider rephrasing the following sentence as awkward: “You may choose to withdraw from the study at any time.
You will not receive the study incentives if you withdraw before the end of the 7 days for tracking foods you get.”
The sentence has been rephrased. See the revised Consent Form.


