Appendix G: Comments Received from NASS on the "Product of USA" Labeling Study

OMB Control Number: 0583-0186





Analyzing Consumers’ Value of “Product of USA” Labeling Claims Survey Review



Reviewer: Josh Parcel, NASS Statistician



*The survey design and methodology look very well thought out and appropriate.



Section: P-USA Part A_ICR_final_3_24_clean.docx

*Page 1

1. Comment: This is probably taken directly from the “Policy Book,” but doesn’t point 2 fully subsume point 1 in the excerpt below? Any product satisfying point 1 is processed in the United States and therefore already qualifies under point 2, which makes point 1 redundant.

The “Policy Book” states that labeling may bear the phrase “Product of USA” under one of the following conditions:

  1. if the country to which the product is exported requires this phrase, and the product is processed in the United States; or

  2. if the product is processed in the United States (i.e., is of domestic origin).


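To make the logic concrete, here is a minimal Python sketch (the predicates are hypothetical stand-ins, not from the ICR) showing that any product satisfying point 1 also satisfies point 2:

    # Minimal sketch of the subsumption point (hypothetical predicates).
    def condition_1(processed_in_usa: bool, export_country_requires_phrase: bool) -> bool:
        # Point 1: the export country requires the phrase AND the
        # product is processed in the United States.
        return export_country_requires_phrase and processed_in_usa

    def condition_2(processed_in_usa: bool) -> bool:
        # Point 2: the product is processed in the United States.
        return processed_in_usa

    # Whenever condition 1 holds, condition 2 also holds, so point 2
    # alone already covers every case point 1 allows.
    for processed in (True, False):
        for export_req in (True, False):
            if condition_1(processed, export_req):
                assert condition_2(processed)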

2. Since participants are told to actively review the product for 20 seconds before being asked what they saw on the item, is there any information available comparing how often labeling is identified or sought out during purchase decisions when there is no prompt to review (for either current products with “Product of USA” labeling or other labeling, such as country-of-origin labeling (COOL))? In the early stages of implementation, when the labeling’s purpose or presence may not be known to a large portion of the general public, will the estimated willingness to pay (WTP) be more representative of a prompted review or an unprompted one? Note: even if this concern applies, estimating unprompted WTP may still be infeasible within the scope of this survey, and it may already be partly addressed by asking how important various features are and by the additional analyses FSIS would conduct using scanner data (page 33); I mention it only as a consideration.

3. Since respondents won’t be asked any questions that are personal in nature, it may be unlikely that responses to open-ended questions or comments will contain personally identifiable information (PII); however, I wanted to check that responses to open-ended questions and comments would also be scrubbed for potential PII, since page 10 mentions the separation of databases to protect against revealing PII (a hypothetical illustration of such scrubbing follows the database list below).


More specifically, the following data are in three separate databases:

  • panel member information

  • survey link file (linkage between survey-specific respondent identifier and respondent ID)

  • survey data
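For illustration, a minimal Python sketch of what scrubbing open-ended responses for common PII patterns could look like (the patterns and replacement tokens are assumptions, not the contractor’s actual process):

    import re

    # Hypothetical PII-scrubbing pass for open-ended responses:
    # replace email addresses and U.S.-style phone numbers with tokens.
    PII_PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    ]

    def scrub(text: str) -> str:
        for pattern, token in PII_PATTERNS:
            text = pattern.sub(token, text)
        return text

    print(scrub("Call me at 555-123-4567 or jane.doe@example.com"))
    # Call me at [PHONE] or [EMAIL]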

*Page 13

4. Is the 220 days until RTI provides FSIS the report measured from OMB approval or from when data collection is completed? I may have misread, but I think the sentence on page 13 and Table A-2 may contradict each other on this point: if it is 220 days from OMB approval, and data collection doesn’t begin until up to 105 days after OMB approval, this would allow a maximum of 115 days from the end of data collection to when FSIS would receive the report from RTI (based on Table A-2 below).

RTI will provide FSIS a report that summarizes the study methods and results within 220 days of completing the data collection.

Table A-2. Project Schedule

    Date                                      Activity
    Within 75 days following OMB approval     Begin pilot study
    Within 105 days following OMB approval    Begin data collection for web-based survey/experiment
    Within 220 days following OMB approval    Complete report on web-based survey/experiment
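A minimal sketch of the arithmetic behind this concern, using the values from Table A-2 above:

    # If the report is due within 220 days of OMB approval and data
    # collection may begin as late as 105 days after approval, then at
    # most 220 - 105 = 115 days can separate the end of data collection
    # from the report deadline (fewer if collection runs longer).
    report_due_after_approval = 220        # days, from Table A-2
    collection_start_after_approval = 105  # days, from Table A-2
    max_window_after_collection = report_due_after_approval - collection_start_after_approval
    print(max_window_after_collection)     # 115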





Section: ‘Appx A_ Instrument_3_24_22.docx’ and ‘P-USA Part B_ICR_final_3_24_22_clean.docx’



5. What is the target population? Is it only households with an adult who does at least half of the grocery shopping, or all U.S. households with a primary or shared shopper who purchased a meat product in the last 6 months? Would using the current screener question, which asks whether the respondent does “at least half of the grocery shopping for their household,” potentially bias respondent selection by placing a lower probability of selection on all members of certain household structures, and would this be significant? Example: a household with 1 adult (where the single adult may do all the shopping) versus a household with 4 adults where one adult does 40 percent of the grocery shopping and the other three each do 20 percent (see the sketch after the S2 excerpt below). Note: I initially thought the planned post-data-collection raking to Census targets might alleviate some of the potential bias the screener could introduce if the target population doesn’t entirely align with the screener; however, I wasn’t sure this was the case, because a later statement on page 11 made me think the weight adjustment might be conditional on the screener criteria (see ‘Weight adjustment statement 1’ and ‘Weight adjustment statement 2’ below).



*The sampling frame for the web-based survey/experiment is the U.S. general population of adults (18 years or older) who are members of the KnowledgePanel and speak English or Spanish (the survey will be translated into Spanish) and who self-reported on their panel profile survey that they do at least half of the grocery shopping for their household.



*A selected sample of panel members will be invited to participate in the study via email (see Appendix B for the email invitation). Surveyed individuals will be adults (18 years of age or older) who speak English or Spanish (the survey will be translated into Spanish), have primary or shared responsibility for grocery shopping for the household, and are purchasers of meat products. To maximize the response rate, up to two email reminders will be sent to nonresponding sample members (see Appendix C).

* S2. Which of the following best describes your current role in grocery shopping at a store or online for all of your household?

  1. I do none of the grocery shopping (TERMINATE)

  2. I do some of the grocery shopping (TERMINATE)

  3. I do at least half of the grocery shopping

  4. I do all the grocery shopping

[Terminate if S2 = 1, 2, or refused]
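A minimal sketch of the household-structure example from comment 5 (the shopping shares are hypothetical; passing S2 requires response option 3 or 4, i.e., doing at least half of the shopping):

    # Hypothetical shares of household grocery shopping per adult.
    households = {
        "single adult": [1.00],
        "four adults": [0.40, 0.20, 0.20, 0.20],
    }

    for name, shares in households.items():
        # An adult passes the screener only if they do >= half the shopping.
        eligible = sum(1 for s in shares if s >= 0.5)
        print(f"{name}: {eligible} of {len(shares)} adults pass the screener")

    # single adult: 1 of 1 adults pass the screener
    # four adults: 0 of 4 adults pass the screener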


Weight adjustment statement 1:

Once the study sample has been selected and fielded, and the survey data edited and made final, the survey data will be weighted. The weighting process starts by computing base weights to address any departure from an EPSEM design. The design weights will be adjusted for any survey nonresponse and for any under- or overcoverage imposed by the study-specific sample design. Depending on the specific target population for a given study, geodemographic distributions for the corresponding population will be obtained from the CPS, ACS, or in certain instances from the weighted KnowledgePanel profile data. For weighting adjustments, an iterative proportional fitting (raking) procedure will be used to adjust design weights to produce final weights that will be aligned with respect to all study benchmark distributions simultaneously.
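Since raking is central here, a minimal sketch of iterative proportional fitting with two toy benchmark dimensions (illustrative only; the dimension choices and targets are assumptions, not the panel’s actual procedure):

    import numpy as np

    def rake(weights, dims, targets, iters=50):
        """Scale weights so weighted category totals match each benchmark.

        weights: initial design weights, shape (n,)
        dims:    list of integer category codes per respondent, each shape (n,)
        targets: list of benchmark totals per category, aligned with dims
        """
        w = np.asarray(weights, dtype=float).copy()
        for _ in range(iters):
            # Cycle through each dimension, rescaling within categories;
            # repeated passes align all margins simultaneously.
            for cats, target in zip(dims, targets):
                for c, t in enumerate(target):
                    mask = cats == c
                    total = w[mask].sum()
                    if total > 0:
                        w[mask] *= t / total
        return w

    # Toy example: 6 respondents, two dimensions (e.g., age group, region).
    age = np.array([0, 0, 0, 1, 1, 1])
    region = np.array([0, 1, 0, 1, 0, 1])
    w = rake(np.ones(6), [age, region],
             [np.array([4.0, 2.0]), np.array([3.0, 3.0])])
    print(np.round(w, 3))  # weighted margins now match both benchmarks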

*After reading the section on page 11, shown below, I was less sure that the post-data-collection weighting adjustment would mitigate possible bias introduced by the screener question(s), since the adjustment appears to be conditional on the screener criteria. I wasn’t sure about the specifics, so I am mentioning it.

Weight adjustment statement 2:

Standard weighting dimensions will be as follows, though adjustments will be made as needed based on the target population of interest (i.e., grocery shoppers):

Section: ‘P-USA Part B_ICR_final_3_24_22_clean.docx’

6. Is it possible that at least some respondents will only partially complete the survey, or must all prior sections be completed to advance to subsequent sections and submit? If partial completes are possible, how will they be handled: will only fully completed surveys be used, with partial completes treated as unusable/nonresponse? Imputation is mentioned on page 36; will it be used, and if so, how?

*Page 36

The best variables to use in the weight adjustment and the imputation processes are variables that are related to the response propensity and the values of key analytic variables.
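For illustration, a minimal Python sketch of one way partial completes could be handled, imputing a key analytic variable within cells defined by a covariate related to response propensity (the data, variable names, and cell-mean rule are assumptions; the ICR does not specify this approach):

    import pandas as pd

    # Hypothetical respondent file: one key analytic variable (WTP),
    # missing for partial completes.
    df = pd.DataFrame({
        "age_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
        "wtp":       [2.5, None, 3.0, 3.4, None, 4.0],
    })

    complete = df["wtp"].notna()
    print(f"{complete.sum()} complete, {(~complete).sum()} partial")

    # Cell-mean imputation within age group, a simple stand-in for the
    # covariate-based imputation alluded to on page 36.
    df["wtp_imputed"] = df.groupby("age_group")["wtp"].transform(
        lambda s: s.fillna(s.mean())
    )
    print(df)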
