NASS Comments to Assessment of the Contribution of an Interview to SNAP Program Eligibility and Benefit Determination Study


OMB: 0584-0582


3/22/12

Michael Jacobsen

OMB DOCKET FOR THE 2012 ASSESSMENT OF THE CONTRIBUTIONS OF AN INTERVIEW TO SUPPLEMENTAL NUTRITION ASSISTANCE PROGRAM ELIGIBILITY AND BENEFIT DETERMINATIONS STUDY

I have reviewed this OMB docket and have the following comments. This is a very well-organized docket. The authors provide a strong justification for this study and a highly detailed plan for capturing the data. However, I have questions about some parts of the study, most of which concern the sampling procedures.

In Part B, the site selection procedures are somewhat hazy. What do you mean by “demonstration” and “comparison” sites? Do they have multiple offices for you to sample, or does each site have only one office? What would happen if only one demonstration and one comparison site were selected? How would this affect the statistics? Furthermore, do you need an equal number of demonstration and comparison sites in each state? Would unequal samples of demonstration sites and comparison sites be allowable in the survey? My assumption is that they are not, but this is not clear.

In addition, the footnote for the site selection procedures states that you will guide the states to select sites with equivalent characteristics. What do you mean by "equivalent"? Does this mean that each demonstration site will be paired with a comparison site that contains equivalent characteristics? Or will the demonstration and comparison sites be equivalent to other SNAP office districts in the state? I can see this affecting your analysis.

What is the sampling rate for client survey respondents in the comparison sites? Table B.1.2 shows the sampling rates for the demonstration sites within each state but not for the comparison sites. Based upon the procedures outlined on page 3, I am assuming that, out of the 608 clients sampled, 304 will be from a demonstration site and the other 304 will be from the comparison site. Is this correct? If so, it should be made more obvious in the docket.

Also, you indicated two demonstration sites in Table B.1.2, but you mention on page 2 that the states will “… identify one or more localities to implement the no-interview model (demonstration sites).” Which number is correct? Do you plan on only having two demonstration sites?

This is not a real issue but I was confused about one of the examples you used to restrict focus group participants. You mention that you may exclude an applicant if his/her procedural denial occurred outside of a hypothetical time range. However, it seems to me that this might be a person that you would want to include in the focus group. Again, there is nothing wrong with this example; it just confused me.



In questions D1-D5 on your CATI survey, why is there no “neutral/indifferent” option? Furthermore, this is just a suggestion but could you rephrase the questions this way: “Please rank the following statements on a scale from 1 to 4, with 1 meaning ‘strongly agree’ and 4 meaning ‘strongly disagree’ …”?

In your Focus Group Guide, how would you handle someone asking about what would happen to their responses (e.g., recorded and stored; recorded, transcribed and destroyed; etc.)? This question could be on many participants’ minds.

Finally, there was a small grammatical error (two periods) at the bottom of page 17.



RESPONSES TO NASS’ COMMENTS


  1. What do you mean by “demonstration” and “comparison” sites?

Demonstration sites are the pilot-test sites, and comparison sites are the sites with which the demonstration sites will be compared. We did not use the terms "treatment" and "control" sites because random sampling was not used in 2 of the 3 states. The term "demonstration" also indicates that the project will be implemented for a specific period of time rather than as a permanent change, and that the project will be evaluated.


  2. Do they have multiple offices for you to sample or does each site have only one office?

Each site has multiple offices.


  3. What would happen if only one demonstration and one comparison site were selected?

That will not be the case in any of the states. At the state orientation meeting, the states were advised to select multiple demonstration and comparison sites, and they are doing so.


  4. Furthermore, do you need an equal number of demonstration and comparison sites in each state?

Yes, each state will have the same number of demonstration and comparison sites. Each pair of demonstration and comparison sites will be similar in characteristics such as characteristics of the population, population density, SNAP participation trends, SNAP advocacy and outreach, level of modernization, economic indicators, and waivers and demonstrations.


  5. The footnote for the site selection procedures states that you will guide the states to select sites with equivalent characteristics. What do you mean by "equivalent"?

The demonstration and comparison sites must be similar in characteristics of the population, population density, SNAP participation trends, SNAP advocacy and outreach, level of modernization, economic indicators, and waivers and demonstrations.


  6. Does this mean that each demonstration site will be paired with a comparison site that contains equivalent characteristics?

Yes.


  7. What is the sampling rate for client survey respondents in the comparison sites? Table B.1.2 shows the sampling rates for the demonstration sites within each state but not for the comparison sites. Based upon the procedures outlined on page 3, I am assuming that, out of the 608 clients sampled, 304 will be from a demonstration site and the other 304 will be from the comparison site. Is this correct? If so, it should be made more obvious in the docket.

“Demonstration Site #1” and “Demonstration Site #2” in Table B.1.2 should be “Demonstration Sites” and “Comparison Sites.” We will resubmit Part B with corrections to the table. A total of 608 clients will be sampled in the demonstration sites, and 608 will be sampled in the comparison sites.


  8. Also, you indicated two demonstration sites in Table B.1.2 but you mention on page 2 that the states will “… identify one or more localities to implement the no-interview model (demonstration sites).” Which number is correct? Do you plan on only having two demonstration sites?

The mislabeling in Table B.1.2 might have led to this question. Both of the demonstration site states, Oregon and North Carolina, intend to have more than two demonstration sites each.


  9. This is not a real issue but I was confused about one of the examples you used to restrict focus group participants. You mention that you may exclude an applicant if his/her procedural denial occurred outside of a hypothetical time range. However, it seems to me that this might be a person that you would want to include in the focus group. Again, there is nothing wrong with this example; it just confused me.

A specific time period of 3 months is used for two reasons: the 3 months prior will fall within the demonstration period, and it provides a sampling frame from which the participants can be selected.


  10. In questions D1-D5 on your CATI survey, why is there no “neutral/indifferent” option? Furthermore, this is just a suggestion but could you rephrase the questions this way: “Please rank the following statements on a scale from 1 to 4, with 1 meaning ‘strongly agree’ and 4 meaning ‘strongly disagree’ …”?

In our opinion, these questions do not lend themselves to a neutral response category. As a result of the pre-test, the text was revised to make the questions simpler for respondents by first asking whether they agree/are satisfied or disagree/are dissatisfied, and then following up with the strength of the agreement or disagreement. Nothing in the pre-test results indicated that the questions are unclear or misleading without a neutral option.


In terms of changing the response categories to a 1-to-4 scale, because this is a telephone interview, there is concern that such a rephrasing could make the questions less clear: respondents often need to be reminded what the numbers represent. This type of scale is sometimes used in in-person interviews with show cards, where the respondent can simply point to a response. Based on past experience, we believe the current text is cognitively easier for respondents to follow during a phone interview.




  11. In your Focus Group Guide, how would you handle someone asking about what would happen to their responses (e.g., recorded and stored; recorded, transcribed and destroyed; etc.)? This question could be on many participants’ minds.

This information is provided on pages 13–15 of Part A.


  12. Finally, there was a small grammatical error (two periods) at the bottom of page 17.

This has been corrected.


