V. Interview Guide: State-Level Data Analysis Staff
OMB Number: 0584-####
Expiration Date: MM/DD/20YY
Good morning/afternoon. Thank you for taking the time to talk with me today. My name is [interviewer’s name], and I work for Westat, a private research company based in Rockville, Maryland. Joining me is my colleague, [name].
Purpose: The U.S. Department of Agriculture’s Food and Nutrition Service, or FNS, is interested in understanding tools used to identify cases likely to have a payment error. These tools may be known by different names, such as case-profiling tools, risk assessment tools, or error-prone profiling. After cases are flagged, they undergo a rigorous process to ensure accurate benefit decisions. FNS hired Westat to conduct a study to learn more about the development and implementation of these tools. The findings from the study will be used to inform the development of case-profiling tools FNS and State agencies use, identify best practices, and develop resources and technical assistance.
How you were selected: We first conducted an online survey of all SNAP State directors. We then worked with FNS to select six State agencies from among survey respondents for more in-depth case studies on their use of case-profiling tools. After we discussed with the SNAP State Director what we hope to learn from your State agency, the State Director identified you as someone who would have valuable input and should be interviewed.
Risks and privacy: We use all data we collect only for the research purposes we describe. FNS knows which State agencies were asked to participate in each case study but does not know the names or job titles of the individuals interviewed. We will report the results of these interviews for each State agency, but your name will not be linked to your responses. In our reports, we may include direct quotes, but they will be presented without the speaker’s name or title. FNS will receive a redacted copy of the transcript of this interview that has been stripped of identifying information, except for the name of your State agency.
Study costs and compensation: There is no cost to you to participate apart from the time you spend with us for this interview, and there is no compensation. The interview takes 60 minutes.
Voluntary participation: Your participation is entirely voluntary. Refusal to participate will not have any impact on your position, your State agency, or nutrition programs. You may take a break, skip questions, say something off the record, or stop participating at any time.
Questions: If you have questions about your rights and welfare as a research participant, please call the Westat Human Subjects Protections office at 1.888.920.7631. Please leave a message with your first name; the name of the research study you are calling about, which is the SNAP Risk Assessment study; and a phone number, beginning with the area code. Someone will return your call as soon as possible.
We have planned for this discussion to last 60 minutes, until [time]. Is that still okay?
With your permission, I would like to record this discussion to help us fill any gaps in our written notes. The recordings, transcripts, and any notes we have will be stored on our secure server and will be destroyed after the project is complete. FNS will not receive any audio recordings.
Do you have any questions? [Answer all questions]
May I turn on the audio recorder now? [Turn on the audio recorder if the participant gives consent]
Now that the audio recorder is on, do you agree to participate? [Pause for response]
And do you consent to being audio recorded? [Pause for response]
To start, please tell me how long you have worked at the agency and what your responsibilities are.
For the rest of this discussion, we will be talking mostly about the [tool name], which your State agency provided information about in the online survey. What was the nature of your involvement with [tool name]?
[Probe: Designed it, tested it? Is familiar with tool but was not involved in development?]
In the survey, the State agency said [tool name] is a [read survey response A18] that flags SNAP cases at risk of payment error using data from [read survey response A20]. Does that description still seem accurate?
[If no] How would you revise the description?
When deciding which case characteristics the tool should focus on, how did you determine where the majority of payment errors were coming from?
Tell me about any data you analyzed to determine where the errors were coming from.
[Probe: SNAP Quality Control data, vendor/contractor data, local agency data, other?]
[If they analyzed data] What sort of analysis did you do on those data?
[Probe: Descriptive statistics, modeling, machine learning?]
How did you decide on your analytical approach?
What outcome or outcomes did you examine?
[Probe: Presence of a payment error? Payment error as a continuous measure?]
How, if at all, did you test the accuracy of the analysis?
What challenges arose when using those data to inform how the tool would be designed?
How did you overcome those challenges?
Thank you. Those details gave me a really helpful background. Now I’d like to ask about the specific information the [tool name] looks for.
My understanding from the survey is that, when the [tool name] tries to identify which cases are at high risk of payment error, it looks at [read survey responses A9–A13, A15a]. Did I capture that information correctly?
[If no] What information does the [tool name] look at to flag SNAP cases at high risk of payment error?
Is there any documentation you could share on whether each factor is categorical, continuous, or measured in some other way?
[Note: Follow up on this at the end, and request any documentation]
Can you recall how the decision was made to focus on those factors?
Who made the decision?
What, if anything, would you change about the factors the [tool name] focuses on?
Can you recall how the decision was made to measure each factor categorically, continuously, or some other way?
Who made the decisions?
What, if anything, would you change about how the factors are measured?
Were any factors considered for the tool that you didn’t end up using?
[If yes]
Which ones?
Why was it decided not to use them?
Have the factors the [tool name] focuses on changed over time? If yes, how?
[Probe to understand whether changes were to focus on different factors altogether or on how those factors were measured—continuous, categorical, etc.]
Why were the changes made?
[Probe to understand whether staff learned they needed to make adjustments as a result of monitoring or testing of the tool]
[If survey response A7 NA] Tell me about how the [tool name] was tested before going live.
What was being tested?
[Probe: Were they looking at the overall accuracy of the tool? Equity of the tool across subgroups? User-friendliness?]
Who did the testing?
Did those early tests reveal anything that needed to be fixed?
[If yes]
Tell me a little about what needed to be fixed.
How long did that take to resolve?
Was the tool also tested after going live? If yes, how?
[Probe: What were they looking for with those tests—accuracy, equity, user-friendliness?]
Did anything need to be fixed when testing the tool after it went live?
[If yes]
Tell me a little about what needed to be fixed.
How long did that take to resolve?
If a State wanted to explore whether the [INSERT TOOL NAME FROM A1a or A1b] flags SNAP cases at risk of a payment error in a way that unintentionally affects a particular race, ethnicity, gender, or other protected class more than others, how do you think they could go about that?
[Note: For example, a tool may disproportionately flag certain ethnic groups (e.g., Hispanic households) if it looks for households with 8+ people.]
Was that something your team considered during testing? Why or why not?
[If tested for unintentional effects]
How did the team go about that?
What were the findings?
What changes, if any, were made after reviewing the findings?
When you think about the whole development and testing process, to what extent did the team discuss the tool’s potential for unintentionally flagging a protected class?
[If discussed]
What terms did the team use to describe this concern? Unintentional bias? Something else?
How did these considerations factor into the tool’s construction?
[If not discussed]
Did the team consider whether the tool might be more accurate for some subgroups than others?
What data, if any, are tracked on the outcomes of the cases the [tool name] flags?
[Note: We are asking whether they track any follow-up steps taken for these cases, such as efforts to find additional documentation on the household or calls to the household to ask follow-up questions]
[If track data]
Who tracks those data?
[If tool is used after benefit determination] Do the data indicate whether the flagged cases were actually found to have payment errors?
[If do not track data]
What data would you like to track, if any?
What, if anything, makes it difficult to collect data on the [tool name]?
What, if anything, do you report on the cases the [tool name] flags?
What information is shared in these reports?
Who receives these reports?
[Probe: Local office staff, State-level staff, FNS staff?]
What do the recipients do with this information?
Would you be able to share the latest of these reports with us?
[Note: Ask for this report again at the end of the interview]
Aside from reporting, what other steps are you required to take based on the results of those reports?
[Probe to understand how these steps are carried out, when, and by whom.]
We know this can be hard to determine, but have you been able to ascertain whether the tool has had an impact on payment error rates, either good or bad?
[Probe to understand whether the impact is their perception or is based on data]
This discussion has been very helpful.
If you were to talk to other State agencies considering implementing a similar tool, what advice would you give them?
Before we wrap up, are there any key challenges to building or implementing case-profiling tools like [tool name] that we haven’t already discussed?
We’ve reached the end of the interview. Thank you so much for taking the time to talk with us and share your experiences. The information you provided gave us valuable insights into how tools like [tool name] work.
[If applicable] I recall you mentioning that you would be willing to share [documents] with me. Those would be really helpful to see, so thank you for offering to send them. You can send them to me at [email address]. I can also set up a secure FTP site to receive the materials if the documents contain identifying information.
This information is being collected to provide the Food and Nutrition Service (FNS) with key information on case-profiling tools used by SNAP State agencies. This is a voluntary collection, and FNS will use the information to examine risk assessment tools in SNAP. This collection requests personally identifiable information under the Privacy Act of 1974. According to the Paperwork Reduction Act of 1995, an agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a valid OMB control number. The valid OMB control number for this information collection is 0584-####. The time required to complete this information collection is estimated to average 0.75 hours (45 minutes) per response. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to: U.S. Department of Agriculture, Food and Nutrition Service, Office of Policy Support, 1320 Braddock Place, 5th Floor, Alexandria, VA 22306, ATTN: PRA (0584-####). Do not return the completed form to this address. If you have any questions, please contact the FNS Project Officer for this project, Eric Williams, at [email protected].