W. Interview Guide: Risk Assessment Tool Development Lead, Quality Assurance Director
OMB Number: 0584-####
Expiration Date: MM/DD/20YY
Good morning/afternoon. Thank you for taking the time to talk with me today. My name is [interviewer’s name], and I work for Westat, a private research company based in Rockville, Maryland. Joining me is my colleague, [name].
Purpose: The U.S. Department of Agriculture’s Food and Nutrition Service, or FNS, is interested in understanding tools SNAP State agencies use to identify cases likely to have a payment error. These tools may be known by different names, such as case-profiling tools, risk assessment tools, or error-prone profiling. After cases are flagged as high risk, they undergo a more rigorous process to ensure accurate benefit decisions. FNS hired Westat to conduct a study to learn more about the development and implementation of these tools. The findings from the study will be used to inform the development of case-profiling tools FNS and State agencies use, identify best practices, and develop resources and technical assistance.
How you were selected: We conducted an online survey of all SNAP State agencies and then worked with FNS to select six State agencies for more in-depth case studies on their use of case-profiling tools. Your SNAP State Director identified you as someone who would have valuable input about your State agency’s case-profiling tool.
Risks and privacy: We use all data we collect only for the research purposes we describe. FNS knows which State agencies were asked to participate in each case study but does not know the names or job titles of the individuals interviewed. We will report the results of these interviews for each State agency, but your name will not be linked to your responses. In our reports, we may include direct quotes, but they will be presented without the speaker’s name or title. FNS will receive a redacted copy of the transcript of this interview that has been stripped of identifying information, except for the name of your State agency.
Study costs and compensation: There is no cost to you to participate apart from the time you spend with us for this interview, and there is no compensation. The interview takes 90 minutes.
Voluntary participation: Your participation is entirely voluntary. Refusal to participate will not have any impact on your position, your State agency, or nutrition programs. You may take a break, skip questions, say something off the record, or stop participating at any time.
Questions: If you have questions about your rights and welfare as a research participant, please call the Westat Human Subjects Protections office at 1.888.920.7631. Please leave a message with your first name; the name of the research study you are calling about, which is the SNAP Risk Assessment study; and a phone number beginning with the area code. Someone will return your call as soon as possible.
We have planned for this discussion to last 90 minutes, until [time]. Is that still okay?
With your permission, I would like to record this discussion to help us fill any gaps in our written notes. The recordings, transcripts, and any notes we have will be stored on our secure server and will be destroyed after the project is complete. FNS will not receive any audio recordings.
Do you have any questions? [Answer all questions]
May I turn on the audio recorder now? [Turn on the audio recorder if the respondent gives consent]
Now that the audio recorder is on, do you agree to participate? [Pause for response]
And do you consent to be audio recorded? [Pause for response]
To start, please tell me how long you have worked at your [agency/organization/office] and what your responsibilities are.
For the rest of this discussion, we will be talking mostly about the [tool name] that your State agency provided information about in the online survey. What was the nature of your involvement with [tool name]?
[Probe: Designed it, built it, tested it, promoted it, other?]
In the survey, the State agency said the [tool name] is a [read survey response A18] that flags SNAP cases at risk of payment error using data from [read survey response A20]. Does that description still seem accurate?
[If no] How would you revise the description?
Has the [tool name] been modified in any way since it was first implemented?
[If yes]
Please explain how it evolved.
[Probe: When and why it evolved; who initiated those changes?]
What were the reasons for those changes?
[Probe: Prompted by staff feedback, review of data, civil rights complaint, other?]
I want to understand why and how the [tool name] was designed and built.
What motivated your State agency to develop the [tool name]?
The survey indicates that the following types of staff were involved in designing the tool: [read survey response A5]. How was it decided who would design it?
[If A5 = vendor/contractor] How much input did the State agency have in how the tool was developed?
Did the vendor offer a premade case-profiling tool, or did they have to create your State’s tool from scratch?
What were the pros and cons of that?
How much input did the State agency have on the final algorithm for the tool?
To figure out which case characteristics to have the tool focus on, how did you identify the sources of payment errors?
[Probe: analyzed SNAP Quality Control data or vendor/contractor data, other?]
[If they analyzed data to inform those decisions] What sort of analysis did you do on those data?
[Probe: Descriptive statistics, modeling, machine learning?]
Where were the data pulled from?
[Probe: Centralized SNAP State database, local agency databases, other?]
What challenges arose when using those data to inform how the tool would be designed?
Now I’d like to ask about the specific information the [tool name] looks for.
My understanding from the survey is that, when the [tool name] tries to identify which cases are at high risk of payment error, it looks at [read survey response A9–A13, A15a]. Did I capture that information correctly?
[If no] What information does the [tool name] look at to flag SNAP cases at high risk of payment error?
Is there any documentation you could share on how each variable is operationalized in terms of whether it’s categorical, continuous, or measured in some other way?
[Note: Make a note to follow up on this question at the end, and request that documentation]
Can you recall how the decision was made to focus on those variables?
Who made the decision?
What, if anything, would you change about the variables the [tool name] focuses on?
Can you recall how the decision was made to operationalize each variable, in terms of whether they’re categorical, continuous, or measured some other way?
Who made the decisions?
What, if anything, would you change about how the variables are operationalized?
Were any variables considered for the tool that you didn’t end up using?
[If yes]
Which variables?
Why did you decide not to use them?
Have the variables the [tool name] focuses on changed over time? If yes, how?
[Probe to understand whether changes were to focus on different variables altogether or how those variables were measured—continuous, categorical, etc.]
Why were the changes made?
[Probe to understand whether staff learned they needed to make adjustments as a result of monitoring or testing of the tool]
[If survey A7 NA] Tell me about how the [tool name] was tested before going live.
[Probe: Were they looking at the overall accuracy of the tool? Equity of the tool across subgroups? User-friendliness?]
Who did the testing?
Did those early tests reveal anything that needed to be fixed?
Was the tool also tested after going live? If yes, how?
[Probe: What were they looking for with those tests—accuracy, equity, user friendliness?]
Did anything need to be fixed when testing the tool after it went live?
If a State wanted to explore whether the [INSERT TOOL NAME FROM A1a or A1b] flags SNAP cases at risk of a payment error in a way that unintentionally affects a particular race, ethnicity, gender, or other protected class more than others, how do you think they could go about that?
[Note: For example, a tool may disproportionately flag certain ethnic groups (e.g., Hispanic households) if it looks for households with 8+ people.]
Was that something your team considered during testing? Why or why not?
[If tested for unintentional effects]
How did the team go about that?
What were the findings?
What changes, if any, were made after reviewing the findings?
What terminology did you use to discuss this type of disproportionate flagging of certain protected classes? Unintentional bias? Something else?
When you think about the whole development and testing process, to what extent did the team discuss the tool’s potential for disproportionately flagging protected classes?
[If discussed]
How did the team define ‘disproportionately flagging protected classes’ in this context?
How did these considerations factor into the tool’s construction?
[If not discussed]
Did the team consider whether the tool might be more accurate for some subgroups than others?
My next set of questions will help me better understand how the [tool name] was actually implemented.
How did you develop the procedures for using the tool?
[Note: If the tool was a checklist, these procedures may relate to using the checklist to flag a case and conduct any followup steps. If the tool was an algorithm, these procedures may have been a written explanation of how and when the tool flags cases and any followup steps for staff on the flagged cases.]
Who was responsible for developing those procedures?
Was any training conducted to help staff understand the [tool name] and how to use it?
[If yes]
Who led the training?
Who attended the training?
[If they mention local office staff, clarify whether it was at the supervisor/manager level or the frontline worker level.]
What did the training cover?
My understanding from the survey is that the [tool name] flags SNAP cases thought to be at risk of payment error at [read survey response A19]. Is that correct?
[If no] When does [tool name] flag a SNAP case thought to be at risk of payment error?
[Probe: During certification process, after certification but before benefits are issued, after certification and before recertification, during recertification, other?]
If you could, would you adjust the [tool name] to flag cases at a different point in the process?
[If yes]
When would you prefer a case be flagged?
Why would that change be helpful?
After the tool flags a case as being at risk of payment error, what is supposed to happen next?
What is the timeframe in which those steps have to occur?
What kinds of staff are involved?
[Probe: Local office staff, State-level staff?]
What percent of the time would you estimate that the staff are able to complete those steps exactly as they are spelled out?
[If not 100% of the time] What makes it difficult for staff to complete those follow-up steps on cases that are flagged?
[If vendor created tool] If an aspect of the tool needs to be updated, is the vendor responsible for doing that or is it someone else?
How quickly are those updates typically made?
What challenges arise when making those updates?
What helps that process go smoothly?
What data, if any, are tracked on what happens to the cases the [tool name] flags?
[Note: We are asking if they track any followup steps taken for these cases, such as efforts to find additional documentation on the household or calls to the household to ask followup questions]
Who tracks those data?
[If tool is used after benefit issuance, per survey question A19] Do the data indicate whether the flagged cases were actually found to have payment errors?
What, if anything, do you report on the cases the [tool name] flags?
What information is shared in these reports?
Who receives these reports?
[Probe: Local office staff, State-level staff, FNS staff?]
What do the recipients do with this information?
Would you be able to share the latest of these reports with us?
[Note: Ask for this report again at the end of the interview]
Apart from what you just mentioned, do you know of any other ways the State agency uses the data on the cases flagged as being at risk of payment error?
[Probe to understand if the data are used to identify local offices that need additional training.]
What do you believe are the biggest challenges to implementing the [tool name]?
[Probe: IT issues, staffing challenges, training, other?]
We know this can be hard to determine, but have you been able to ascertain whether the tool has had an impact on payment error rates, either good or bad?
[Probe to understand whether the impact is their perception or based in data.]
[If tool had positive impact on error rates] Thinking back on the work involved in developing, testing, and implementing the tool, to what extent might those costs be balanced out by improvements to payment accuracy?
While the tool [is/was] in use, what other strategies, if any, has the State agency used to try to reduce payment error rates?
If you were to talk to another SNAP State agency considering implementing a similar tool, what advice would you give them before they roll it out?
[If applicable] Discontinuing the Tool
Now I’d like to ask a few questions about discontinuing the [tool name].
In what year was the tool discontinued?
Tell me how the decision was made to stop using the [tool name].
Who made the decision?
Were any data considered when making the decision? If yes, explain.
Have you been able to ascertain whether discontinuing the tool had an impact on payment error rates, either good or bad?
[Probe to understand whether the impact is their perception or based in data.]
This information has been very helpful. These last few questions will give me a little more understanding of the State and local context before we wrap up.
[Ask questions 32–35 if respondent is a State-level staff person]
Do you feel that your local offices have enough qualified staff to review cases and make eligibility determinations? Why or why not?
[If not enough staff] How long have local offices been short-staffed?
[Probe: Since before COVID or more recently?]
Do you feel your local offices have enough funding to properly review cases and make eligibility determinations? Why or why not?
We know that COVID-19 drastically affected what were formerly standard practices when moving a SNAP case through the eligibility determination process. For instance, SNAP offices largely stopped conducting certification interviews. How else did COVID-19 change SNAP processes in ways that may have affected the State agency’s payment error rate?
[Note: Be clear on whether the impact on error rates was positive or negative]
[Ask if tool was implemented between 2017 and 2019, per survey questions A3 and B4] We see from the official payment error rates FNS released that your State agency’s payment error rate ranged from [X] to [Y] between 2017 and 2019. Can you think of anything happening at your State agency during that time that may have affected the payment error rate?
[Probe: New management information system, policy change, corrective action plans, significant staffing changes, other?]
[Note: Be clear on whether the impact on error rates was positive or negative]
Are there any key challenges to successfully building or implementing tools like [tool name] that we haven’t already discussed?
Can you think of anything that makes these tools more accurate in identifying SNAP cases at risk of payment error?
We’ve reached the end of the interview. Thank you so much for taking the time to talk with us and share your experiences. The information you provided gave us valuable insights into how tools like [tool name] work.
[If applicable] I recall you mentioning that you would be willing to share [documents] with me. Those would be helpful to see, so thank you for offering to send them. You can send them to me at [email address]. I can also set up a secure FTP site to receive the materials if the documents contain identifying information.
This information is being collected to provide the Food and Nutrition Service (FNS) with key information on case-profiling tools used by SNAP State agencies. This is a voluntary collection, and FNS will use the information to examine risk assessment tools in SNAP. This collection requests personally identifiable information under the Privacy Act of 1974. According to the Paperwork Reduction Act of 1995, an agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a valid OMB control number. The valid OMB control number for this information collection is 0584-####. The time required to complete this information collection is estimated to average 1.75 hours (105 minutes) per response. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to U.S. Department of Agriculture, Food and Nutrition Service, Office of Policy Support, 1320 Braddock Place, 5th Floor, Alexandria, VA 22306, ATTN: PRA (0584-####). Do not return the completed form to this address. If you have any questions, please contact the FNS Project Officer for this project, Eric Williams, at [email protected].