W. Interview Guide: Risk Assessment Tool Development Lead, Quality Assurance Director

OMB Number: 0584-####    Expiration Date: MM/DD/20YY

Introduction

Good morning/afternoon. Thank you for taking the time to talk with me today. My name is [interviewer’s name], and I work for Westat, a private research company based in Rockville, Maryland. Joining me is my colleague, [name].

Purpose: The U.S. Department of Agriculture’s Food and Nutrition Service, or FNS, is interested in understanding tools SNAP State agencies use to identify cases likely to have a payment error. These tools may be known by different names, such as case-profiling tools, risk assessment tools, or error-prone profiling. After cases are flagged as high risk, they undergo a more rigorous process to ensure accurate benefit decisions. FNS hired Westat to conduct a study to learn more about the development and implementation of these tools. The findings from the study will be used to inform the development of case-profiling tools FNS and State agencies use, identify best practices, and develop resources and technical assistance.
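
[Note for interviewers: At their core, these tools apply a set of error-prone case characteristics to decide which cases receive extra review. The Python sketch below illustrates the general idea only; the characteristics, weights, and threshold are hypothetical examples, not any State agency's actual criteria.]

    # Illustrative rule-based case-profiling sketch. The criteria, weights,
    # and flagging threshold are hypothetical examples.

    def risk_score(case: dict) -> int:
        """Sum the weights of the error-prone characteristics a case exhibits."""
        score = 0
        if case.get("earned_income", 0) > 0:          # earnings often fluctuate
            score += 2
        if case.get("household_size", 1) >= 5:        # larger households
            score += 1
        if case.get("recent_address_change", False):  # shelter costs may be stale
            score += 1
        return score

    def flag_for_review(case: dict, threshold: int = 3) -> bool:
        """Flag the case for more rigorous review if it meets the threshold."""
        return risk_score(case) >= threshold

    example = {"earned_income": 1200, "household_size": 5, "recent_address_change": False}
    print(flag_for_review(example))  # True: score of 3 meets the threshold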

How you were selected: We conducted an online survey of all SNAP State agencies and then worked with FNS to select six State agencies for more in-depth case studies on their use of case-profiling tools. Your SNAP State Director identified you as someone who would have valuable input about your State agency’s case-profiling tool.

Risks and privacy: We use all data we collect only for the research purposes we describe. FNS knows which State agencies were asked to participate in each case study but does not know the names or job titles of the individuals interviewed. We will report the results of these interviews for each State agency, but your name will not be linked to your responses. In our reports, we may include direct quotes, but they will be presented without the speaker’s name or title. FNS will receive a redacted copy of the transcript of this interview that has been stripped of identifying information, except for the name of your State agency.

Study costs and compensation: There is no cost to you to participate apart from the time you spend with us for this interview, and there is no compensation. The interview takes 90 minutes.

Voluntary participation: Your participation is entirely voluntary. Refusal to participate will not have any impact on your position, your State agency, or nutrition programs. You may take a break, skip questions, say something off the record, or stop participating at any time.

Questions: If you have questions about your rights and welfare as a research participant, please call the Westat Human Subjects Protections office at 1.888.920.7631. Please leave a message with your first name; the name of the research study you are calling about, which is the SNAP Risk Assessment study; and your phone number, including the area code. Someone will return your call as soon as possible.

We have planned for this discussion to last 90 minutes, until [time]. Is that still okay?

With your permission, I would like to record this discussion to help us fill any gaps in our written notes. The recordings, transcripts, and any notes we have will be stored on our secure server and will be destroyed after the project is complete. FNS will not receive any audio recordings.

Do you have any questions? [Answer all questions]

May I turn on the audio recorder now? [Turn on audio recorder if respondent gives consent]

Now that the audio recorder is on, do you agree to participate? [Pause for response]

And do you consent to be audio recorded? [Pause for response]

Warmup and Context

  1. To start, please tell me how long you have worked at your [agency/organization/office] and what your responsibilities are.

  2. For the rest of this discussion, we will be talking mostly about the [tool name] that your State agency provided information about in the online survey. What was the nature of your involvement with [tool name]?

[Probe: Designed it, built it, tested it, promoted it, other?]

  3. In the survey, the State agency said the [tool name] is a [read survey response A18] that flags SNAP cases at risk of payment error using data from [read survey response A20]. Does that description still seem accurate?

    a. [If no] How would you revise the description?

  4. Has the [tool name] been modified in any way since it was first implemented?

[If yes]

    a. Please explain how it evolved.

[Probe: When did it evolve? Who initiated those changes?]

    b. What were the reasons for those changes?

[Probe: Prompted by staff feedback, review of data, civil rights complaint, other?]

Developing the Tool

I want to understand why and how the [tool name] was designed and built.

  5. What motivated your State agency to develop the [tool name]?

  6. The survey indicates that the following types of staff were involved in designing the tool: [read survey response A5]. How was it decided who would design it?

    a. [If A5 = multiple responses] How did the collaboration go between those different staff?

    b. What suggestions do you have for similar teams trying to create a tool like this?

    c. What challenges arose during the design phase?

  7. [If A5 = vendor/contractor] How much input did the State agency have in how the tool was developed?

    a. Did the vendor offer a premade case-profiling tool, or did they have to create your State’s tool from scratch?

      i. What were the pros and cons of that?

    b. How much input did the State agency have on the final algorithm for the tool?

  8. To figure out which case characteristics the tool should focus on, how did you identify the sources of payment errors?

[Probe: Analyzed SNAP Quality Control data, vendor/contractor data, other?]

    a. [If they analyzed data to inform those decisions] What sort of analysis did you do on those data?

[Probe: Descriptive statistics, modeling, machine learning? An illustrative sketch of one such analysis follows this question.]

      i. Where were the data pulled from?

[Probe: Centralized SNAP State database, local agency databases, other?]

      ii. What challenges arose when using those data to inform how the tool would be designed?
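
[Note: One common form the analysis in question 8a can take is modeling QC review outcomes against case characteristics. The Python sketch below is a hypothetical illustration using scikit-learn; the file name and column names are invented, not any State's actual data layout.]

    # Hypothetical sketch: estimate which case characteristics predict
    # payment error, using an extract of completed QC reviews.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    qc = pd.read_csv("qc_reviews.csv")  # assumed QC extract; file name is invented

    features = ["earned_income", "household_size", "shelter_deduction"]
    X = qc[features]
    y = qc["payment_error"]  # 1 if the QC review found an error, else 0

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Positive coefficients point to characteristics associated with higher
    # error risk, all else equal; these could inform the tool's variables.
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")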

Now I’d like to ask about the specific information the [tool name] looks for.

  9. My understanding from the survey is that, when the [tool name] tries to identify which cases are at high risk of payment error, it looks at [read survey response A9–A13, A15a]. Did I capture that information correctly?

    a. [If no] What information does the [tool name] look at to flag SNAP cases at high risk of payment error?

    b. Is there any documentation you could share on how each variable is operationalized in terms of whether it’s categorical, continuous, or measured in some other way?

[Note: Make a note to follow up on this question at the end, and request that documentation]

  10. Can you recall how the decision was made to focus on those variables?

    a. Who made the decision?

    b. What, if anything, would you change about the variables the [tool name] focuses on?

    c. Can you recall how the decision was made to operationalize each variable, that is, whether it’s categorical, continuous, or measured some other way? [An illustrative sketch follows this question.]

      i. Who made the decisions?

      ii. What, if anything, would you change about how the variables are operationalized?
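
[Note: "Operationalizing" a variable (question 10c) means deciding its measurement form: a variable can be used as a continuous measure or binned into categories, and that choice affects how a tool scores cases. The Python sketch below shows the same hypothetical value encoded both ways; the variable and cut points are invented examples.]

    # Hypothetical example: the same raw value, operationalized two ways.
    income = 1250.0  # continuous: the tool uses the dollar amount as-is

    # Categorical: bin the amount into bands before the tool scores it.
    if income == 0:
        income_band = "none"
    elif income < 1000:
        income_band = "low"
    else:
        income_band = "moderate_or_high"

    print(income, income_band)  # 1250.0 moderate_or_high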

  11. Were any variables considered for the tool that you didn’t end up using?

[If yes]

    a. Which variables?

    b. Why did you decide not to use them?

  12. Have the variables the [tool name] focuses on changed over time? If yes, how?

[Probe to understand whether the changes involved focusing on different variables altogether or changing how those variables were measured (continuous, categorical, etc.)]

    a. Why were the changes made?

[Probe to understand whether staff learned they needed to make adjustments as a result of monitoring or testing of the tool]

  13. [If survey A7 NA] Tell me about how the [tool name] was tested before going live.

[Probe: Were they looking at the overall accuracy of the tool? Equity of the tool across subgroups? User-friendliness?]

    a. Who did the testing?

    b. Did those early tests reveal anything that needed to be fixed?

    c. Was the tool also tested after going live? If yes, how?

[Probe: What were they looking for with those tests? Accuracy, equity, user-friendliness?]

    d. Did anything need to be fixed when testing the tool after it went live?

  14. If a State wanted to explore whether the [tool name from A1a or A1b] flags SNAP cases at risk of a payment error in a way that unintentionally affects a particular race, ethnicity, gender, or other protected class more than others, how do you think they could go about that?

[Note: For example, a tool may disproportionately flag certain ethnic groups (e.g., Hispanic households) if it looks for households with 8+ people. A sketch of a simple disparity check appears after question 15.]

    a. Was that something your team considered during testing? Why or why not?

[If tested for unintentional effects]

      i. How did the team go about that?

      ii. What were the findings?

      iii. What changes, if any, were made after reviewing the findings?

      iv. What terminology did you use to discuss this type of disproportionate flagging of certain protected classes? Unintentional bias? Something else?

  15. When you think about the whole development and testing process, to what extent did the team discuss the tool’s potential for disproportionately flagging protected classes?

[If discussed]

    a. How did the team define ‘disproportionately flagging protected classes’ in this context?

    b. How did these considerations factor into the tool’s construction?

[If not discussed]

    a. Did the team consider whether the tool might be more accurate for some subgroups than others?
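
[Note: One simple way to explore the disproportionate flagging discussed in questions 14 and 15 is to compare the tool's flag rate within each subgroup against its overall flag rate. The Python sketch below uses invented data; any cutoff for what counts as disproportionate would be a policy judgment the sketch does not make.]

    # Hypothetical disparity check: compare each subgroup's flag rate to the
    # overall rate. A ratio well above 1 suggests disproportionate flagging.
    import pandas as pd

    cases = pd.DataFrame({
        "group":   ["hispanic", "hispanic", "non_hispanic", "non_hispanic", "non_hispanic"],
        "flagged": [1, 1, 0, 1, 0],
    })

    overall_rate = cases["flagged"].mean()  # 0.60 in this toy data
    by_group = cases.groupby("group")["flagged"].mean()

    for group, rate in by_group.items():
        print(f"{group}: flag rate {rate:.2f}, ratio to overall {rate / overall_rate:.2f}")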


Implementing the Tool

My next set of questions will help me better understand how the [tool name] was actually implemented.

  16. How did you develop the procedures for using the tool?

[Note: If the tool was a checklist, these procedures may relate to using the checklist to flag a case and conducting any follow-up steps. If the tool was an algorithm, these procedures may have been a written explanation of how and when the tool flags cases and any follow-up steps for staff on the flagged cases.]

    a. Who was responsible for developing those procedures?

  17. Was any training conducted to help staff understand the [tool name] and how to use it?

[If yes]

    a. Who led the training?

    b. Who attended the training?

[If they mention local office staff, clarify whether it was at the supervisor/manager level or the frontline worker level.]

    c. What did the training cover?

  18. My understanding from the survey is that the [tool name] flags SNAP cases thought to be at risk of payment error at [read survey response A19]. Is that correct?

    a. [If no] When does the [tool name] flag a SNAP case thought to be at risk of payment error?

[Probe: During certification process, after certification but before benefits are issued, after certification and before recertification, during recertification, other?]

  19. If you could, would you adjust the [tool name] to flag cases at a different point in the process?

[If yes]

    a. When would you prefer a case be flagged?

    b. Why would that change be helpful?

  20. After the tool flags a case as being at risk of payment error, what is supposed to happen next?

    a. What is the timeframe in which those steps have to occur?

    b. What kinds of staff are involved?

[Probe: Local office staff, State-level staff?]

    c. What percent of the time would you estimate that the staff are able to complete those steps exactly as they are spelled out?

      i. [If not 100% of the time] What makes it difficult for staff to complete those follow-up steps on cases that are flagged?

  21. [If vendor created tool] If an aspect of the tool needs to be updated, is the vendor responsible for doing that or is it someone else?

    a. How quickly are those updates typically made?

    b. What challenges arise when making those updates?

    c. What helps that process go smoothly?

  22. What data, if any, are tracked on what happens to the cases the [tool name] flags?

[Note: We are asking whether they track any follow-up steps taken for these cases, such as efforts to find additional documentation on the household or calls to the household to ask follow-up questions. A sketch of a simple outcome summary follows this question.]

    a. Who tracks those data?

    b. [If tool is used after benefit issuance, per survey question A19] Do the data indicate whether the flagged cases were actually found to have payment errors?
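
[Note: Where outcome data are tracked (question 22), a basic summary is the tool's hit rate: the share of flagged cases later found to have a payment error. The Python sketch below is a hypothetical illustration; the field names are invented.]

    # Hypothetical sketch: summarize outcomes for cases the tool flagged.
    flagged_cases = [
        {"case_id": "A1", "error_found": True},
        {"case_id": "B2", "error_found": False},
        {"case_id": "C3", "error_found": True},
    ]

    hits = sum(c["error_found"] for c in flagged_cases)
    hit_rate = hits / len(flagged_cases)
    print(f"{hits} of {len(flagged_cases)} flagged cases had errors ({hit_rate:.0%})")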

  23. What, if anything, do you report on the cases the [tool name] flags?

    a. What information is shared in these reports?

    b. Who receives these reports?

[Probe: Local office staff, State-level staff, FNS staff?]

    c. What do the recipients do with this information?

    d. Would you be able to share the latest of these reports with us?

[Note: Ask for this report again at the end of the interview]

  24. Apart from what you just mentioned, do you know of any other ways the State agency uses the data on the cases flagged as being at risk of payment error?

[Probe to understand if the data are used to identify local offices that need additional training.]

  25. What do you believe are the biggest challenges to implementing the [tool name]?

[Probe: IT issues, staffing challenges, training, other?]

  26. We know this can be hard to determine, but have you been able to ascertain whether the tool has had an impact on payment error rates, either good or bad?

[Probe to understand whether the impact is their perception or based on data.]

    a. [If tool had positive impact on error rates] Thinking back on the work involved in developing, testing, and implementing the tool, to what extent might those costs be balanced out by improvements to payment accuracy?

  27. While the tool [is/was] in use, what other strategies, if any, [has/had] the State agency used to try to reduce payment error rates?

  28. If you were to talk to another SNAP State agency considering implementing a similar tool, what advice would you give them before they roll it out?

[If applicable] Discontinuing the Tool

Now I’d like to ask a few questions about discontinuing the [tool name].

  29. In what year was the tool discontinued?

  30. Tell me how the decision was made to stop using the [tool name].

    a. Who made the decision?

    b. Were any data considered when making the decision? If yes, explain.

  31. Have you been able to ascertain whether discontinuing the tool had an impact on payment error rates, either good or bad?

[Probe to understand whether the impact is their perception or based on data.]

State/Local Context and Wrap-Up

This information has been very helpful. These last few questions will give me a little more understanding of the State and local context before we wrap up.

[Ask questions 32–35 if respondent is a State-level staff person]

  32. Do you feel that your local offices have enough qualified staff to review cases and make eligibility determinations? Why or why not?

    a. [If not enough staff] How long have local offices been short-staffed?

[Probe: Since before COVID or more recently?]

  33. Do you feel your local offices have enough funding to properly review cases and make eligibility determinations? Why or why not?

  34. We know that COVID-19 drastically affected what were formerly standard practices when moving a SNAP case through the eligibility determination process. For instance, SNAP offices largely stopped conducting certification interviews. How else did COVID-19 change SNAP processes in ways that may have affected the State agency’s payment error rate?

[Note: Be clear on whether the impact on error rates was positive or negative]

  35. [Ask if tool was implemented between 2017 and 2019, per survey questions A3 and B4] We see from the official payment error rates FNS released that your State agency’s payment error rate ranged from [X] to [Y] between 2017 and 2019. Can you think of anything happening at your State agency during that time that may have affected the payment error rate?

[Probe: New management information system, policy change, corrective action plans, significant staffing changes, other?]

[Note: Be clear on whether the impact on error rates was positive or negative]

  36. Are there any key challenges to successfully building or implementing tools like [tool name] that we haven’t already discussed?

  37. Can you think of anything that makes these tools more accurate in identifying SNAP cases at risk of payment error?

We’ve reached the end of the interview. Thank you so much for taking the time to talk with us and share your experiences. The information you provided gave us valuable insights into how tools like [tool name] work.

[If applicable] I recall you mentioning that you would be willing to share [documents] with me. Those would be helpful to see, so thank you for offering to send them. You can send them to me at [email address]. I can also set up a secure FTP site to receive the materials if the documents contain identifying information.

This information is being collected to provide the Food and Nutrition Service (FNS) with key information on case-profiling tools used by SNAP State agencies. This is a voluntary collection, and FNS will use the information to examine risk assessment tools in SNAP. This collection requests personally identifiable information under the Privacy Act of 1974. According to the Paperwork Reduction Act of 1995, an agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a valid OMB control number. The valid OMB control number for this information collection is 0584-####. The time required to complete this information collection is estimated to average 1.75 hours (105 minutes) per response. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to U.S. Department of Agriculture, Food and Nutrition Service, Office of Policy Support, 1320 Braddock Place, 5th Floor, Alexandria, VA 22306 ATTN: PRA (0584-####). Do not return the completed form to this address. If you have any questions, please contact the FNS Project Officer for this project, Eric Williams, at [email protected].
