
Rapid Uptake of Disseminated Interventions (RUDI) Evaluation
Supporting Statement B

10/20/2023

OMB Control No. 0906-XXXX
New Information Collection Request

  1. Collections of Information Employing Statistical Methods

The HIV/AIDS Bureau (HAB), part of the Health Resources and Services Administration, requests approval from the Office of Management and Budget (OMB) for a new information collection, the Rapid Uptake of Disseminated Interventions (RUDI) evaluation. The goals of HAB's mixed-methods RUDI evaluation are to systematically assess (1) how, where, and why HAB products are accessed and used by recipients of Ryan White HIV/AIDS Program (RWHAP) funding and (2) the usefulness and value of the disseminated resources and products created by HAB. The findings from the RUDI evaluation will help HAB maximize the uptake and impact of its disseminated resources and products to end the HIV epidemic in the United States.

This application seeks approval for the evaluation's data collection activities, as follows: (1) a one-time national survey of all RWHAP providers and subrecipients (RUDI-P/S) (hereafter referred to as 'RWHAP providers'), (2) a one-time national survey of all RWHAP Part A and B administrative recipients (RUDI-R) (hereafter referred to as 'RWHAP recipients'), (3) virtual site visits (interviews) to a sample of 40 RWHAP provider sites, (4) interviews with a sample of 20 RWHAP recipients, and (5) interviews with 8 AIDS Education and Training Center (AETC) grantees. The mixed-methods approach is necessary to meet the goals of the study because the nature of the research questions requires both quantitative, population-level survey data and qualitative, in-depth interview data. Survey methods will be used to gather data from a nationwide cohort of RWHAP providers and administrative recipients about how, where, and why they access (or do not access) and use (or do not use) intervention resources. These data will be used to statistically estimate the uptake and use of intervention resources. The qualitative interviews will allow us to gain an in-depth and nuanced understanding of the usefulness and value of intervention resources. Although the qualitative findings are not generalizable, the interviews will yield real-world examples of how providers, recipients, subrecipients, and AETC staff interact with and use intervention resources to help clients living with HIV.

We describe the data collection activities in this supporting statement. We briefly describe other activities under this project, such as using RWHAP Services Report (RSR) and Dental Services Report (DSR) data, Google Analytics data, and other available data, in Supporting Statement A to give OMB a sense of the entire scope of RUDI evaluation activities.

A. Respondent universe and sampling methods

1. RUDI-P/S Survey

HAB will conduct a web-based survey of RWHAP providers funded directly (Part C, D, and F recipients) and/or indirectly (Part A and B subrecipients) by the RWHAP. The primary purpose of the survey is to obtain information on (1) the extent to which RWHAP providers access HAB and non-HAB intervention resources, (2) the channels they use to access them, (3) the contribution the resources make to implementation success, (4) opportunities to strengthen the effectiveness of HAB's dissemination activities, and (5) opportunities to strengthen the impact of HAB's resources and products on care delivery and health outcomes. We will exclude from the survey RWHAP providers funded only for legal support or financial assistance, because HAB does not focus its dissemination resources on those services. We will also exclude RWHAP providers involved only in evaluation or technical assistance (TA) activities, because they do not deliver care or services to clients.

We will field the web-based survey with a census of 2,131 RWHAP providers, anticipating approximately 1,066 respondents (~50 percent response rate). Because we are employing a census, sampling methods are not applicable. A census is preferred because RWHAP providers and subrecipients are diverse, with unique characteristics and needs based on their communities and clients; we anticipate that their access to and use of intervention resources vary accordingly. Sampling might yield findings representative of only a subset of RWHAP providers and subrecipients; a census increases our chances of capturing the widest possible range of experiences. We will obtain contact information for the universe of RWHAP providers from the 2021 RSR and DSR files (the latest year for which data are available). Once the survey period ends, we will compare RUDI-P/S responders to non-responders on variables included in the 2021 RSR and DSR files that may affect access to and use of intervention resources (for example, provider type and number of clients served). If responders differ significantly from non-responders, we will develop non-response weights.
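To illustrate the planned responder/non-responder comparison, the short R sketch below tests whether response status is independent of two frame characteristics. It is a minimal sketch only: the file name and variable names (provider_type, clients_served, responded) are hypothetical stand-ins for fields on the RSR/DSR-based frame.

    # Minimal sketch: compare RUDI-P/S responders and non-responders on
    # frame characteristics (hypothetical file and variable names)
    library(dplyr)

    frame <- read.csv("rudi_ps_frame.csv")  # one row per provider in the census,
                                            # with a 0/1 responded flag added after
                                            # the field period closes

    # Chi-square test: is response status independent of provider type?
    chisq.test(table(frame$provider_type, frame$responded))

    # Bin a continuous frame variable into quartiles and repeat the test
    frame <- frame %>% mutate(client_volume = ntile(clients_served, 4))
    chisq.test(table(frame$client_volume, frame$responded))

If tests like these flag significant differences between responders and non-responders, we would develop the non-response weights described above.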

RWHAP providers responding to the RUDI-P/S survey will form the sample frame for the virtual site visits, as we describe below. The RUDI-P/S Survey can be found in Appendix A.

2. RUDI-R Survey

HAB will conduct a web-based survey of all Part A and B recipients funded by the RWHAP. The primary purpose of the RUDI-R survey is to obtain information on (1) the extent to which RWHAP recipients access and use HAB and non-HAB intervention resources, (2) the channels they use to access them, (3) the contribution the resources make to internal knowledge and capacity, (4) the extent to which the resources are used to support providers and subrecipients as they implement interventions for clients living with HIV, and (5) opportunities to strengthen the effectiveness of HAB's dissemination activities.

We will field the web-based survey with a census of 110 RWHAP recipients, anticipating approximately 56 respondents (~50 percent response rate). Like RWHAP providers, recipients have unique characteristics and needs based on their communities and clientele and, further, on those of the subrecipients with which they engage. Sampling might yield findings relevant to only a subset of RWHAP recipients; a census increases our chances of capturing the widest possible range of experiences.

We will obtain contact information for the universe of RWHAP recipients from an administrative list provided by HAB. Once the survey period ends, we will compare RUDI-R responders to non-responders on variables included in the 2021 RSR and DSR files that may affect access to and use of intervention resources (for example, type of funding and number of clients served). If responders differ significantly from non-responders, we will develop non-response weights.

RWHAP recipients responding to the RUDI-R survey will form the sample frame for the interviews with Part A and B administrative recipients, as we describe below. The RUDI-R Survey can be found in Appendix B.

3. Virtual site visits with RWHAP providers

We will conduct virtual site visit interviews with individual informants or small groups of informants (depending on the respondents’ preferences) over Webex or Zoom. This will allow us to obtain views from multiple staff in the RWHAP provider organization. Interviews will yield more in-depth and nuanced information than we can obtain through a short survey about the ways that RWHAP providers access and use HAB and non-HAB intervention resources and products, the contribution of these materials to effective implementation of care delivery strategies, the perceived effects of the intervention on organizational capacity, and opportunities to strengthen the impact of HAB resources on care delivery and outcomes. We also plan to engage RWHAP providers that accessed but did not use TargetHIV resources to gain an in-depth understanding of why resources were not used and what would have made them useful.

RUDI-P/S respondents will be categorized into one of three groups, as follows:

  1. Group 1: RWHAP providers that accessed and used TargetHIV resources.

  2. Group 2: RWHAP providers that did not access or use resources on TargetHIV but did access and use a non-TargetHIV resource.

  3. Group 3: RWHAP providers that accessed, but did not use, a TargetHIV resource, and did not use any resources from non-TargetHIV sites.

To help ensure that RWHAP providers participating in the interviews are nationally representative, we plan to stratify providers and subrecipients, within each of the three groups, by two characteristics: provider type (e.g., community health center, hospital or university-based clinic, etc.) and geographic region (four census regions). We will then draw a stratified random sample of 40 RWHAP providers and subrecipients. The allocation across the three groups will either be equal allocation (about 13 per group) or weighted toward Group 1 if that group proves to be large enough based on the survey responses, with final allocation determined in consultation with HAB after the survey is complete.

After the sample is drawn, we will review the RSR and DSR data for selected RWHAP providers. We will focus on data that may be predictive of resource access and use (that is, characteristics that could distinguish users from non-users and that HAB can use to improve access among providers least likely to download relevant disseminated materials). These include services provided (medical and non-medical), client population served, and size of client population. If the drawn sample is not sufficiently diverse, we will redraw the sample.
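A minimal R sketch of such a stratified draw appears below, assuming hypothetical variable names (group, provider_type, census_region) on the RUDI-P/S respondent file; the proportional allocation shown is one possible implementation of the design described above, not the committed procedure.

    # Minimal sketch: stratified random sample of ~40 providers/subrecipients
    # (hypothetical data and variable names)
    library(dplyr)

    respondents <- read.csv("rudi_ps_respondents.csv")  # hypothetical file

    set.seed(20231020)  # fixed seed so the draw is reproducible
    sample_40 <- respondents %>%
      mutate(stratum = interaction(group, provider_type, census_region, drop = TRUE)) %>%
      group_by(stratum) %>%
      slice_sample(prop = 40 / nrow(respondents)) %>%  # proportional allocation
      ungroup()

    # slice_sample() rounds down within strata, so the draw can land slightly
    # under 40; top up from the largest strata to hit the target, and redraw
    # entirely if the sample is not sufficiently diverse (as described above).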

The guide for the virtual site visits with RWHAP providers is provided in Appendix C.

Interviews with RWHAP Part A and B Recipients (administrative entity only). Part A and B recipients use HAB resources both directly, to improve their own knowledge and help achieve Ryan White program goals, and to support RWHAP providers in their efforts to improve services for their clients. Through these interviews, which will last up to 60 minutes each, the research team will seek to understand how this audience accesses and uses HAB resources (which may differ from how providers and AETCs access and use them) and will collect the observations and learnings recipients report from their subrecipients (providers) regarding use of these materials.

We will select 20 administrative recipients for telephone interviews, 12 Part B and 8 Part A. We will draw the sample from the organizations that completed the RUDI-R Survey. To ensure a diverse mix of Part B administrative recipients, we will consider two factors likely to be associated with states’ and territories’ use of disseminated intervention materials: (1) number of individuals diagnosed with HIV and (2) geographic region (using the four census regions). We will include all 50 states, the District of Columbia, and the two territories with more than 500 diagnosed HIV cases (Puerto Rico and the Virgin Islands). We will first rank order the states and territories by number of individuals aged 13 and older diagnosed with HIV based on 2020 HIV prevalence data from the Centers for Disease Control and Prevention, and then divide them into high, medium, and low terciles. We will randomly select 4 states or territories from each tercile, striving for regional diversity across the terciles. We will also select a back-up state or territory from each tercile, in case the Part B recipient in a selected state cannot participate. We propose to oversample states in the South with a substantial rural burden by purposively selecting one of the seven states targeted by the Ending the HIV Epidemic in the United States initiative from both the high and medium terciles. This will ensure we capture the potentially unique needs and challenges they face accessing and using disseminated materials to improve the delivery of care.
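The tercile construction and random selection for the Part B recipients can be sketched in R as follows; the input file and variable names are hypothetical, and the purposive steps described above (regional balance and the Ending the HIV Epidemic substitutions) would still be applied by hand after the draw.

    # Minimal sketch: rank jurisdictions into terciles and draw 4 primary
    # selections plus 1 back-up per tercile (hypothetical data and names)
    library(dplyr)

    juris <- read.csv("cdc_hiv_prevalence_2020.csv")  # 53 eligible jurisdictions

    juris <- juris %>%
      mutate(tercile = ntile(desc(cases_13plus), 3))  # 1 = high, 3 = low

    set.seed(20231020)
    part_b_selection <- juris %>%
      group_by(tercile) %>%
      slice_sample(n = 5) %>%                          # 4 primary + 1 back-up
      mutate(role = c(rep("primary", 4), "backup")) %>%
      ungroup()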

For Part A administrative recipients, we will select one eligible metropolitan area or transitional grant area from within each of the 4 states chosen from the high-volume tercile, described above, to assess how Part A and Part B recipients collaborate to disseminate resource materials among their subgrantees. We will select four additional metropolitan areas from 4 randomly selected states in the medium tercile, one from each census region; the four states already selected in the medium tercile for the Part B recipient interviews will be removed before this draw. We will select a back-up metropolitan area in each of the 4 states, in case one or more of the initially selected areas are unable to participate. We will select no more than one Part A administrative recipient per state.

We believe this sampling approach will provide insights that complement the RUDI-R survey findings and offer an additional level of richness to the study results and recommendations. Although the responses will not be statistically representative of all Part A and Part B administrative recipients, the proposed approach nonetheless achieves a diverse mix of recipients and perspectives (reflecting differences across regions, number of diagnosed HIV cases, and overlapping Parts A and B jurisdictions) appropriate to the level of resources available for this data collection effort.

The guide for interviews with RWHAP Part A and B Recipients is provided in Appendix D.

AETC Interviews. The AETCs are another key user audience for HAB dissemination products; without these interviews, the team would miss the perspectives of an important user group. The team will interview a key contact in each of the eight regional AETCs identified by HAB, for up to 60 minutes, to learn how they use HAB dissemination products themselves, how they use these resources to support RWHAP providers in their region, what they have heard from providers in their region about the use and usefulness of the resources, and their perspectives on what could be improved to make the resources more useful.

The guide for AETC interviews is provided in Appendix E.

Exhibit 1 provides an overview of the data collection activities, including the sampling approach and analysis methods, for the RUDI evaluation.


Exhibit 1. Sampling approach, topics, and analytic approaches

Features

Description

RUDI-P/S Survey

Sampling

Census of RWHAP providers with 1 response per provider. Assume 2,131 RWHAP providers with a 50 percent response rate (n = 1,066).

Topics

  1. Interventions implemented or replicated

  2. Extent of use of HAB and non-HAB resources

  3. Ways HAB and non-HAB resources were used

  4. HAB and non-HAB resources’ contributions to implementation success

  5. Opportunities to strengthen the impact of HAB resources

Analysis

All quantitative data will be analyzed using SAS or R. Descriptive statistics and multivariate analyses will be conducted to understand the characteristics of RWHAP providers and the relationship between their site-level characteristics and their responses. Quantitative analyses will use a threshold of p < 0.05 for determining statistical significance.

Precision (Degree of accuracy)

Assuming 1,066 respondents answer a binary question with half selecting one response and half selecting the other, the anticipated 95 percent confidence interval around that sample estimate of 50 percent will be +/- 3 percentage points (see the worked example following this exhibit).

Generalizability

We will compare the characteristics of RWHAP providers that responded with those of RWHAP providers that did not respond to determine the extent to which findings are generalizable to all RWHAP providers. Depending on the results of this nonresponse bias analysis, we might develop weights to adjust for nonresponse.

RUDI-R Survey

Sampling

Census of RWHAP Part A and B recipients with 1 response per recipient. Assume 110 RWHAP recipients with a 50 percent response rate (n = 56).

Topics

  1. Extent to which RWHAP recipients access and use HAB and non-HAB intervention resources

  2. Channels they use to access resources

  3. Contribution resources make to internal knowledge and capacity

  4. Extent to which resources are used to support providers as they implement interventions for clients living with HIV

  5. Opportunities to strengthen the effectiveness of HAB's dissemination activities

Analysis

All quantitative data will be analyzed using SAS or R. Descriptive statistics and multivariate analyses will be conducted to understand how and why RWHAP recipients access resources and how they use resources to support (or not support) subrecipients. Quantitative analyses will use a threshold of p < 0.05 for determining statistical significance.

Precision (Degree of accuracy)

Assuming 56 respondents answer a binary question with half selecting one response and half selecting the other, the anticipated 95 percent confidence interval around that sample estimate of 50 percent will be +/- 13 percentage points (see the worked example following this exhibit).

Generalizability

We will use any administrative data available to compare the characteristics of RWHAP recipients that respond with those of RWHAP recipients that did not respond to determine the extent to which findings are generalizable to all RWHAP recipients. Depending on the results of this nonresponse bias analysis, we might develop weights to adjust for nonresponse.

Virtual site visit interviews

Sampling

Stratified random sample of RWHAP providers/subrecipients responding to the RUDI-P/S Survey that (1) accessed and used TargetHIV resources, (2) did not access or use resources on TargetHIV but did access and use a non-TargetHIV resource, or (3) accessed, but did not use, a TargetHIV resource and did not use any resources from non-TargetHIV sites. Stratification will be based on provider type and geographic location. Replicate samples will be created and drawn upon if selected RWHAP providers opt not to participate or are unresponsive. We anticipate an average of 3 interviews at each of the 40 provider/subrecipient sites (n = 120 interviews).

Topics

  1. Interventions implemented

  2. Ways HAB and non-HAB resources were used

  3. HAB and non-HAB resources' contributions to implementation success

  4. Opportunities to strengthen the impact of HAB resources

Analysis

All qualitative data will be entered into NVivo 12 to allow for standardized coding by topic and theme. Dual coding (that is, two independent coders) will be completed for a subset of interview notes to establish reliability within the coding team. Using a thematic codebook and NVivo 12 software, we will identify common themes and examine divergence and convergence of themes across interviewee types and other jurisdiction-level characteristics.

Precision

Not applicable given qualitative nature of the data collection.

Generalizability

Qualitative findings are not intended to be generalizable to the full population.

RWHAP Recipient interviews

Sampling

Five large recipient sites selected with certainty, with approximately 3 interviews per site, plus a simple random sample of 15 of the remaining RWHAP recipient sites responding to the RUDI-R Survey, with one interview per site involving an average of 2 people; in total, 20 sites and 30 interviews involving approximately 45 individuals.

Topics

  1. Ways HAB and non-HAB resources were used

  2. Recipient observations and learnings from their engagement (or non-engagement) with subrecipients

  3. Opportunities to strengthen the impact of HAB resources

Analysis

All qualitative data will be entered into NVivo 12 to allow for standardized coding by topic and theme. Dual coding (that is, two independent coders) will be completed for a subset of interview notes to establish reliability within the coding team. Using a thematic codebook and NVivo 12 software, we will identify common themes and examine divergence and convergence of themes across interviewee types and other jurisdiction-level characteristics.

Precision

Not applicable given qualitative nature of the data collection.

Generalizability

Qualitative findings are not intended to be generalizable to the full population.

AETC Interviews

Sampling

1 interview at each AETC site (n=8).

Topics

  1. Ways HAB and non-HAB resources were used

  2. HAB and non-HAB resources' contributions to provider success in the region

  3. Opportunities to strengthen the impact of HAB resources

Analysis

All qualitative data will be entered into NVivo 12 to allow for standardized coding by topic and theme. Dual coding (that is, two independent coders) will be completed for a subset of interview notes to establish reliability within the coding team. Using a thematic codebook and NVivo 12 software, we will identify common themes and examine divergence and convergence of themes across interviewee types and other jurisdiction-level characteristics.

Precision

Not applicable given qualitative nature of the data collection.

Generalizability

Qualitative findings are not intended to be generalizable to the full population.

AETC = AIDS Education and Training Center; HAB = HIV/AIDS Bureau; RWHAP = Ryan White HIV/AIDS Program.
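The precision entries in Exhibit 1 follow from the standard margin-of-error formula for a proportion; the short R check below reproduces both figures.

    # Worked check of the Exhibit 1 precision figures: half-width of a
    # 95 percent confidence interval for a binary item split 50/50
    moe <- function(p, n, z = 1.96) z * sqrt(p * (1 - p) / n)

    moe(0.5, 1066)  # ~0.030 -> +/- 3 percentage points (RUDI-P/S Survey)
    moe(0.5, 56)    # ~0.131 -> +/- 13 percentage points (RUDI-R Survey)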

B. Procedures for the collection of information

1. RUDI-P/S and RUDI-R Surveys

We will program and deploy the RUDI-P/S and RUDI-R Surveys in QuestionPro over a six-week field period. QuestionPro allows us to design a visually appealing survey with integrated skip logic that guides each respondent through the instrument, yielding accurate, high-quality data. We have pilot tested both survey instruments to ensure that they function correctly and accurately capture data.

Each survey will take 15-20 minutes to complete, and no individual will be asked to complete more than one survey. Both surveys consist primarily of closed-ended questions, with a few open-ended questions at the end to capture respondents' direct suggestions for resources and for relevant improvements HAB could make (see surveys in Appendices A and B). We will use a 24-month lookback period for HAB and non-HAB resource access and use, which will allow us to focus on current practice patterns. Because of changes in familiarity with and access to technology, and because of constantly evolving intervention strategies, we believe providers' and recipients' access to and use of dissemination resources more than 24 months ago would not be helpful for understanding current (or future) access and use patterns.

Endorsement email. Ten to 14 days before invitation emails are sent for the survey, all recipients and providers in the survey census will receive an endorsement email from HAB leadership. The email will inform them about the survey and encourage their participation, legitimizing the data collection effort. The endorsement email offers two benefits. First, it alerts recipients and providers/subrecipients to the upcoming data collection effort. Second, it provides an opportunity for those receiving the email to identify a more appropriate contact, if applicable (or we might receive an undeliverable notice), allowing us to correct the contact information before sending the first invitation email. If any recipients or providers/subrecipients decline participation at this phase, we will remove them from the survey contact list.

Invitation email. We will then send an invitation email to each sampled RWHAP recipient and provider/subrecipient, using the person listed in the administrative files and RSR/DSR, respectively, as the point of contact. The email will include information about the RUDI evaluation and highlight the purpose of the survey. We will emphasize the importance of participating in the survey and make sure providers know that participation is voluntary. Because survey responses will be at the organization level (that is, the provider's site), we will include guidance in the email about who is most appropriate to complete the survey: the person most knowledgeable about the resources disseminated by HAB related to interventions or initiatives that their organization has used during the past two years. The invitation email will include a link to the web survey and reference a preferred survey completion date. Also, we will include links to the endorsement email from HAB leadership as well as a brief video explaining how the survey results contribute to the overall RUDI evaluation.

Reminder emails. Throughout the six-week data collection period, we will send up to five reminder emails to both recipients and providers, each with progressively more urgent language regarding participation and completion. We may tailor the reminder emails so that the wording differs depending on whether the recipient or provider started but did not complete the survey. Also, we may tailor emails for specific subgroups of providers lagging in response (for example, providers in rural areas or recipients with a low number of subrecipients). Our daily review of survey responses will inform the timing of the reminder emails.

Last chance email. In week five or six of the data collection period, we will send a last-chance email. This email will serve as a reminder and will include an attached pdf version of the survey instrument. We will give recipients and providers the option of completing the survey on paper and faxing or scanning it back to the data collection contractor. This often results in a small increase in the response rate, particularly with organization-level surveys to which multiple people contribute information.

Incentive. We will send the first 560 RUDI-P/S respondents and the first 28 RUDI-R respondents a $50 Amazon gift card as a thank-you for participating. The remaining responders will receive $25. The recipient or provider responding to the survey will determine how best to allocate the incentive among staff who provided input for the survey. Research is limited on the impact of incentives on survey response rates among RWHAP providers. However, in recent studies of school principals and physicians, $50 incentives outperformed incentives of lower amounts.1,2 Further, in an experiment embedded into the US Panel Study of Income Dynamics, the group that was offered $150 for early response had higher response rates than the group that was offered $75, regardless of completion time.3 The survey will also include an option to decline the incentive payment.

Data management. We will transmit web survey data in real time to the contractor's secure data server. We will generate daily reports so that we can assess, in a timely and ongoing manner, the overall response rate and the response rate for subgroups of providers. This will inform the timing of and language in the reminder emails. Also, we will review the data for the first 50 RUDI-P/S respondents and the first 5 RUDI-R respondents. This will allow us to detect any data quality issues in a timely fashion. If any issues are identified, we will close the survey, address the issue(s), and re-open the survey. After that point, we will examine data on a weekly basis, ensuring that data are being captured as intended.
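As one illustration of the daily reporting, the R sketch below computes overall and subgroup response rates from a hypothetical export; the file name and variables (completed, provider_type, census_region) are stand-ins, not the actual QuestionPro export format.

    # Minimal sketch: daily response-rate report, overall and by subgroup
    library(dplyr)

    status <- read.csv("questionpro_daily_export.csv")  # hypothetical: one row
                                                        # per sampled provider
                                                        # with a 0/1 completed flag

    mean(status$completed)  # overall response rate to date

    # Subgroup response rates, lowest first, to target tailored reminders
    status %>%
      group_by(provider_type, census_region) %>%
      summarise(n = n(), response_rate = mean(completed), .groups = "drop") %>%
      arrange(response_rate)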

All survey-related correspondence can be found in Appendix F.

2. Virtual site visit interviews and interviews with recipients and AETC providers

Recruitment. We will recruit 40 RWHAP providers to participate in the virtual site visit interviews and 20 Part A and B recipients to participate in individual interviews. All AETC providers will be invited to participate in individual interviews (n=8). To recruit sites to participate in the virtual site visits, we will send providers a visually appealing email highlighting the purpose of the interviews, stressing the importance of participating, and outlining the content for the discussion. For RWHAP providers, this upfront information will help the main point of contact listed on the RSR/DSR data file identify the appropriate staff members for us to interview. As with the survey, we will ask HAB leaders to send an endorsement email about two weeks before sending the recruitment email to emphasize the importance of the data collection effort to the Health Resources and Services Administration and encourage participation.

The recruitment email will ask for a simple emailed reply to express interest. Providers interested in participating will then receive an encouraging email with a request to identify the best days and times for the interview or interviews, as well as which staff are appropriate and willing to participate, and a list of topics to be covered. We will offer providers who opt not to participate an opportunity to email us with their reasons for nonparticipation; this information can be helpful for future data collection efforts. We will include a telephone number with all recruitment materials so that interested providers can respond and provide this information by phone, if preferred.

We will follow up electronically with nonresponding providers every three business days, up to three times, with a different message each time. If the provider does not respond, we will call once (and, if necessary, leave a voice mail). The phone follow-up can help us identify a new point of contact when the listed contact person no longer works at the clinic. If the provider remains nonresponsive, we will select a matched replacement from the replicate sample.

Scheduling and preparing for interviews. After a provider expresses an interest in participating, we will reach out within one business day to begin scheduling the interviews. To help plan an optimal interview schedule and develop a case-specific data collection plan, we will ask participating providers to provide some basic information about the interventions for which the disseminated materials were used during the preceding 12 months (for example, the type of interventions implemented and a brief overview of how they used the resource materials).

We will schedule the interviews on the days and times that best meet the needs of the provider site (for example, some busy clinics might prefer to speak with us during the early morning before they begin treating patients or in the early evening after they finish). We will not ask for more than an hour from any one individual.

Preparing for the interview. Before the interview, we will ask the provider for any off-the-shelf information they can provide about their interventions to help our team to prepare. We will also review the provider’s website and RSR/DSR data, if applicable, so that we understand the types of services it provides, the types of patients it serves, and the type of funding it receives from RWHAP, among other characteristics, before the interviews.

Conducting the interviews. For RWHAP provider sites participating in virtual site visits, we plan to conduct one to five interviews per provider site, with each interview lasting 15 to 60 minutes, depending on the individual’s role and experience using dissemination materials to improve the delivery of care. After we identify the interviewees, we will suggest an interview schedule that covers all the topics included in our master discussion guide (Appendix C). We will also allow overlapping questions across respondents because each might have divergent and valuable perspectives to share.

A two-person team (a researcher who will lead the interview and a research associate who will take notes) will interview people on our Webex or Zoom platform. We will record the interviews with the permission of the respondent. After we complete the interview, the notetaker will clean the notes and then the researcher will review them. When all interviews at a site are complete, a summary of findings will be drafted within five business days of the last interview. To the extent that the interviews for a given site have to occur on different days to accommodate the informants’ schedules, we will add the notes and summary for each interview to the shared folder created for that site as they become available.

For the individual interviews with RWHAP recipients and AETC providers, one staff member will conduct the interview, with a second taking notes. As with the virtual site visit interviews, we will audio record the interviews with the permission of the respondent. After we complete the interview, the notetaker will clean the notes and then the researcher will review them.

In addition to the qualitative information collected through the interviews, we will collect quantitative information, when available, on implementation metrics. This information will add depth to the provider’s story of how they accessed and used the resources and whether quantitative information was used to inform implementation of the care delivery intervention. For example, if the provider used the dissemination resources to increase the effectiveness of its outreach activities for a particular population, a useful implementation metric might be how many clients they engaged in care.



Compensating providers. The virtual interviews will require about three hours of staff time per RWHAP provider organization, and we have budgeted a $380 incentive per provider to offset their effort. Because the $380 incentive covers multiple interviews, we will allocate it across the individual informants proportional to the length of each interview, or give the full amount to the main point of contact and remind them that they can use the gift card to compensate the people who participated in the interviews. RWHAP recipients and AETC staff participating in interviews will receive a $125 incentive for their one hour of participation, consistent with the $125 hourly wage for individuals in similar positions (see Exhibit 5, Supporting Statement A). For virtual site visits that involve multiple people over multiple hours, a larger incentive is offered. A similar approach was used in a 2021 study that involved interviews with leaders in physician practices; incentives ranged from $500 to $1,500 depending on practice size.4

Data management. With the interviewees' verbal consent, we will audio record the interviews. We will transcribe the recordings and analyze the transcripts using NVivo, a qualitative data analysis software package. If the interviewee does not consent to be recorded, we will take written notes electronically during the interview. We will store the recordings, transcripts, and notes on a secure server with access limited to project staff.
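Exhibit 1 notes that a subset of interview notes will be dual coded to establish reliability within the coding team, but it does not name a statistic; Cohen's kappa is one common choice, sketched below with the irr package and made-up ratings.

    # Illustrative only: two-rater agreement on whether each of 20 excerpts
    # received a given code (1/0), quantified with Cohen's kappa
    library(irr)

    coder_a <- c(1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1)
    coder_b <- c(1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1)

    kappa2(cbind(coder_a, coder_b))  # two-rater Cohen's kappa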

Outreach materials and correspondence associated with the virtual site visit interviews and interviews with recipients and AETC providers can be found in Appendix G.

C. Methods to maximize response rates and deal with nonresponse

1. RUDI-R and RUDI-P/S Surveys

We will use five strategies to maximize survey response rates. First, both web surveys will contain primarily closed-ended questions, and each will take no more than 15-20 minutes to complete. Second, the web surveys will have integrated skip logic so that respondents are not exposed to or confused by non-applicable questions. Third, we will run response rate reports daily to assess completion and identify additional outreach opportunities to encourage participation as needed (reminder and last-chance emails). The response rates and a comparison of respondent versus non-respondent characteristics (from the RSR/DSR files) will let us know whether data collection can cease before the end of the planned six-week field period. Fourth, we will offer an incentive of up to $50 for survey completion. Research is limited on the impact of incentives on survey response rates among RWHAP providers. However, in recent studies of school principals and physicians, $50 incentives outperformed incentives of lower amounts.5,6 Further, in an experiment embedded into the US Panel Study of Income Dynamics, the group that was offered $150 for early response had higher response rates than the group that was offered $75, regardless of completion time.7 Finally, survey topics are relevant to potential respondents' daily work, which increases the likelihood that staff will participate.

We anticipate a minimum response rate of 50 percent for both the RUDI-R and RUDI-P/S surveys. We will perform a nonresponse bias analysis to determine whether we need to apply survey weight adjustments to reduce potential bias in the responses. We will perform all weighting work using SAS® software.
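The weighting itself is planned in SAS; purely for illustration, the R sketch below shows a weighting-class nonresponse adjustment of the kind such an analysis might produce, with hypothetical class variables and file names.

    # Minimal sketch: weighting-class nonresponse adjustment (hypothetical
    # file and class variables)
    library(dplyr)

    frame <- read.csv("rudi_ps_frame.csv")  # full census with 0/1 responded flag

    weights <- frame %>%
      group_by(provider_type, census_region) %>%            # weighting classes
      mutate(nr_weight = ifelse(responded == 1, 1 / mean(responded), NA_real_)) %>%
      ungroup()

    # Respondents in classes with lower response rates receive larger weights,
    # so each class contributes to estimates in proportion to its frame size.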

2. Virtual site visits, recipient interviews, AETC interviews

We will minimize nonresponse among the site visit and individual interview participants by (1) providing a letter of support from HAB leaders; (2) providing clear and compelling recruitment materials (and following up with additional emails to non-responders); (3) offering multiple ways to express interest in participating; (4) limiting interviews to one hour, with most lasting no longer than 30 minutes; (5) scheduling interviews at a day and time convenient for the interviewee; and (6) offering an incentive payment to providers. Offering interviews virtually, rather than in person, allows greater flexibility for provider participants.

For all data collections listed above, if response is suboptimal, we may add additional reminder emails, either from HAB or from Mathematica, to further legitimize the survey effort and encourage response. Any added emails will use the same language as that used in emails contained within Appendix F (survey correspondence) and Appendix G (virtual site visit, recipient, and AETC interview correspondence).

D. Tests of procedures or methods to be undertaken

We pilot tested the RUDI-P/S and RUDI-R surveys, the RWHAP provider discussion guide, the Part A and B administrative recipient discussion guide, and the AETC discussion guide. Pilot test participants were selected from lists provided by HRSA project officers. Participants were selected so that there was diversity regarding geography, provider type (health department, community-based organization, etc.), and the RWHAP funding source. Below, we first describe the pilot test process. Then, we present the findings and resultant changes made to the survey instruments, discussion guides, and sampling and interview strategies.

  1. RUDI-P/S Survey. Nine geographically diverse RWHAP providers with differing organizational characteristics were asked to pilot test the web-based RUDI-P/S Survey. After each agreed to participate, we emailed a hyperlink to the survey on a rolling basis. After the first four RWHAP providers completed the survey, we reviewed the results and made a few changes to the survey instrument to improve the quality of the data collected and users’ experience completing the survey. After making updates, we emailed the remaining five providers a hyperlink to the updated version of the RUDI-P/S Survey.

  2. RUDI-R Survey. About one week after contacting the providers, we reached out to four Part A or B recipients to request their assistance in testing the web-based RUDI-R Survey. Recipient recruitment began after provider recruitment. Because the RUDI-P/S and RUDI-R surveys had some questions in common, we applied to the RUDI-R Survey several of the changes we had already made to the RUDI-P/S Survey. After updating the RUDI-R Survey, we emailed the four recipients a hyperlink to the survey.

  3. RWHAP provider discussion guide. We selected five of the nine RWHAP providers who completed the RUDI-P/S Survey pilot test to also participate in pilot testing the discussion guide. These five providers indicated on the RUDI-P/S Survey that they used resources from the TargetHIV website to improve care for people with HIV, which was the criterion we intended to use for selecting providers in the evaluation.

  4. Part A and B administrative recipient discussion guide. We pilot tested the recipient discussion guide with three Part A and B recipients.

  5. AETC discussion guide. We pilot tested the AETC discussion guide with one regional AETC.

Pilot test results for the RUDI-P/S and RUDI-R surveys

We pilot tested the RUDI-P/S and RUDI-R surveys to ensure that (1) question wording and response options were appropriate and clear, (2) the survey was easy to complete technologically, and (3) the survey burden estimate was accurate. Exhibit 2 describes the key findings from pilot testing the survey instruments and the subsequent changes made to improve data quality and reduce respondent burden.



Exhibit 2. Survey experiences and subsequent changes

Experience

Change(s)

  1. Interview participants required examples to understand the meaning ascribed to “resources” and “interventions” in the survey instruments.

  • Added screenshots from the TargetHIV website with circles around content denoting an intervention and a resource.

  • Because it is more intuitive to think of an intervention and its resources, we reordered the definitions so that intervention precedes resources.

  2. Reporting multiple interventions was time consuming, and having multiple windows open at one time was challenging for some respondents.

  • Requested detailed information about a single intervention or system-level initiative, rather than two.

  3. Providing URLs for interventions necessitated the opening of multiple Internet browser windows. Testers found the process difficult and raised concerns about inadvertent closing of the survey webpage.

  • Removed requests for URLs for interventions and resources but still allowed URLs to be provided in “Other (specify)” fields.

  • For interventions, added three items on type of intervention, area of the HIV continuum addressed, and target population. These data were originally going to be populated after the survey based on the URL provided by the respondent; with the removal of the URL item, we now ask respondents these questions directly.

  4. Reporting multiple resources, including URLs, was time consuming.

  • Requested the name of up to two resources along with information about how each resource is valuable.

  • Removed the request for a third resource.

  5. Length of time to complete the survey was excessive. Among the four providers in the first round of RUDI-P/S Survey pilot testing, one respondent took 13 minutes and one took 22 minutes; the remaining two took more than 30 minutes.

  • Limited inquiry to one intervention instead of two.

  • Requested the name and attributes of up to two resources rather than three.

  • Removed requirement to provide URLs for interventions and resources.

  • Cross-referenced content in survey with content in discussion guides and removed questions that could be adequately addressed during interviews.



Pilot test results for the discussion guides

We pilot tested the discussion guides to ensure that (1) questions were appropriate and easy to understand, (2) questions were relevant to the respondent's interaction with and use of resources, and (3) the interview burden estimate was accurate. Exhibit 3 describes the key findings from pilot testing the discussion guides and the changes made to better align questions and reduce respondent burden. Exhibit 4 contains information about changes to the strategy used to select participants and schedule interviews.

Exhibit 3. Discussion guides experiences and subsequent changes

RUDI PROVIDER DISCUSSION GUIDE


Experience

Change(s)

  1. A small number of providers who were Special Projects of National Significance (SPNS) recipients did use HAB resources to implement the types of interventions listed on TargetHIV, but their experience with HAB resources was related to the SPNS collaborative infrastructure and support.

  • For providers that were or are SPNS grantees, we will also ask about non-SPNS interventions.

  2. Some providers fully implemented one intervention and provided detailed information on that experience, and others partially implemented one or more interventions and provided more general information on their experiences.

  • Allowed flexibility in the protocol so the interviewer can use the protocol, as written, to gather detailed information about one intervention, if there is sufficient detail, or explore the use of resources for up to three different interventions (TargetHIV or other), with briefer discussions of each.

  3. Some providers could not remember the specific intervention resources they used, even if they remembered the intervention.

  • Plan to look up interventions listed in the survey response on the TargetHIV website and offer examples of intervention resources to help respondents with recall.

  4. Some providers benefited from receiving definitions and examples of interventions and intervention resources.

  • Added screenshots to the survey from the TargetHIV website with content denoting intervention and intervention resource.

  • Added “projects” and “strategies” as examples of equivalent terms in the definition.

  • Added additional examples of interventions and intervention resources to the discussion guide, for use by the interviewer.

  5. Interviewers needed to shorten or reword a couple of questions to make them easier to ask and understand.

  • Reworded Questions 8 and 9.

RUDI PART A and PART B ADMINISTRATIVE RECIPIENT DISCUSSION GUIDE

Experience

Change(s)

  1. A respondent explained that he thinks of providers as organizations that have a physician on staff and provide clinical care. He referred to those that provided wraparound services such as case management as subrecipients.

  • Updated references to "providers" in recruitment materials and survey instruments to "providers and subrecipients." Discussion guide questions still use the term "provider" as shorthand, but the term is defined to include subrecipients in the introduction.

  2. To answer questions, some recipients benefited from receiving definitions and examples of "intervention," "intervention resources," and "system-level initiatives." One respondent assumed too narrow a definition of those terms when responding to questions (for example, they assumed "intervention" included only evidence-based interventions). Another respondent did not understand what we were asking for when we referenced system-level initiatives.

  • Added screenshots to the survey from the TargetHIV website with circles around content denoting an intervention and an intervention resource.

  • Added “projects” and “strategies” as examples of equivalent terms in the definition.

  • Added definition of “system-level initiatives” and examples of system-level initiatives to the discussion guide.

  3. The question "Did your subrecipients have a specific care delivery need they were trying to address?" assumes the subrecipients originated a request for resources, but some recipients identified processes they wanted their subrecipients to implement to improve care and pushed that information down.

  • Reworded the question to remove the assumption that the subrecipients sought the resources (that is, “What aspect of care were the resources trying to address?”).

  4. For the discussion of non-HAB resources used, we found that the discussion guide missed an opportunity to ask about why the recipient might choose to use HAB versus non-HAB resources. We tested a question on that topic during an interview and it resulted in a good discussion of the pros and cons of resources from different sources.

  • Added question to the discussion guide about when and why recipients choose to use resources from HAB versus other sources.

RUDI AETC DISCUSSION GUIDE

Experience

Change(s)

  1. We found there might not be one person who can respond to both of the main topics in the discussion guide: (1) information flows from HAB to AETC and (2) information flows from AETC to providers. The AETC respondent said that the AETC regional central office is mostly administrative, and it might be appropriate to include staff from an AETC partner site that has provided training to clinics.

  • Updated introductory email to AETC to ensure we include the right people in the interview. We will start with the program director and ask about including staff that have provided training. This could be staff at the AETC partner site or the central office’s staff member who works with the Practice Transformation sites (that is, regional and local AETCs involved in projects to improve patient outcomes by integrating principles of the patient-centered medical home model and integrated HIV care and behavioral health services).

AETC = AIDS Education and Training Center; HAB = HIV/AIDS Bureau; RUDI = Rapid Uptake of Disseminated Interventions; RUDI-P/S = RUDI Provider and Subrecipient Survey; RUDI-R = RUDI Parts A and B Administrative Recipient Survey; RWHAP = Ryan White HIV/AIDS Program.



Exhibit 4. Changes in sampling strategy and interview schedule

RUDI PROVIDERS AND SUBRECIPIENTS


Experience

Change(s)

  1. Some providers reported using HAB resources, but not for the implementation of interventions listed on TargetHIV. These providers indicated that they used HAB resources more generally (for example, to stay informed on current practices for HIV care) and told us how future resources and dissemination efforts could be more helpful with implementation.

  • Changed plans for identifying RWHAP providers for the virtual site visits. Using RUDI-P/S survey responses, we will identify three groups of providers that might have different and equally useful thoughts to share about their experience accessing and using disseminated materials:

  • Group 1: Providers that accessed and used resources disseminated through TargetHIV to implement an intervention.

  • Group 2: Providers that did not use resources on TargetHIV but accessed and used a resource from another site or channel to implement an intervention.

  • Group 3: Providers that accessed a resource on TargetHIV but did not use a resource from TargetHIV or another site to implement an intervention.

  • Plan to adapt the master discussion guide for each virtual site visit to reflect the appropriate group perspective. The interviewer will remove sections that do not apply to a specific group. For example, the interviewer would drop the section on implementation and intervention outcomes for Group 3 providers.

  2. Providers offered a general response when asked about the intervention listed in the survey. For example, one respondent entered "Ending the HIV Epidemic (EHE)" in the survey rather than a specific intervention.

  • We will sample virtual site visit providers from different groups as noted above. For Group 1, we will make sure the interventions they list are recognizable TargetHIV interventions before selecting them for an introductory call.

  3. When providers receive RWHAP funds from multiple parts (A to F), staff working on different funding streams might have different experiences with TargetHIV interventions and resources.

  • Changed the introductory call to clarify that the people we seek to interview are as follows:

  • For Groups 1 and 2, staff knowledgeable about the specific interventions available via TargetHIV as evidenced in their survey responses. Interviewing staff with specific knowledge about the interventions listed in the survey will allow for more cohesive virtual site visit conversations.

  • For Group 3, staff with greatest understanding of how staff use TargetHIV and other HAB websites (if not to implement interventions) and their needs for future HAB resources and dissemination pathways.

  • When speaking with a subrecipient that receives multiple funding streams about the support they receive from their RWHAP recipient, we will include the name of the specific RWHAP recipient we are interested in.

RUDI PART A AND B ADMINISTRATIVE RECIPIENTS

Experience

Change(s)



  1. During our interviews, we found that there might not be one person in the organization who can speak about both of these key topics: (1) information flows from HAB to recipient and from recipient to subrecipient and (2) recipient use of HAB resources to implement system-level interventions.

  • Updated introductory email to recipients selected for interviews to specifically identify these two areas we will need to cover in our interview, and invite them to list up to two others who may join them on the interview to contribute to the topics listed.

  • Included an item in the survey to ask whether the respondent used input from others to complete the survey. This will signal to the team that the recipient might require an interview that includes more than one person.

  • Added a question to the discussion guide about the structure of the organization to better understand how information flows in the organization and who might best be able to answer our interview questions.

HAB = HIV/AIDS Bureau.

E. Individuals consulted on statistical aspects and individuals collecting and/or analyzing data

HRSA has contracted with Mathematica to perform the following tasks: design the RUDI evaluation, develop data collection instruments, conduct pilot testing, collect quantitative and qualitative data, and conduct data analyses. Engaging Mathematica will ensure that 1) the RUDI evaluation is conducted using appropriate methods and 2) information generated through the RUDI evaluation is unbiased and of high quality.

In addition to the Contracting Officer's Representative, Mathematica consulted 17 HAB staff for feedback on the overall design of the RUDI evaluation, the content of the RUDI Survey, the virtual site visit interview protocol, and the approach to integrating secondary data into the evaluation.

We also presented the RUDI evaluation approach and solicited input at HAB division meetings, with a total of 83 participants from the following HAB divisions: the Office of Program Support (n=12), the Division of Policy and Data (n=28), and the Division of Community HIV/AIDS Programs (n=43). An additional 38 HAB staff attended division meetings, but their specific division within HAB was not provided.

HAB staff were consulted because they possess important knowledge about RWHAP staff, including how they engage (or do not engage) with resources and how best to interface with staff on data collection activities included in the RUDI evaluation. These consultations were not subject to the Paperwork Reduction Act because feedback was gathered from federal employees, not members of the public.



1 Coopersmith, J., Vogel, L. K., Bruursema, T., & Feeney, K. (2016). Effects of incentive amount and type on web survey response rates. Survey Practice, 9(1).

2 Keating, N. L., Zaslavsky, A. M., Goldstein, J., West, D. W., & Ayanian, J. Z. (2008). Randomized trial of $20 versus $50 incentives to increase physician survey response rates. Medical Care, 46(8), 878-881.

3 McGonagle, K. A., Sastry, N., & Freedman, V. A. (2022). The effects of a targeted "early bird" incentive strategy on response rates, fieldwork effort, and costs in a national panel study. Journal of Survey Statistics and Methodology, smab042.

4 Khullar, D., Bond, A. M., Qian, Y., O’Donnell, E., Gans, D. N., & Casalino, L. P. (2021). Physician practice leaders’ perceptions of Medicare’s Merit-based Incentive Payment System (MIPS). Journal of General Internal Medicine, 1-7.

5 Coopersmith, J., Vogel, L. K., Bruursema, T., & Feeney, K. (2016). Effects of incentive amount and type on web survey response rates. Survey Practice, 9(1).

6 Keating, N. L., Zaslavsky, A. M., Goldstein, J., West, D. W., & Ayanian, J. Z. (2008). Randomized trial of $20 versus $50 incentives to increase physician survey response rates. Medical Care, 46(8), 878-881.

7 McGonagle, K. A., Sastry, N., & Freedman, V. A. (2022). The effects of a targeted "early bird" incentive strategy on response rates, fieldwork effort, and costs in a national panel study. Journal of Survey Statistics and Methodology, smab042.
