
OPRE Evaluation - National and Tribal Evaluation of the 2nd Generation of the Health Profession Opportunity Grants [descriptive evaluation, impact evaluation, cost-benefit analysis study, pilot study]

OMB: 0970-0462




Supporting Statement for OMB Clearance Request


Part B


National and Tribal Evaluation of the 2nd Generation of the Health Profession Opportunity Grants


0970-0462


Revised April 2019

Revised July 2019

Revised April 2020

Revised June 2020

Revised November 2020


Submitted by:

Office of Planning,
Research & Evaluation

Administration for Children & Families

U.S. Department of Health
and Human Services


Federal Project Officers:

Hilary Bruck

Nicole Constance

Amelia Popham

Instruments

Revised Instruments Included in this Request

  • Instrument 5a: HPOG 2.0 National Evaluation welcome packet and contact update form_REV



Previously Approved Instruments

  • Instrument 1: PAGES Grantee- and Participant-Level Data Items List

  • Instrument 2: HPOG 2.0 National Evaluation Screening Interview

  • Instrument 3: HPOG 2.0 National Evaluation first-round telephone interview Protocol

  • Instrument 4: HPOG 2.0 National Evaluation in-person implementation interviews

    • Instrument 4A HPOG 2.0 National Evaluation In-Person Implementation Interview

    • Instrument 4B HPOG 2.0 National Evaluation In-Person Implementation Interviews Basic Skills Training

    • Instrument 4C HPOG 2.0 National Evaluation In-Person Implementation Interviews Career Pathways

    • Instrument 4D HPOG 2.0 National Evaluation In-Person Implementation Interviews Work-Readiness

    • Instrument 4E HPOG 2.0 National Evaluation In-Person Implementation Interviews Sustainability

  • Instrument 5: HPOG 2.0 National Evaluation welcome packet and participant contact update forms

  • Instrument 5b: HPOG 2.0 National Evaluation participant contact update letter and form

  • Instrument 6: HPOG 2.0 Tribal Evaluation grantee and partner administrative staff interviews

  • Instrument 7: HPOG 2.0 Tribal Evaluation program implementation staff interviews

  • Instrument 8: HPOG 2.0 Tribal Evaluation employer interviews

  • Instrument 9: HPOG 2.0 Tribal Evaluation program participant focus groups

  • Instrument 10: HPOG 2.0 Tribal Evaluation program participant completer interviews

  • Instrument 11: HPOG 2.0 Tribal Evaluation program participant non-completer interviews

  • Instrument 12: HPOG 2.0 National Evaluation Short-term Follow-up Survey

  • Instrument 13: HPOG 2.0 Screening Interview Second Round

  • Instrument 14: HPOG 2.0 Second Round Telephone Interview Guide

  • Instrument 15: HPOG 2.0 Program Operator Interview Guide for Systems Study

  • Instrument 16: HPOG 2.0 Partner Interview Guide for Systems Study

  • Instrument 17: HPOG 2.0 Participant In-depth Interview Guide

  • Instrument 18: HPOG 2.0 Intermediate Follow-up Survey_REV_June2020

    • Instrument 18a: HPOG 2.0 Intermediate Follow-up Survey_Critical Items Only

  • Instrument 19: HPOG 2.0 Phone-based Skills Assessment Pilot Study Instrument

  • Instrument 20: HPOG 2.0 Program Cost Survey



Attachments

Previously Approved Attachments

  • Attachment A: References

  • Attachment B: Previously Approved Informed Consent Forms

    • Attachment B: National Evaluation Informed Consent Form B (Lottery Not Required)

    • Attachment B: National Evaluation Informed Consent Form D (Lottery Not Required)

  • Attachment B: New Informed Consent Forms, Updated Time Period

    • Attachment B: National Evaluation Informed Consent Form A (Lottery Required)_REV

    • Attachment B: National Evaluation Informed Consent Form C (Lottery Required)_Verbal_REV

    • Attachment B2: Tribal Evaluation Informed Consent Form A (SSNs)

    • Attachment B3: Tribal Evaluation Informed Consent Form B (Unique identifiers)

    • Attachment B2: Tribal Evaluation Informed Consent Form C (SSNs)_Verbal

    • Attachment B3: Tribal Evaluation Informed Consent Form D (Unique identifiers)_Verbal

  • Attachment C: 60 Day Federal Register Notice

  • Attachment D: Previously Approved Sources and Justification for PAGES Grantee- and Participant-Level Data Items

  • Attachment E: Previously Approved Final Updated Attachment E PPR Data List and Mockup

  • Attachment F: First Round of HPOG Grantees Research Portfolio

  • Attachment G: Previously Approved Participant Contact Information Update Letter and Form (Obsolete, replaced by Instrument 5a and 5b)

  • Attachment H: HPOG Logic Model

  • Attachment I: Previously Approved Focus group participant consent form

  • Attachment I: New Focus Group Participant Consent Form_Remote

  • Attachment J: Previously Approved Interview Verbal Informed Consent Form

  • Attachment J: New Interview Verbal Informed Consent Form_Remote

  • Attachment K: HPOG 2.0 National Evaluation Short-term Follow-up Survey Advance Letter

  • Attachment L: HPOG 2.0 National Evaluation Short-term Follow-up Survey Sources

  • Attachment M: HPOG 2.0 National Evaluation Short-term Follow-up Survey Trying to Reach You Flyer

  • Attachment N: HPOG 2.0 National Evaluation Short-term Follow-up Survey Email Reminder

  • Attachment O: Research Questions for Previously Approved Data Collection Efforts (National Evaluation and Tribal Evaluation)

  • Attachment P: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Advance Letter

  • Attachment P: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Advance Letter_Rev

  • Attachment Q: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Sources

  • Attachment Q: HPOG 2.0 Intermediate Survey Sources_REV

  • Attachment R: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Trying to Reach You Flyer

  • Attachment S: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Email Reminder

  • Attachment T: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot flyer

  • Attachment U: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot grantee letter

  • Attachment V: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot participant letter

  • Attachment W: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot recruitment script

  • Attachment X: Complete list of previously approved data collection instruments

  • Attachment Y: 60-day Federal Register Notice

  • Attachment Z: Participant Interview Recruitment Materials









Part B: Statistical Methods

This document served as Part B of the Supporting Statement for the third revision of data collection for the Health Profession Opportunity Grants 2.0 (HPOG 2.0) National and Tribal Evaluation (OMB Control No. 0970-0462), approved in July 2019, with non-substantive changes approved in April 2020 and June 2020. The HPOG 2.0 National and Tribal Evaluation is sponsored by the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services (HHS). The federal evaluations of the HPOG 2.0 National and Tribal grantees will evaluate postsecondary career pathway programs focused on the healthcare sector that target Temporary Assistance for Needy Families (TANF) recipients and other low-income individuals. The intended use of the resulting data is to improve ACF's research, evaluation, and program support of the HPOG 2.0 program and others like it. The revisions in this submission reflect non-substantive changes to one previously approved information collection instrument, Instrument 5a (the Welcome to the Study Packet and Contact Update Form), and modest increases in the number of respondents for Instruments 5a and 5b (the Participant Contact Update Letter and Form). ACF is considering funding a new survey of a cohort of participants enrolled in the HPOG 2.0 National Evaluation after the onset of the COVID-19 pandemic and through the end of the HPOG 2.0 grant period, and these changes are needed to support that effort if it is funded. Should ACF choose to fund that effort, ACF will submit the survey instrument and supporting materials as part of a complete information collection request, with an associated public comment period, to OMB once the design work is complete.

Exhibit B-1 summarizes prior requests to OMB for approval of new instruments and the most recent non-substantive change request, approved in April 2020. The exhibit then summarizes the changes to several previously approved instruments, attachments, and procedures reflected in this request. Justification for these non-substantive changes can be found in the supplementary document OMB#0970-0462_NonSubstantiveChange Request Memo_November2020.docx.

Exhibit B-1: Clearance Requests and Instruments for HPOG 2.0 (OMB Clearance No. 0970-0462)

  • Original: Participant Accomplishment and Grant Evaluation System (PAGES) (Instrument #1). Requested 5/13/15; approved 8/6/15. Supporting Statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201505-0970-002

  • 1st Rev.: Various baseline, process, and contact update forms (Instruments #2-5b for the National Evaluation; #6-11 for the Tribal Evaluation). Requested 10/26/16; approved 6/27/17. Supporting Statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201610-0970-012

  • 2nd Rev.: National Evaluation Short-term Follow-up Survey (Instrument #12). Requested 2/5/18; approved 6/8/18. Supporting Statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201802-0970-001

  • 3rd Rev.: Additional National Evaluation data collection tools: descriptive evaluation protocols (Instruments #13-17); Intermediate Follow-up Survey (Instrument #18); Phone-based Skills Assessment Pilot (Instrument #19); and Program Cost Survey (Instrument #20). Requested 4/23/2019; approved 7/24/19. Supporting Statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201904-0970-006

  • Non-substantive Change Request (this submission): Minor changes and a modest increase in the number of respondents for two previously approved instruments in support of the HPOG 2.0 National Evaluation: minor revisions to the National Evaluation Welcome to the Study Packet (Instrument 5a); an increase in the number of respondents for Instrument 5a; and an increase in the number of respondents for Instrument 5b, the Participant Contact Update Form. Requested November 2020; approval date TBD.




All of the information collections requested under the third revised submission and approved in July 2019 were discussed in Part A of this supporting package. Part B of this supporting statement focused on the five approved collections of information that involved complex sampling and/or analysis procedures.1 Three of the five instruments are for the descriptive evaluation:

  1. Program Operator Interview Guide for systems study (Instrument 15)

  2. Partner Interview Guide for systems study (Instrument 16); and

  3. Participant In-depth Interview guide (Instrument 17).



The remaining two instruments are part of the impact evaluation:

  1. Intermediate Follow-up Survey (Instrument 18); and

  2. Phone-based Skills Assessment Pilot (Instrument 19).

The original consent forms were previously approved under this OMB Control Number in August 2015, revised in January 2016, and renewed most recently in July 2019 (see previously approved Attachments B, B2 and B3). Due to the COVID-19 outbreak, most HPOG programs have shifted to virtual operations, asking staff to work remotely and offering participant training online. These restrictions have brought face-to-face intake and enrollment sessions to a halt, so programs are unable to obtain written consent in person. In April 2020, OMB approved ACF's use of a new set of alternative informed consent forms to allow eligible participants to verbally consent to be in the study. These revised consent forms will only be used until grantees can return to their worksites and resume normal program operations. At that time, grantees will return to using the previously approved consent forms for written consent. OMB approved one minor revision to the National Evaluation Informed Consent Forms A (Lottery Required) and C (Lottery Required_Verbal).

OMB also approved changes to the data collection procedures for the Tribal Evaluation, to permit remote data collection when in-person data collection is not feasible. These procedural changes are described in more detail in Section B2. That earlier request also included revised versions of the Tribal Evaluation data collection consent documents, previously approved Attachments B2, B3, I, and J.

OMB approved the third revised submission in July 2019 and the changes noted above in June 2020. Data collection is complete for all but Instrument 18, the Intermediate Follow-up Survey. This submission seeks approval for minor changes to Instrument 5a, the Welcome to the Study Packet and Contact Update Form, previously approved in June 2017. It also seeks approval for a minor increase in the number of respondents for Instrument 5a and Instrument 5b, the Participant Contact Update Letter and Form (approved in June 2017 and renewed in each subsequent OMB revision). Justification for these changes can be found in the supplementary document OMB#0970-0462_Non-substantive Change Request_November2020.docx and is summarized throughout Supporting Statement A and Sections B1, B2, and B3 of this document.



B.1 Respondent Universe and Sampling Methods

This non-substantive change request does not require any changes to the respondent universe nor to the sampling methods. Thirty-two HPOG 2.0 grants were awarded to government agencies, community-based organizations, post-secondary educational institutions, and tribal-affiliated organizations in September 2015. Of these, 27 were awarded to non-tribal entities and five were awarded to tribal organizations. The 27 non-tribal grantees operate 38 unique HPOG 2.0 programs. The instruments approved in July 2019 under the third revised submission concern only the 27 non-tribal grantees participating in the National Evaluation. Sampling procedures for the three instruments to support the National Evaluation descriptive evaluation are described below, followed by a discussion of the sampling procedures for the two National Evaluation impact evaluation instruments.

Descriptive evaluation. This section describes the sampling methods for the three information collection requests under the National Evaluation descriptive evaluation that involved complex sampling and/or analysis procedures: Program Operator Interview Guide, Partner Interview Guide, and Participant In-Depth Interview Guide.

Program Operator and Partner Organization Interview Guides. The systems study component of the descriptive evaluation included interviews with two respondent groups: Program Operators (Instrument 15) and Partner Organizations (Instrument 16). The evaluation team purposively selected 15 HPOG 2.0 programs (out of 38 programs) and 2 to 7 partner organizations from each selected program for inclusion in the HPOG 2.0 Systems Study. Selection focused on their experiences and perspectives on the local service delivery system over the course of the HPOG grant—with the goal of identifying programs that range in the types and intensity of systems activities that could influence how the system works rather than exploring collaboration across all HPOG programs. Purposive sampling allowed for the exploration of a range of experiences and perspectives on activities and partnerships that may contribute to or hinder systems development and improvement. It also provided opportunities to understand variations in service delivery systems across HPOG. Because selected programs offer a range of types and intensity of systems activities, the research team expects to gain perspectives on both positive and negative experiences with conducting systems activities.2

As part of the selection process, the evaluation team reviewed PAGES data to identify the prevalence of training in various healthcare occupations (e.g., nursing assistant versus health care information technology). This allowed the evaluation team to better understand variation in networks of partners and experiences with those partners across types of training programs. During the process of selecting programs for the systems study the evaluation team took into consideration the degree to which selected programs overlapped with those selected for the previously approved focus area site visits and with other data collection activities to minimize burden on any one program.

Program Selection

The evaluation team drew from information collected during the first-round telephone interviews (previously approved in June 2017 under this OMB Control Number) and from information available in other documents (such as grant applications, evaluation design documents, and the PAGES system) to help with program selection. To select programs, the evaluation team used a purposive selection strategy based on the types and intensity of systems activities under the local service delivery systems and HPOG 2.0, geographic area, lead organization type, whether or not the grantee was an HPOG 1.0 grantee/program operator, occupation(s) of training, new or enhanced programs, program enrollment, and target population, to ensure the sample included variation in experiences and perspectives across different types of programs. A total of 87 respondents participated in the systems study: 15 program operators (one operator per program for the 15 programs selected) and 72 partner organization staff across those 15 programs.

Partner Organization Selection

Purposive sampling was also used to select partner organizations. The strategy allowed the evaluation team to examine a range of experiences and perspectives on systems activities and partnerships. Partner organizations that did not engage at all in the HPOG program were excluded from the sample as respondents should have some knowledge of the program. The evaluation team used several sources of information to select partners.

  • First, for each selected program, the team used data from the First-Round Telephone Interviews to develop a list of partners and their involvement in the HPOG program operations.

  • Second, during the program operator interview, the team asked respondents to discuss partners that were highly involved and those that were less involved. Program operators were asked to recommend a mix of both highly and less involved partners for interviews.

Three to seven partners per program were selected based on program operators' recommendations as to which partners represented different partner organization types (e.g., nonprofit organization, government agency, employer, and education and training provider) and were best suited to answer questions. For each program, the evaluation team created a matrix of partners, grouping them by whether they were highly or less involved in HPOG operations and by organization type. The team selected a range of organization types, typically avoiding the same organization type as the program operator unless, in the program operator's opinion, the partner had a useful perspective on systems activities. The evaluation team sought to include employers and employer representatives, such as industry associations, to ensure the study gathered perspectives on employer and industry engagement, an important component of the HPOG 2.0 Program.
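To make the partner matrix concrete, the sketch below (in Python) illustrates the kind of grouping and selection described above: partners are placed in cells defined by involvement level and organization type, and a mix is drawn across cells, leaving the operator's own organization type for last. The partner records, field names, and selection routine are hypothetical illustrations, not the evaluation team's actual data or code.

```python
from collections import defaultdict
import random

# Hypothetical partner records for one program; field names and values are
# illustrative only, not the actual PAGES or interview data structures.
partners = [
    {"name": "Partner A", "org_type": "employer", "involvement": "high"},
    {"name": "Partner B", "org_type": "education/training provider", "involvement": "high"},
    {"name": "Partner C", "org_type": "government agency", "involvement": "low"},
    {"name": "Partner D", "org_type": "nonprofit organization", "involvement": "low"},
    {"name": "Partner E", "org_type": "employer", "involvement": "low"},
    {"name": "Partner F", "org_type": "industry association", "involvement": "high"},
]

def build_matrix(partners):
    """Group partners into cells by involvement level and organization type."""
    matrix = defaultdict(list)
    for p in partners:
        matrix[(p["involvement"], p["org_type"])].append(p)
    return matrix

def select_partners(partners, operator_org_type, target=4, seed=1):
    """Draw one partner at a time from each cell in turn, so the selection
    mixes involvement levels and organization types; cells that match the
    program operator's own organization type are drawn from last."""
    rng = random.Random(seed)
    matrix = build_matrix(partners)
    cells = sorted(matrix, key=lambda cell: cell[1] == operator_org_type)
    selected = []
    while len(selected) < target and any(matrix[c] for c in cells):
        for cell in cells:
            if matrix[cell] and len(selected) < target:
                pool = matrix[cell]
                selected.append(pool.pop(rng.randrange(len(pool))))
    return selected

chosen = select_partners(partners, operator_org_type="nonprofit organization")
print([p["name"] for p in chosen])
```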

Participant In-depth Interviews. The study team conducted in-depth interviews with 153 participants across 14 programs using the Participant Interview Guide (Instrument 17). Researchers first selected programs and then participants. Researchers used data from the first-round telephone interviews with programs to select 14 programs for inclusion in the participant interviews. In consultation with ACF, the evaluation team selected programs that represented a range of locations, program sizes and structures, grantee organizational types, and program characteristics. The purposive sampling strategy maximized variation in participant and program characteristics as much as possible. Interviewers traveled to conduct the interviews with selected participants over a four-day period. The interviews were conducted in a central area—at the program offices or another centrally located quiet place such as a local library or community center. If those were not feasible, interviews took place at the respondent's home. The purposive sampling strategy took into account where program participants resided—both to gauge how geographically dispersed they were and to ensure that participants' locations were practical for conducting site visits. For example, some programs did not have sufficient participants located in a geographically central location to facilitate a successful data collection site visit.

Once the 14 programs were selected, the evaluator selected participants. The goal in sampling was to recruit roughly equal numbers of participants who had completed their training and who were still in the training program, as well as some who had dropped out before completing training. The evaluation team selected roughly equal numbers of participants to attempt to interview across the selected programs. Researchers reviewed the participant data available in PAGES to select an initial pool of 45 treatment group members in each program according to the following criteria:

  1. Participant Stage in the Training Program to ensure a mixture of participants who have successfully completed their training (approximately 40 percent), participants who are still in a training program (approximately 40 percent), and participants who have dropped out of a training program (approximately 20 percent).

  2. Demographic and Socio-Economic Characteristics to interview a sample representative of the demographic and socio-economic characteristics of that particular program’s participant population.

To select the 45 treatment group members, the evaluation team chose: the most recent 25 participants who successfully completed their training; 25 participants who were currently at least four months into their training program but not yet completed; and 12 participants who had dropped out of the training program within the last six months. Participants were selected randomly within each group.3 From this selection of participants, the evaluation team looked at demographic and socio-economic characteristics of the group and selected participants to create a sample with variation similar to the demographic and socio-economic characteristics of the program’s overall participant population.

The evaluation team used that pool of 45 participants per program to select 15 participants in each program using stratified sampling to ensure representation from each group of interest. Evaluation team members then attempted to recruit the selected participants for an interview. The expected overall response rate was 67 percent, which would result in 140 completed interviews across all selected programs (10 completed interviews at each of the 14 programs).4 The response was slightly better than expected—72.8 percent—resulting in 153 completed interviews.
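The stratified draw described above (a pool grouped by participant stage, then a fixed number selected per stratum) can be illustrated with a short sketch. The PAGES-style records and field names below are hypothetical, and the 40/40/20 split simply follows the criteria listed above; this is not the evaluation team's actual selection code.

```python
import random
from datetime import date

# Hypothetical PAGES extract for one program's treatment group; field names,
# stages, and dates are illustrative only, not the actual PAGES schema.
participants = [
    {"id": i,
     "stage": random.choice(["completed", "in_training", "dropped_out"]),
     "status_date": date(2018, random.randint(1, 12), 1)}
    for i in range(200)
]

def stratified_sample(pool, n=15, shares=(0.4, 0.4, 0.2), seed=7):
    """Select n participants with roughly 40/40/20 representation of
    completers, current trainees, and non-completers, as in the text."""
    rng = random.Random(seed)
    stages = ["completed", "in_training", "dropped_out"]
    targets = [round(n * s) for s in shares]  # 6, 6, 3 when n = 15
    sample = []
    for stage, target in zip(stages, targets):
        stratum = [p for p in pool if p["stage"] == stage]
        sample.extend(rng.sample(stratum, min(target, len(stratum))))
    return sample

sample = stratified_sample(participants)
print({s: sum(p["stage"] == s for p in sample)
       for s in ["completed", "in_training", "dropped_out"]})
```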

Impact evaluation. This section first describes the sample selection for the expanded contact update requests and then describes the previously approved sampling methods for the two information collection requests under the National Evaluation impact evaluation: the Intermediate Follow-up Survey and the Phone-based Skills Assessment Pilot.

Welcome to the Study and Contact Update Requests (Instruments 5a and 5b). ACF is considering fielding a survey of HPOG 2.0 study participants enrolled after the onset of the COVID-19 pandemic and through the end of the HPOG 2.0 grant period; that is, those participants enrolled between May 2020 and September 2021. The experiences of study participants enrolled during this period will likely differ greatly from those of participants enrolled in the pre-pandemic phase. If funded, ACF plans to survey all those enrolled during this period in order to understand the effect of COVID-19 on study participants and on observed program impacts. The sample size is estimated to be 6,400 participants.

Intermediate Follow-up Survey (Instrument 18). The evaluation team, in collaboration with ACF, selected 13,118 study participants—all of the participants enrolled between March 2017 and February 2018—for inclusion in the Short-term Follow-up Survey sample (previously approved under this OMB Control Number in June 2018). A subset—up to 5,000—of those participants, from a compact set of randomization cohorts, will be included in the Intermediate Follow-up Survey sample. The evaluation team completed the Short-term Follow-up Survey in November 2019 and revised its response rate estimate for the Intermediate Follow-up Survey based on that experience. The evaluation team now estimates a 75.7 percent completion rate (3,785 completed interviews) for the full survey instrument, plus an additional 215 completes to the new critical-items-only version of the Intermediate Follow-up Survey (Instrument 18a).
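As a simple check, the expected completes cited above can be reconciled arithmetically. The short calculation below assumes the full-instrument and critical-items completes are additive, which is consistent with the 80 percent (4,000 completes) figure shown for the Intermediate Follow-up Survey in Exhibit B-2.

```python
# Expected completes for the Intermediate Follow-up Survey, using only the
# figures cited in this section.
sample_size = 5_000      # participants selected from the compact set of cohorts
full_rate = 0.757        # estimated completion rate for the full instrument
full_completes = round(sample_size * full_rate)          # 3,785
critical_items_completes = 215                            # Instrument 18a estimate
total_completes = full_completes + critical_items_completes
print(total_completes, total_completes / sample_size)     # 4000, 0.80
```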

Several aspects of this sampling plan deserve attention: (1) How was the subsample size chosen?; (2) Why do we want to select a subsample of those interviewed in the Short-term Follow-up Survey?; and (3) Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? Each of these questions is answered below.

  1. How was the subsample size chosen? The subsample size of 5,000 was chosen because it allows reasonable power to detect national pooled impacts. The much larger sample size for the Short-term Follow-up Survey was chosen because of the need to measure variation in program implementation from the student perspective and to measure variation in effects on education outcomes. These activities are not planned for the Intermediate Follow-up Survey.

  2. Why do we want to select a subsample of those selected for participation in the Short-term Follow-up Survey? We want to select a subsample of those selected for the Short-term Follow-up Survey for several reasons. First, selecting from those who participated in the Short-term Follow-up Survey will allow the construction of longer case histories, as we will have thirty-six months of employment and training history instead of just fifteen months. Second, it will reduce nonresponse and cost, because the continuous updating of contact information will provide the evaluation team with a more robust history of contact information over the 36-month follow-up period than would be available if a new sample were selected. Drawing from the Short-term Follow-up Survey sample also allows the evaluation team to build upon the rapport established with study participants during the follow-up period. Finally, using a subsample of the Short-term Follow-up Survey sample will allow more powerful adjustments for nonresponse to the Intermediate Follow-up Survey, since the Short-term Follow-up information can be used both to study the potential for nonresponse bias and to make adjustments in the event that evidence of nonresponse bias in unadjusted statistics is found. However, in the selected randomization cohorts we will attempt to interview all participants selected for the short-term follow-up as part of the Intermediate Follow-up Survey. That is, we will not exclude participants who were included in the Short-term Follow-up Survey sample but not interviewed.

  3. Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? The Short-term Follow-up Survey sample included participants enrolled over 12 monthly cohorts—March 2017-February 2018. We want to select a compact set—or subset—of cohorts because of the substantial time and cost efficiencies associated with larger workloads for interviewers over a compressed field period. We plan to select four or five of the 12 monthly cohorts included in the Short-term Follow-up Survey for inclusion in the Intermediate Follow-up Survey data collection.
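To illustrate how a compact set of cohorts might be chosen, the sketch below selects the consecutive run of four or five monthly cohorts whose combined Short-term Follow-up Survey sample comes closest to 5,000 without exceeding it. The monthly counts are hypothetical placeholders, not the actual cohort sizes, and the rule shown here is only one reasonable way to operationalize "a compact set of cohorts."

```python
# Hypothetical monthly Short-term Follow-up Survey sample counts for the
# 12 randomization cohorts (March 2017-February 2018); counts are illustrative.
cohort_counts = {
    "2017-03": 980, "2017-04": 1050, "2017-05": 1120, "2017-06": 1190,
    "2017-07": 1210, "2017-08": 1150, "2017-09": 1080, "2017-10": 1040,
    "2017-11": 1010, "2017-12": 960, "2018-01": 1130, "2018-02": 1160,
}

def best_window(counts, lengths=(4, 5), cap=5_000):
    """Among all consecutive runs of 4 or 5 monthly cohorts, return the run
    with the largest combined sample that does not exceed the cap."""
    months = list(counts)
    best = None
    for k in lengths:
        for start in range(len(months) - k + 1):
            window = months[start:start + k]
            total = sum(counts[m] for m in window)
            if total <= cap and (best is None or total > best[0]):
                best = (total, window)
    return best

total, window = best_window(cohort_counts)
print(window, total)
```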

At the conclusion of the Short-Term Follow-up Survey, all study respondents were asked to update their contact information to aid in future data collection efforts. Study participants selected for the Intermediate Follow-up Survey continue to receive periodic contact update requests via the previously approved contact update form (Instrument 5b) every three months between the Short-Term and Intermediate Follow-up Survey efforts.

Phone-based Skills Assessment Pilot (Instrument 19). This assessment was a pilot study. Results from it will not be published as a formal part of the evaluation of HPOG 2.0.5 Rather, the results from this effort were used to identify a narrow set of survey questions that were incorporated into a ten-minute module within the Intermediate Follow-up Survey.6 Given the intended usage, the evaluation team attempted to identify a volunteer sample of 500 HPOG 2.0 participants randomized outside the window for the Short-term Follow-up Survey. The team recruited about 400 participant volunteers with the help of grantees and completed 300 pilot assessments.7 Most grantees were asked to recruit and refer potential volunteers to the evaluation contractor. Ideal candidates were HPOG 2.0 study participants who met three key criteria:

  1. They were from cohorts that are not part of our short-term survey sample pool (enrolled prior to March 1, 2017 OR after May 31, 2018);

  2. They were nearly ready to start occupational classes or currently taking lower level occupational classes; and

  3. They had complete contact information (address, phone number, and email) in PAGES.

A sample of volunteers was adequate for the purpose of psychometric testing of the draft skills assessment. Thus, the pilot design targeted a particular number of completed interviews as opposed to a certain response rate. The evaluator estimated that 300 completed pilot assessments were needed in order to yield useful results on the reliability and validity of the items. The purpose of the pilot was to sort the relative difficulties of the assessment items. By having grantees recruit participants that met the above criteria and wanted to participate, the evaluation team was able to meet these objectives.
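The item statistics such a pilot typically relies on can be sketched briefly. The code below computes classical item difficulties (the proportion of respondents answering each item correctly) and Cronbach's alpha for a simulated 300-by-45 scored response matrix; it is an illustration of the general approach to sorting item difficulty and checking reliability, not the evaluation team's actual psychometric analysis.

```python
import random

# Hypothetical scored pilot data: 300 respondents x 45 items, coded 1 for a
# correct response and 0 otherwise. Real pilot data would come from the CAPI
# system; these simulated values are for illustration only.
random.seed(0)
n_respondents, n_items = 300, 45
responses = [[1 if random.random() < (0.3 + 0.5 * item / n_items) else 0
              for item in range(n_items)]
             for _ in range(n_respondents)]

def item_difficulties(data):
    """Proportion of respondents answering each item correctly (classical
    item difficulty); lower values indicate harder items."""
    n = len(data)
    return [sum(row[j] for row in data) / n for j in range(len(data[0]))]

def cronbach_alpha(data):
    """Classical internal-consistency reliability of the item set."""
    k = len(data[0])
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([row[j] for row in data]) for j in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

difficulties = item_difficulties(responses)
hardest = sorted(range(n_items), key=lambda j: difficulties[j])[:10]
print("alpha:", round(cronbach_alpha(responses), 3))
print("ten hardest items (0-indexed):", hardest)
```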

Several national and international surveys have been developed to assess adult numeracy and literacy, but almost all of them rely on face-to-face interviewing (a mode too expensive for most OPRE evaluations) or online administration (a mode infeasible for many OPRE evaluations because low-income populations are less likely to have computer access). Since most OPRE evaluations use a mix of methodologies, identifying a short battery of questions that could be administered by phone in about 10 minutes would offer four benefits: (1) it would be more cost effective than in-person or online administration; (2) it would be easily adaptable for in-person or online administration, reducing burden on administrators and respondents; (3) the short duration of the module would reduce burden on respondents, potentially increasing response rates or at least minimizing break-offs; and (4) it could be easily shared across other studies. The pilot assessment data collection was conducted in Fall 2019. The findings are reflected in Section A16 of Supporting Statement A.

Exhibit B-2 presents the sampling methods and target response rates for each of the HPOG 2.0 National and Tribal Evaluation respondent subgroups. Instruments for which data collection is complete are labeled as such in the exhibit.





Exhibit B-2: HPOG 2.0 National and Tribal Evaluation Respondents with Status Updates

Respondent Universe

Respondent Subgroup

Sampling Methods and Target Response Rates

Data Collection Strategies

National HPOG 2.0 Evaluation


Grantees, partners, and employers

Grantees

Evaluation team members review the topics of interest with grantees using the HPOG 2.0 Screening Interview to identify appropriate respondent(s) based on who is most knowledgeable about the topics of interest. (See Instrument 2).

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to both rounds of data collection.

Semi-structured telephone interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instruments 13-16) (COMPLETE)

Program Cost Survey (Instrument 20) (COMPLETE)



Managers and staff

All interviews using the previously approved instruments that were attempted were completed.

Semi-structured interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instruments 14-15) (COMPLETE)

Program Cost Survey (Instrument 20) (COMPLETE)


Partners

All interviews using the previously approved instruments that were attempted were completed.

Semi-structured interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instrument 16) (COMPLETE)


Employers

All interviews using the previously approved instruments that were attempted were completed.

Semi-structured interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instrument 16) (COMPLETE)

Descriptive evaluation participants

Selected treatment group participants

A pool of 45 participants in each of 14 sites will be identified to recruit for the participant interviews.

Up to 15 participants per site were recruited; the team achieved a better than expected response and completed interviews with 72.8 percent of those selected (153 in all).

Semi-structured participant interview guide administered in-person

(Instrument 17) (COMPLETE)

Impact evaluation participants selected for the Contact Update Sample

A sample of participants (treatment and control groups)

Up to 19,518 study participants, beginning with those enrolled between March 2017 and January 2018 and including those enrolled between May 2020 and September 2021, will be part of the participant contact update efforts.

The team estimated that 35 percent of sample members would respond to each quarterly participant contact update effort. The contact updates are ongoing. The current return rate is 24 percent.

Contact updates by mail, online portal, or telephone

(Previously approved Instruments 5a and 5b)

Impact evaluation participants selected for Short-term Follow-up Survey sample

A sample of participants (treatment and control groups)

13,087 study participants, beginning with those enrolled in March 2017, were part of the Short-term Follow-up Survey.

Data collection is over and the evaluation team completed interviews with 74.2 percent of the sample (9,710 interviews in total).

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(Previously approved Instrument 12) (COMPLETE)

Impact evaluation participants selected for Intermediate Follow-up Survey sample

A sample of participants (treatment and control groups)

Up to 5,000 study participants, from select cohorts of participants randomized between March 2017 and February 2018, will be part of the Intermediate Follow-up Survey.

The team expects that 80 percent of the participants selected will complete this survey effort, resulting in 4,000 completes.

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(Instrument 18)

Impact evaluation participants selected for the phone-based Skills Assessment Pilot

Treatment group participant volunteers

Up to 500 participants will volunteer to be part of the phone-based Skills Assessment Pilot.

The team achieved its target of 300 completed interviews.

Telephone interviews conducted by local interviewers with CAPI technology

(Instrument 19) (COMPLETE)


Tribal HPOG 2.0 Evaluation

Grantees, partners, and employers

Grantees

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to date and expects a 100 percent response rate going forward.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instruments 6 and 7)


Management and Staff

A very high response rate (at least 80 percent) is expected among grantee staff. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instruments 6 and 7)


Partners

Partners have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. Therefore, the team expects a 100 percent response rate. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instruments 6 and 7)


Employers

A very high response rate (at least 80 percent) is expected among HPOG employers. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instrument 8)

Participants

Program participants (current)

The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period. The team achieved response rates ranging from 25-50 percent from current program participants across sites to date, and expects the same trend to continue.

In-person focus groups or telephone/virtual

(Previously approved Instrument 9)


Program completers

The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period. The team expects a 25-50 percent response rate from program completers. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instrument 10)


Program non-completers

The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period. The team has experienced difficulty recruiting participants for this information collection—achieving closer to 10 percent response in prior rounds. The team still expects a 10-25 percent response rate from program non-completers for the upcoming information collection.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instrument 11)

HPOG National and Tribal Evaluation Participant Accomplishment and Grant Evaluation System (PAGES)

Participants

National Evaluation (Non-Tribal) HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Estimated enrollment expected to be 52,000

Baseline and ongoing participant level data

(Previously approved Instrument 1)


Tribal HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Estimated enrollment expected to be 2,663

Baseline and ongoing participant level data

(Previously approved Instrument 1)



B.2 Procedures for Collection of Information

This submission includes minor changes to previously approved Instrument 5a, the Welcome to the Study Packet. It does not include any changes to data collection procedures. The changes pertaining to the National Evaluation Welcome to the Study Packet (Instrument 5a) are summarized here.

        1. HPOG 2.0 National and Tribal Evaluation Informed Consent Procedures

OMB approved a new set of informed consent forms that will, on a temporary basis, allow participants to verbally consent into the study. This will allow grantee staff to continue to enroll new participants in healthcare training and serve their communities without disruption due to the COVID-19 outbreak. Under these revised procedures, grantee staff will email participants a copy of the new verbal consent forms at least one day prior to their intake meeting. This will serve as the participant's copy of the consent form. Advance receipt will also give a prospective participant time to review the form, identify anything they have questions about, and follow along as grantee staff review the consent form with them during the intake and consent appointment. During the intake meeting grantee staff will review the form with participants over the phone and address any questions as needed. Once all questions are addressed, the grantee staff member will ask participants to describe the study and other parts of the consent form to check for comprehension. The grantee staff member will clarify any information as needed and then read the consent statement to participants and ask for verbal consent to participate. The grantee staff member will indicate in the evaluation's management information system (known as the Participant Accomplishment and Grant Evaluation System, or PAGES) that consent was obtained and proceed through the remainder of enrollment normally.

It is unclear whether grantee staff will have access to printers during the period of remote work, which may make it difficult for the staff member to sign the attestation of consent on a physical form. Depending on their printing capabilities, grantee staff members will follow one of two steps:

  • If the grantee staff member is able to print consent forms, they will enter the participant’s unique study identifier on the form, then sign and date the attestation indicating that the participant provided verbal consent. The grantee staff member will retain the signed form in a safe location until they return to their worksite when normal HPOG program operations resume.

  • For grantee staff members who are not able to print forms while working remotely, the evaluation team will provide a list of study identifier numbers for the participants enrolled during this period after staff return to their worksites. Grantee staff members will then print and sign a consent form following the procedures above. (If the signature happens after grantee staff return to the worksite, they will enter the date of consent.)

Once normal operations resume, the grantee staff member will make a copy of the signed form for program records and send the original form, with the attestation signed by the grantee staff member, to the evaluation contractor for the evaluation’s records. Verbal consent will substitute for written consent for these participants; there will be no need for the grantee to ask these participants to sign a consent form. Grantee staff will return to written consent procedures using the previously approved informed consent forms once normal operations resume.

        2. HPOG 2.0 National Evaluation Descriptive Evaluation Data Collection Procedures

The primary data collection approach for the descriptive evaluation is two rounds of semi-structured interviews conducted by telephone and one round of in-person site visits with program directors, case managers, and other relevant grantee staff. The first round of telephone interviews (now complete) focused on early implementation efforts. The second round (also now complete) updated the earlier round and collected information to help lay the groundwork for the systems and cost studies. Site visits (now complete) were made to programs implementing promising approaches to program components of specific interest to ACF. Telephone and site visit data collection are supplemented with data from PAGES and other existing site-specific materials developed earlier by the National Evaluation team.

The data collection procedures for the previously approved descriptive evaluation instruments can be found in the first revision to OMB Control # 0970-0462, approved in June 2017. The same procedures used in the first-round interviews were followed for the second-round telephone interviews (Instruments 13 and 14). The procedures used to conduct the now completed descriptive evaluation data collection components—the systems study interviews and the participant in-depth interviews—are discussed here.

The descriptive evaluation systems study will describe how local service delivery systems (i.e., the economic and service delivery environment in which specific HPOG programs operate) may have influenced HPOG program design and implementation and how HPOG implementation may have influenced these local systems, based on the perspectives of program operators (i.e., the lead organization directly responsible for the administration of an HPOG program) and partners engaged in systems activities. The systems study partner interviews included interviews with partners outside of HPOG. Telephone interviews for the systems study focused on coordination of grantees and their partners within the local service delivery system (Instruments 15 and 16). Two-person teams administered the semi-structured interviews—one acted as the lead interviewer and the other as the note taker. Each interview took approximately 60 minutes, depending on the depth of knowledge of the respondent. The interviewers spent an additional 15 minutes with the program operator respondent to identify partners for interviews and obtain contact information. The primary mode for the interviews was telephone, but the interview team offered videoconferencing (via Skype, Zoom, Go-To Meeting, or other technology) if the respondent preferred a more visual interaction.

The descriptive evaluation also includes one round of in-person interviews with HPOG program participants. The National Evaluation team conducted in-depth interviews with HPOG 2.0 program participants to gain insight into their motivations, decision making, expectations, and experiences (Instrument 17). The team worked with the PAGES data to identify up to 45 treatment group study participants in each of the 14 selected programs for recruitment. Interviewers sent a letter to the selected sample to explain the participant interview requirements (see Attachment Z).

One interviewer conducted all of the interviews at a given site during a five-day visit. (No interviewer traveled to more than three sites.) Interviews were completed in-person either at the program office or at another agreed upon location. Each interviewer conducted the participant interviews and recorded—with the participant’s permission—the interviews for later transcription and analysis.

        3. HPOG 2.0 National Evaluation Impact Evaluation Data Collection Procedures

The impact evaluation participant-level data collection efforts include the previously approved informed consent documents (Attachment B Informed Consent Form A and Informed Consent Form B approved in January 2016, and recently updated in April and June 2020), a Welcome to the Study packet (Instrument 5a, approved in June 2017—and the subject of this non-substantive change request), the collection of quarterly contact updates (also previously approved under this OMB Control Number in June 2017, still ongoing and the other instrument covered in this request), the Short-term Follow-up Survey (previously approved in June 2018, and now complete), as well as the data collection procedures for the Intermediate Follow-up Survey (Instrument 18 now underway) and phone-based Skills Assessment Pilot (the now complete Instrument 19 of the previously approved third revised submission). The procedures for conducting the ongoing data collection using previously approved contact update forms (Instrument 5b) and the Short-term Follow-up Survey (Instrument 12) are described in the first and second revisions to OMB Control Number 0970-0462 approved in June 2017 and June 2018 respectively.

ACF received approval for a new set of alternative informed consent forms in April 2020. The new forms allow grantee staff to obtain verbal consent from participants enrolling in the evaluation. In instances where face-to-face enrollment is not feasible due to issues related to COVID-19, flexibility in how consent is obtained ensures that participant enrollment can continue and that grantees are able to serve participants and sustain program operations during this time. The verbal consent option is vital to maintaining the rigor of the evaluation because it ensures that participants enrolled during this period can be included in the impact and descriptive evaluations of the National Evaluation and in the Tribal Evaluation. These participants are not part of any survey data collection efforts at this time. In particular, the administrative data from the National Directory of New Hires (NDNH) are crucial for measuring employment and earnings outcomes for this group of participants.8

The data collection procedures for the Intermediate Follow-up Survey (Instrument 18) will be identical to those approved for use in the Short-term Follow-up Survey (Instrument 12)—local interviewers will attempt to interview respondents first by telephone and then in-person, using computer assisted personal interviewing (CAPI) technology. Since the procedures are the same, the specific details of how the data collection will be done are not repeated here. Please refer to the second revision to OMB Control No. 0970-0462, approved June 8, 2018, for a full description of the survey procedures. The third revised submission, approved in July 2019, focused on the procedures for the phone-based Skills Assessment Pilot because they were not covered by earlier OMB approvals.

As noted in Supporting Statement A and the Non-substantive Change Request memo, ACF is considering fielding a new survey of HPOG 2.0 study participants enrolled after the onset of the COVID-19 pandemic and through the end of the HPOG 2.0 grant period. That instrument is still in the design stage. ACF will submit the instrument and supporting materials for full OMB review and approval, with an associated public comment period, in a subsequent revision. However, it is important that the evaluation team notify the selected participants that they may be chosen to be part of this survey and begin updating their contact information. This request seeks minor modifications to Instrument 5a, the Welcome to the Study Packet and Contact Update Form, and a minor increase in the number of respondents for both Instruments 5a and 5b (the Participant Contact Update Letter and Form), as described in Supporting Statement A, Section A12.

The purpose of the phone-based Skills Assessment Pilot is to narrow a set of 45 potential survey questions intended to assess literacy and numeracy skills down to a set that can be used in a short module within the Intermediate Follow-up Survey. Because the follow-up survey can be conducted either by phone or in person, administration of the assessment module has to “work” in either mode. There is a long history of successful skills assessments for in-person data collection, but very little history of skills assessment administration over the phone. For this reason, all of the pilot assessments were conducted by phone.

The phone interviewers were drawn from the same staff of local interviewers used for the Short-term Follow-up Survey. This ensured that the interviewers were fully trained on the HPOG 2.0 program and the goals of the evaluation, and had experience working with the HPOG 2.0 participant population. Evaluation site team liaisons worked with the grantees to identify a pool of HPOG 2.0 participants who wanted to volunteer to complete the skills assessment pilot. Once identified, interviewers reached out to volunteer participants to explain more about the pilot, obtain their consent, conduct the pilot, and capture respondent feedback on the process. The evaluation team identified about 400 volunteers with the help of grantees, and completed 300 interviews. The approach to data collection did not include a specific response rate target; rather, the plans for this pilot were based on a target number of completed interviews. No estimates about skill levels for any population will be published based on this pilot.9 Furthermore, the goals for the pilot did not include a demonstration of what response rate is achievable—only an assessment of whether it is possible to conduct a brief skills assessment by phone. The evaluation team expected that, since the sample consisted of volunteers recruited by grantees, they would be easier to locate and still interested in participating. Assessment interviews were completed using CAPI technology. Interviewers were encouraged to quickly close out cases that were difficult to contact and move on to the next case in order to expediently complete the assessment pilot. Ultimately, they reached the target number of completes. The final assessment module, based on analysis of the pilot test results, is reflected in the revised version of Instrument 18, Section J, included with this non-substantive change request.

HPOG 2.0 National Evaluation Cost-Benefit Analysis Study Data Collection Procedures

The Program Cost Survey (Instrument 20) was administered to staff at all 27 non-tribal grantees to capture cost data for each of the 38 HPOG 2.0 programs. The survey captured data on costs for staff, overhead, direct provision of training, and provision of support services. The evaluation team asked grant managers from each of the 38 HPOG 2.0 programs to determine which staff members were the most knowledgeable about cost information. Selected staff members attended an informational webinar to introduce the cost-benefit analysis (CBA), learn about the concepts used in the survey, and ask preliminary questions.10 Upon request, CBA staff called individual programs to discuss any questions before the survey. Such guidance can improve accuracy because each program has its own structure and service offerings and so may need specific information on different survey components. Program staff completed the survey using web-based software. The evaluation team reviewed the submitted documents and followed up on missing data items as needed.

Tribal Evaluation Implementation Study Data Collection Procedures

The original data collection procedures for the Tribal Evaluation data collection instruments are described in revision number 1 of OMB Control Number 0970-0462, previously approved in June 2017. The COVID-19 outbreak has required the Tribal Evaluation team to rethink the procedures for conducting these interviews now that face-to-face interviewing is not a viable option. In June 2020, OMB approved a request to revise the previously approved data collection procedures to allow for remote data collection as needed. The previously approved procedures are restated here, with updates incorporated.

The sample frame for the HPOG 2.0 Tribal Evaluation includes all five tribal grantees. No statistical methods will be used for stratification and sample selection. The HPOG 2.0 Tribal Evaluation exclusively uses purposive sampling since it is a descriptive study. The tribal evaluation team will use multiple sources of data for the process and outcome evaluation, primarily centered on annual data collection which will include semi-structured interviews with grantee and partner administrative staff, program implementation staff, and local employers; focus groups and follow-up interviews with program participants, including program completers and non-completers; and program operations data collected through PAGES.

Annual Data Collection

Data collection will take place at each of the five tribal grantees and their program sites on an annual basis. The tribal evaluation team will conduct four in-person site visits at each grantee in Years 2-5 of the evaluation. If data collection cannot occur in person, interviews and focus groups will be conducted remotely, by telephone or by virtual meeting. The tribal evaluation team will discuss logistics for the site visit (e.g., scheduling, travel, where to host focus groups and interviews) with each grantee. In addition, the team will work closely with each grantee to recruit respondents.

        1. Protocols

The evaluation team has developed semi-structured interview and focus group protocols for the collection of qualitative data during the initial evaluation site visits and follow-up site visits (Instruments 6-11). All protocols will begin with a brief introductory script that summarizes the overall evaluation, the focus of each interview, how respondent privacy will be protected, and how data will be aggregated. The evaluation team will obtain written informed consent in-person prior to participant focus groups (Attachment I_Focus Group Informed Consent Form) and verbal informed consent from interview participants (Attachment J_Interview Verbal Informed Consent Form). If interviews and focus groups are conducted remotely, verbal informed consent will be obtained for both interview and focus group participants (Attachment I_Focus Group Informed Consent Form_Remote and Attachment J_Interview Verbal Informed Consent Form_Remote). The senior member of the evaluation team will be responsible for seeking consent from participants. The tribal evaluation team will collect data from the interviews and focus groups via extensive detailed notes. If interviews and focus groups are conducted remotely, they will be recorded. The remote versions of the consent forms ask for participants' permission to record.

Grantees, partners, and employers

The evaluation team will conduct in-person or remote interviews on an annual basis to gain insight from grantee and partner administrative staff on high-level program strategies, program development, and lessons learned (Instrument 6). Partners may include public and private health care employers, education and training organizations, community-based organizations, labor organizations, and national, state, or local foundations that provide assistance to American Indians and Alaska Natives (AI/ANs). The team anticipates that interviews will focus on the overall strategic approach of the program and the processes used to develop the program curricula; any evidence behind the structure of the program and special considerations and modifications for tribal populations; and program modifications, overall challenges and successes, and lessons learned, which, at a high level, will inform the evaluation team’s assessment of the success of the program as a whole. The team anticipates that the total number of grantee and partner administrative staff will be limited (3-7) for each grantee program (total of 15-35 across all Tribal HPOG grantees annually).

The evaluation team will conduct in-person or remote interviews with staff responsible for coordinating and implementing the program at each site (Instrument 7). These individuals may include, but are not limited to, program instructors, recruitment and orientation staff, and providers of supportive services. These interviews will focus on program processes, including recruitment, orientation, and program implementation, as well as program modifications, overall challenges and successes in implementation, and lessons learned, which, at a high level, will inform the evaluation team’s assessment of the success of the program as a whole. Depending on the structure of the program, multiple staff may be interviewed at each implementation site. The evaluation team anticipates that the number of implementation staff will vary by grantee program and expects to interview between 3 and 10 staff per grantee (for a total of 15-50 across all Tribal HPOG grantees annually).

The tribal evaluation team will work with grantee sites to identify potential employers in the region and conduct interviews with appropriate staff at these facilities. These interviews will be used to assess employers’ general impressions of program graduates, their degree of awareness of the program as a whole, and their views on the extent to which programs are contributing to a workforce equipped to meet the current health care needs of AI/AN communities (Instrument 8). The evaluation team plans to target 3-6 employers per grantee (for a total of 15-30 across all Tribal HPOG grantees annually); however, this total may vary based on the number of employers located and identified by grantees.

Participants

The evaluation team will conduct in-person or remote focus groups with program participants during annual data collection (Instrument 9). The focus groups will gather program participants’ perceptions of the following key evaluation topics: program design and curriculum; supportive services; recruitment and orientation; quality of instruction; participant educational attainment; and satisfaction with the HPOG program. To prepare for the focus groups, the team will discuss with each grantee the best and most culturally appropriate recruitment techniques, as well as whether the grantee would prefer evaluation team staff to contact potential participants or would prefer to contact participants itself. If the grantee provides a list of student participants, the tribal evaluation team will reach out to the potential participants using recruitment letters and follow-up phone calls if necessary. Should the grantee prefer to have potential participants contacted by program staff, the team will provide recruitment materials to facilitate outreach activities. In addition, the team will further consult with each grantee about how best to conduct the focus group in a culturally competent manner that stimulates discussion and full participation (e.g., allowing opportunities for self-reflection and privacy in composing responses to questions). The team anticipates 5-9 participants in each focus group and 1-3 focus groups per grantee, depending on the number of locations where students are enrolled at each grantee (total of 25-135 focus group participants across all Tribal HPOG grantees annually). Once the evaluation team has obtained consent, the facilitator (a member of the tribal evaluation team) will introduce participants to the overall purpose and structure of the gathering. In addition, the facilitator will re-emphasize to the assembled group that their comments will be aggregated in the site visit summaries and reports and not directly attributed to them. The tribal evaluation team will collect data from the focus groups via extensive detailed notes. First names of focus group participants will be collected upon registration and will be used for note-taking purposes. If focus groups are conducted remotely, they will be recorded if all participants give their permission for recording.

In order to obtain information on key program outcomes related to educational attainment and employment, the evaluation team will conduct in-person or remote interviews with participants who have completed a training program (Instrument 10). The purpose of these interviews is to assess the current employment status of the participants who have completed training programs and to capture their voices and perspectives on whether the program adequately prepared them for employment in the health sector and, if applicable, to serve AI/AN communities. The team anticipates that some participants in the annual focus groups will also have participated in the interviews in prior years; others may have had no previous contact with the tribal evaluation team. The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period and conduct interviews during annual data collection. The team anticipates conducting interviews with approximately 20 students per grantee annually (total of 100 across all Tribal HPOG grantees annually).

In order to understand factors that led to non-completion, the evaluation team will conduct in-person or remote interviews with participants who did not complete a training program through HPOG (Instrument 11). The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period and conduct interviews during annual data collection. The team anticipates conducting interviews with approximately 10 students per grantee annually (total of 50 across all Tribal HPOG grantees annually). Topics to be addressed in the interview include: reasons for leaving the program; challenges experienced; elements of the program that were effective or ineffective; identification of any short-term outcomes resulting from program participation; how the program could be improved; and whether the non-completer plans to re-enroll in the program or pursue an alternative course of study.

        1. HPOG Program Performance Report Based On Grantee-Level and Ongoing Participant-Level Data

The data collection procedures for the previously approved grantee-level and ongoing participant-level data collection conducted under the PAGES system are described in the original submission of OMB Control Number 0970-0462, previously approved in June 2017.

        1. Procedures with Special Populations

The study documents—including the recruitment materials, advance letters, and flyers developed for the National Evaluation participant level data collection efforts—were designed at an 8th-grade readability level. This ensures that the materials can be understood by most study participants. The Intermediate Follow-up Survey will be administered in both English and Spanish.

The procedures used to ensure that special populations can understand the various instruments that were previously approved are described in the information collection requests approved in June 2017 and June 2018.

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

This non-substantive change request builds upon the previously approved methods to maximize response rates and deal with nonresponse. This section first describes the planned methods to maximize response rates for the Intermediate Follow-up Survey. It then describes two additional methods intended to maximize response rates and address nonresponse, both of which were approved as part of the non-substantive change request in June 2020. The section then describes the procedures used to maximize response rates for the descriptive study participant interviews (Instrument 17), previously approved under the third revised submission in July 2019.

        1. National Evaluation impact study Intermediate Follow-up Survey

The methods used for the Intermediate Follow-up Survey will be nearly identical to those approved for use in the Short-term Follow-up Survey. Specifically, the evaluation team will use the following methods to maximize response to the Intermediate Follow-up Survey effort:

  • Participant contact updates and locating;

  • Incentives;

  • A shortened version of the Intermediate Follow-up Survey; and

  • Sample control during the data collection period.

(See the second revision to OMB Control No. 0970-0462, approved in June 2018, for more details on these methods.) Using those same procedures and the new, shorter version of the Intermediate Follow-up Survey (Instrument 18a), the evaluation team anticipates achieving about a 75 percent response rate for the Intermediate Follow-up Survey overall, and closer to the targeted 80 percent response rate for the items included in the shortened version of the Intermediate Follow-up Survey.

          1. Participant Contact Updates, Welcome to the Study Packet, Advance Letters, and Email Reminder Text

The HPOG 2.0 National Evaluation impact evaluation team will continue participant contact update efforts (previously approved in June 2017) between the Short-term and Intermediate Follow-up Survey efforts only for those participants who will be part of the Intermediate Follow-up Survey data collection. The evaluation team intends to include in the target sample all study participants within the enrollment cohorts selected for the Intermediate Follow-up Survey, regardless of whether or not they responded to the Short-term Follow-up Survey.11 This is consistent with how the evaluation team has handled non-respondents on similar Career Pathways studies (PACE, OMB Control No. 0970-0397, and HPOG 1.0 Impact, OMB Control No. 0970-0394).

ACF reviewed the supporting materials for the Intermediate Survey, specifically Attachment P (the HPOG 2.0 Intermediate Survey Advance Letter) and Attachment S (email reminder text), to determine whether changes should be made to those materials to help improve respondent cooperation. The proposed revisions to the advance letter are shown in Attachment P_HPOG 2.0 Intermediate Survey Advance Letter_REV and are intended to streamline the letter. A more streamlined advance letter will give interviewers a better opportunity to explain the study to respondents and address questions during the introductory interaction. The revised advance letter will alert participants to the study so they are more receptive to interviewer outreach, without providing so much detail that the survey may appear overwhelming.

The proposed email reminder text revisions are found in Attachment S_HPOG 2.0 Intermediate Survey Email Reminder Text_REV. These revisions streamline the language and emphasize the ability to complete the survey by telephone. The revisions also offer greater flexibility in how the text can be used: interviewers can send it as a direct email to study participants, or the survey director can send it as a letter to all survey non-responders. OMB approved the proposed revisions to Attachments P and S in June 2020.

ACF is requesting approval of minor modifications to the Welcome to the Study Packet (Instrument 5a). These changes update the size of the impact study survey sample to include those selected for the COVID-19 Cohort Survey. The instrument also reminds the participant to update their contact information using the attached contact update form. Because more than one month may have elapsed between enrollment and receipt of the Welcome to the Study packet, we replaced “Last month, you applied...” with “Recently, you applied…”

          2. Incentives

The evaluation team offered an incentive valued at $25 to each participant who responded to the phone-based Skills Assessment Pilot. The incentive was a way to thank participants for their help in ensuring that the assessment instrument was feasible to administer by phone and in identifying which items were most useful in assessing literacy and numeracy skills. The incentive also helped to offset any costs incurred as a result of participation, such as cell phone minutes or child care costs. The approved incentive for this pilot was smaller than the incentives for some other instruments in the HPOG 2.0 impact evaluation because of both the lower burden on respondents and the fact that this was a single administration; that is, the skills assessment pilot data collection will not be repeated with respondents at a later date.

OMB previously approved the evaluation team’s plan to offer an incentive for completion of the Intermediate Follow-up Survey. Respondents will receive a $45 gift certificate. The following factors helped determine the amount of the incentive: the target response rate of 80 percent, the projected 60-minute length of the survey, the smaller sample size (only 5,000 of the 13,000 selected for the Short-term Follow-up Survey), and the duration of the follow-up period. The team also took into account the incentive amounts approved for previous rounds of data collection on OPRE’s prior Career Pathways studies (PACE and HPOG 1.0 Impact, OMB control numbers 0970-0397 and 0970-0394, respectively) to ensure that the planned amount is comparable. As with the contact update forms and the Short-term Follow-up Survey, respondents will receive an email with customized instructions showing them how to log in to a secure study portal where they can redeem a $45 gift card from their choice of approved vendors.

Without an incentive of this magnitude, the impact evaluation study is unlikely to meet the quality targets defined by OMB and the Information Quality Act12 (see Supporting Statement A, Section A9 for more information).

Incentives at one or more phases of data collection have been used successfully in a number of similar federally sponsored surveys, such as PACE (OMB control number 0970-0397) and the HPOG 1.0 Impact Study (OMB control number 0970-0394). These two studies are similar in nature to HPOG 2.0 both programmatically and in terms of respondent characteristics. We cite these two previously approved studies not to justify the use of incentives, but rather to justify our choice of the proposed incentive amount. The planned incentive amount is comparable to what was offered for the follow-up survey efforts in both of those studies.

          3. Reduced Survey Instrument: Critical Items Only (Instrument 18a)

In June 2020, OMB approved a new version of Instrument 18 (Instrument 18a) that is shorter to administer and includes only the items most critical to the study. OMB also approved administering this “critical items” instrument to reluctant respondents as a way to test the viability of this approach for maximizing response rates. Doing so will help maximize the overall completion rate and minimize nonresponse for the key outcomes of interest. The information collected will also be used to improve imputation procedures for outcomes not collected in the shortened version of the survey.
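
To illustrate how the critical items could support such imputation, the following minimal sketch in Python uses a simple nearest-donor hot deck in which a short-form respondent borrows a long-form-only outcome from the most similar full-survey respondent. The variable names, the toy data, and the donor rule are illustrative assumptions only; they do not represent the evaluation team’s planned imputation procedures.

    import pandas as pd

    # Respondents who completed the full Intermediate Follow-up Survey.
    full = pd.DataFrame({
        "employed": [1, 0, 1, 1],
        "earnings": [2100, 0, 1800, 2600],
        "hours_worked": [40, 0, 32, 45],   # outcome collected only in the full survey
    })
    # Respondents who completed only the critical-items version (Instrument 18a).
    short = pd.DataFrame({
        "employed": [1, 0],
        "earnings": [2000, 0],
    })

    def nearest_donor(row, donors):
        """Borrow the long-form-only outcome from the full-survey respondent
        whose critical items are closest to this short-form respondent."""
        distance = ((donors["employed"] - row["employed"]).abs()
                    + (donors["earnings"] - row["earnings"]).abs() / 1000)
        return donors.loc[distance.idxmin(), "hours_worked"]

    short["hours_worked_imputed"] = short.apply(nearest_donor, axis=1, donors=full)
    print(short)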

Instrument 18a can be completed in approximately 20 minutes and will be administered to individuals who refuse to respond to the full Intermediate Follow-up Survey. The shortened instrument will be offered as the last refusal conversion effort.

          4. Sample Control

Finally, the team does not rely solely on the contact updates or the use of incentives to maximize response rates and reduce nonresponse bias. The evaluation team will use the same sample control procedures for monitoring survey production that were approved for the Short-term Follow-up Survey: well-trained interviewers, clear disposition codes, Spanish-language options, and a variety of communication tools for interviewers. (See the ICR approved under OMB Control No. 0970-0462 in June 2018 for more details on these sample control procedures.)

        2. National Evaluation descriptive study participant interviews

The descriptive study participant interview data are not intended to be representative in a statistical sense: they will not be used to make statements about the prevalence of experiences in the full HPOG 2.0 population or the broader TANF-eligible population. However, it was important to secure participants from a wide range of programs, with a range of background characteristics, to capture as diverse a set of experiences with HPOG 2.0 as possible. Given the time required for an in-person interview, incentives for participation were a useful tool for maximizing response rates. Those who completed the in-depth participant interview received a non-cash honorarium valued at $40, delivered via email. Participants received an email with instructions to log in to a secure study portal where they redeemed the gift certificate with one of the approved vendors (see the redemption procedures described under the Intermediate Follow-up Survey). OMB previously approved similar use of incentives for the HPOG 2.0 Tribal Evaluation (participant focus groups and interviews) in June 2017 under this OMB Control Number (0970-0462) and for the Pathways for Advancing Careers and Education (PACE) study (OMB Control Number 0970-0397). Without the use of incentives, we are concerned that we would not reach the target number of completed interviews in each category, which could jeopardize the utility of the participant interview data.

        3. Nonresponse Bias Analysis and Nonresponse Weighting Adjustment

If interviewers achieve a response rate below 80 percent for the Intermediate Follow-up Survey, the research team will conduct a nonresponse bias analysis and, if necessary, create nonresponse weighting adjustments using the same protocols approved for the Short-term Follow-up Survey.
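
To make the weighting step concrete, the following minimal sketch in Python shows one common way a nonresponse weighting adjustment can be implemented, using a logistic-regression response-propensity model fit to simulated data. The covariates, the simulated data, and the normalization rule are illustrative assumptions only; they do not represent the protocols approved for the Short-term Follow-up Survey.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    frame = pd.DataFrame({
        "age": rng.integers(18, 60, n),
        "treatment": rng.integers(0, 2, n),
    })
    # Simulated response indicator: older and treatment-group members respond more often.
    true_propensity = 1 / (1 + np.exp(-(-1.0 + 0.03 * frame["age"] + 0.4 * frame["treatment"])))
    frame["responded"] = rng.binomial(1, true_propensity)

    # 1. Model the probability of response as a function of baseline covariates.
    X = sm.add_constant(frame[["age", "treatment"]])
    fit = sm.Logit(frame["responded"], X).fit(disp=False)
    frame["p_respond"] = fit.predict(X)

    # 2. Weight each respondent by the inverse of the predicted response propensity.
    respondents = frame[frame["responded"] == 1].copy()
    respondents["nr_weight"] = 1.0 / respondents["p_respond"]

    # 3. Normalize so the weights sum to the size of the full fielded sample.
    respondents["nr_weight"] *= len(frame) / respondents["nr_weight"].sum()
    print(respondents["nr_weight"].describe())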

The phone-based Skills Assessment Pilot effort is not subject to the nonresponse bias analysis described above. The effort targets a set number of completes, rather than the standard 80 percent response rate, with no restrictions on characteristics such as site or intervention group. As described in section B1 above, the target sample for this effort is composed of volunteers. If the target number of completed assessments is not reached with the first batch of volunteer sample, the evaluation team can work with grantees to obtain additional volunteers.

B.4 Tests of Procedures

This non-substantive change request does not require any tests of procedures. The remainder of this section discusses the tests of procedures already conducted for the instruments that were included in the third revised submission, approved in July 2019. The section first addresses the HPOG 2.0 National Evaluation descriptive evaluation, then the impact evaluation, followed by the cost-benefit analysis study. The section then provides references to the previously approved information collection requests under this OMB Control Number (0970-0462) should reviewers want to learn more about the tests of procedures for previously approved instruments.

        1. HPOG 2.0 National Evaluation Descriptive Evaluation

The evaluation team conducted pretest interviews with fewer than ten study participants to ensure that the participant interview guide was working as intended. The participant interview guide (Instrument 17) that is part of this information collection request reflects changes made in response to that pretest feedback.

The other new descriptive study instruments are similar in content and structure to the previously approved descriptive study instruments, so the evaluation team is relying on the pretests conducted for those original instruments. See the previously approved information collection request under this OMB Control Number, approved in 2017, for more on the tests of procedures for the other descriptive study instruments.

        2. HPOG 2.0 National Evaluation Impact Evaluation

This section discusses the two impact evaluation instruments that are the subject of this information collection request.

          1. Intermediate Follow-up Survey

In designing the Intermediate Follow-up Survey, the evaluation team included items used successfully in other national surveys, particularly the HPOG 2.0 Short-term Follow-up Survey (OMB Control No. 0970-0462) and the PACE and first-round HPOG impact follow-up surveys (OMB control numbers 0970-0397 and 0970-0394, respectively). Consequently, many of the survey questions have been thoroughly tested on large samples.

If time allows after OMB approval of this data collection, the instrument will be programmed prior to pretesting and a sample of 15 to 20 participants will be used to ensure that the length of the instrument is consistent with the burden estimate. Otherwise, the evaluator will pretest the survey with up to nine participants, using paper forms rather than CAPI. During internal pretesting, all instruments were closely examined and questions deemed unnecessary were removed to reduce respondent burden.

In June 2020, OMB approved a new, shorter version of the Intermediate Follow-up Survey (20 minutes as opposed to 55 minutes). It is intended to serve as a tool to maximize response rates by offering participants who would otherwise likely become final refusals the opportunity to complete a shorter version of the instrument. As noted above, other studies have used shorter instrument versions to help measure nonresponse bias, but there is little in the literature about the effects of a shorter version used as a refusal conversion technique to help maximize response for the key outcomes of interest.

          2. Phone-based Skills Assessment Pilot

The phone-based Skills Assessment Pilot is itself a test. The pilot was used to sort items by difficulty. Assuming that the pilot is successful, the intent is to make the assessment module for the intermediate survey adaptive. That is, the CAPI program being used to conduct the surveys will dynamically vary the set of items presented to the respondent based on prior responses. One possible plan is to have three item difficulty groups for the vocabulary items and three for the math items. The software might present four medium difficulty items and then follow up with a set of four easy or four hard items based on the respondent’s performance on the four medium difficulty items. The pilot will provide sufficient information to refine these broad plans. Critical information that will be provided by the pilot includes the time required for each item. Because the instrument also includes information about earned credentials and use of basic skills in everyday life, the evaluator will also be able to select the items that correlate better with these measures. The version of the Intermediate Follow-up Survey approved in July 2019 included all possible items developed for the pilot. Based on the findings from the pilot, the evaluation team identified a set of 11 literacy and 11 numeracy questions, with varying degrees of difficulty, for inclusion in the Intermediate Follow-up Survey. The other items have been dropped from the version of the Intermediate Follow-up Survey submitted with this request.
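
The following minimal sketch in Python illustrates the kind of two-stage adaptive routing described above, in which four medium-difficulty items are followed by four easy or four hard items depending on performance. The item pools, the routing cut point (at least 2 of 4 correct), and the scoring are illustrative assumptions only; they do not represent the CAPI program’s actual specification.

    from typing import Callable, List

    def administer(items: List[str], ask: Callable[[str], bool]) -> int:
        """Present each item and return the number answered correctly."""
        return sum(1 for item in items if ask(item))

    def adaptive_module(easy: List[str], medium: List[str], hard: List[str],
                        ask: Callable[[str], bool]) -> int:
        """Stage 1: four medium items. Stage 2: four easy or four hard items,
        chosen based on performance on the medium set (assumed cut point)."""
        first_stage = administer(medium[:4], ask)
        second_pool = hard if first_stage >= 2 else easy
        return first_stage + administer(second_pool[:4], ask)

    # Example: a respondent who answers every vocabulary item correctly is
    # routed to the hard follow-up set and scores 8 of 8.
    vocab_easy = ["easy_%d" % i for i in range(4)]
    vocab_medium = ["medium_%d" % i for i in range(4)]
    vocab_hard = ["hard_%d" % i for i in range(4)]
    print(adaptive_module(vocab_easy, vocab_medium, vocab_hard, lambda item: True))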

        3. HPOG 2.0 National Evaluation Cost-Benefit Analysis Study

The cost-benefit analysis study requires detailed information from grantees and stakeholders. It is possible that multiple people at each organization will need to provide the information. The evaluation team reached out to one individual at five grantee programs to ask them to review the data collection items of interest to the evaluation team and provide an assessment of the feasibility of collecting that information and the level of effort required to do so. The evaluation team reviewed the feedback from grantees and adjusted protocols accordingly. The instruments in this information collection request reflect the changes that resulted from that feedback.

        4. HPOG 2.0 Tribal Evaluation

See the previously approved information collection request under this OMB Control Number, approved in 2017, for more on the tests of procedures for the Tribal Evaluation.

        5. PAGES

See the previously approved information collection request under this OMB Control Number, approved in 2015, for more on the tests of procedures for the PAGES system.

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

With ACF oversight, Abt and its partners MEF Associates, the Urban Institute and Insight Policy Research are responsible for conducting the HPOG 2.0 National Evaluation. This team has drafted an Impact Evaluation Design Plan with considerable detail on planned analytic procedures. It will be published in 2019. Prior to analyses, an even more detailed analysis plan will be prepared and published.

The individuals listed in Exhibit B-3 below contributed to this information collection request.


Exhibit B-3: Contributors

Name                Role in HPOG 2.0 National and Tribal Evaluation      Organization/Affiliation
Gretchen Locke      National Evaluation Project Director                 Abt Associates
Jacob Klerman       National Evaluation Co-Principal Investigator        Abt Associates
Bob Konrad          National Evaluation Co-Principal Investigator        Abt Associates
Robin Koralek       National Evaluation Deputy Project Director          Abt Associates
Larry Buron         National Evaluation Project Quality Advisor          Abt Associates
David Judkins       National Evaluation Director of Impact Analysis      Abt Associates
Debi McInnis        National Evaluation Site Coordinator                 Abt Associates


Inquiries regarding the statistical aspects of the HPOG 2.0 National Evaluation design should be directed to:

Gretchen Locke, Project Director

Abt Associates

10 Fawcett Street, Suite 5

Cambridge, MA 02138

(617) 349-2373


The following HHS staff—including the HHS project officers Hilary Bruck, Nicole Constance, and Amelia Popham—have overseen the design process and can be contacted at:

Hilary Bruck

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

330 C Street S.W., 4th Floor, Washington, D.C. 20201

(202) 619-1790


Nicole Constance

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

330 C Street S.W., 4th Floor, Washington, D.C. 20201

(202) 401-7260


Amelia Popham

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

330 C Street S.W., 4th Floor, Washington, D.C. 20201

(202) 401-5322

1 The two descriptive study instruments associated with the second round of telephone interviews (Instruments 13 and 14) and the program cost survey (Instrument 20) for the cost-benefit analysis study do not require any sampling.

2 Systems study data collection is complete, but analysis is still underway.

3 Where there were insufficient participants who had dropped out of the training program within the last six months, we extended the time period to 12 months since dropping out of the program.

4 We expected to complete interviews with 10 of the 15 participants selected – a 67% response rate. While we had expected to adjust the recruitment strategy to account for differences in response rates by site, we found that the response rate varied considerably from site to site: some sites had only 8 completed interviews out of the 15 scheduled, while others had 13 or 14. As a result, it was hard to predict whether we would reach the full sample of 140 interviews until the last 3 site visits. We decided to complete interviews at these last site visits with all respondents who showed up; these sites had relatively high completion rates. As a result, we ended up completing 153 interviews, an average response rate of 72% across all sites.



5 The evaluation team will prepare a short methods report on the pilot assessment study that might be published as a white paper or serve as the basis for a journal paper—explaining the process followed to develop the short skills pilot and incorporate it into the Intermediate Follow-up Survey. The results will not be analyzed as part of the impact study findings.

6 The draft Intermediate Follow-up Survey included all of the items from the pilot assessment, to ensure that we had OMB approval for each item. Based on the findings from the pilot, we retained the questions that best meet the needs of the assessment—11 numeracy and 11 literacy questions, with varying degrees of difficulty.

7 In the event that fewer than 300 volunteers respond to the initial assessment pilot outreach effort, the team will reach out to grantees to identify additional volunteers. Since the sample is based on volunteers, we do not expect a second recruitment effort will be necessary.

8 The Office of Child Support Enforcement (OCSE), which maintains the NDNH data, typically requires written consent to allow the evaluation contractor to match study participant identifiers to the NDNH data. OCSE has agreed to amend the study’s Memorandum of Understanding to temporarily allow the collection of NDNH data with verbal consent.

9 If there are any papers published based on the pilot, they would only concern the psychometric properties of the assessment.

10 Four HPOG 2.0 grantees provided the evaluation team with feedback on an earlier draft of the instrument. The team introduced the CBA and the program cost survey at the annual meeting for HPOG grantees in August 2018.

11 The only exceptions will be those who were confirmed deceased or asked to withdraw from future data collection efforts.

