Attachment AF: HPOG 2.0 Previously Approved Sample Selection for the STS and ITS


OPRE Evaluation - National and Tribal Evaluation of the 2nd Generation of the Health Profession Opportunity Grants [descriptive evaluation, impact evaluation, cost-benefit analysis study, pilot study]


OMB: 0970-0462




B.1 Respondent Universe and Sampling Methods: HPOG 2.0 Short-Term Follow-Up Survey (Approved June 2018)

Thirty-two HPOG 2.0 grants were awarded to government agencies, community-based organizations, post-secondary educational institutions, and tribal-affiliated organizations in September 2015. Of these, 27 were awarded to non-tribal entities and five were awarded to tribal organizations.

All 32 grantees will participate in this federally sponsored evaluation. No statistical sampling is required for the HPOG 2.0 National Evaluation descriptive evaluation or the HPOG 2.0 Tribal Evaluation; evaluators will work closely with grantees to identify participants in the respective studies. Under the National Evaluation impact evaluation, the evaluators will select up to 13,000 study participants, beginning with the cohort enrolled in March 2017, for inclusion in the follow-up survey sample. Rather than randomly sampling from everyone randomized, the surveys will sample only those randomized in a narrow time period. To ensure that programs have worked through initial implementation issues and that survey results are available as soon as possible, the Short-Term Follow-up Survey will sample all of the projected 13,000 people randomized from March 2017 through March 2018 (i.e., 13 monthly cohorts of approximately 1,000 cases per month).

The evaluators waited to begin survey sample selection until the March 2017 cohort in order to maximize efficiency for the survey data collection effort (i.e., to lower survey costs relative to taking a true random sample of everyone randomized) and to allow all programs time to complete start-up activities and reach steady-state operations. Allowing time for all programs to mature helps to alleviate some of the challenges typically associated with early enrollment cohorts in random assignment studies, such as very small monthly enrollment cohorts or grantees modifying eligibility criteria or intake processes. Compressing the length of the field period was the most efficient way to ensure that evaluators could meet the survey sample size requirements within the available resources.

The evaluators will rely on baseline equivalency testing to determine whether there are significant differences in participant characteristics between those enrolled before March 2017 and those enrolled after. The evaluators will also use post-randomization administrative data from the National Student Clearinghouse (NSC) and the National Directory of New Hires (NDNH) to determine whether impacts on college persistence and earnings vary by period. If noteworthy differences by enrollment period are discovered, appropriate caveats will be added to impact findings based on survey outcomes.
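To illustrate the kind of baseline equivalency testing described above, the sketch below compares baseline characteristics for participants enrolled before and after March 2017. It is a minimal illustration only: the file name, variable names, and choice of characteristics are assumptions for the example, not the evaluation team's actual PAGES extract or analysis code.

```python
# Minimal sketch of baseline equivalency testing across enrollment periods.
# Assumes a hypothetical participant-level extract of PAGES baseline data with an
# indicator for enrollment after March 2017; all names here are illustrative.
import pandas as pd
from scipy import stats

df = pd.read_csv("pages_baseline_extract.csv")          # hypothetical file
early = df[df["enrolled_after_mar2017"] == 0]
later = df[df["enrolled_after_mar2017"] == 1]

# Continuous characteristics: two-sample t-tests (unequal variances).
for col in ["age", "prior_quarter_earnings", "years_of_education"]:
    t, p = stats.ttest_ind(early[col].dropna(), later[col].dropna(), equal_var=False)
    print(f"{col}: early mean={early[col].mean():.2f}, "
          f"later mean={later[col].mean():.2f}, p={p:.3f}")

# Categorical characteristics: chi-square test of independence.
counts = pd.crosstab(df["enrolled_after_mar2017"], df["female"])
chi2, p, dof, expected = stats.chi2_contingency(counts)
print(f"female: chi-square p={p:.3f}")
```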

Study participants will receive contact update requests every three months leading up to the Short-Term Follow-up Survey. Once the Short-Term Follow-up Survey data collection period ends, the contact update requests will resume in preparation for the Intermediate Follow-up Survey (to be conducted 36 months after randomization, under a later OMB information collection request).

All five tribal grantees will participate in the federally sponsored HPOG 2.0 Tribal Evaluation. For the HPOG 2.0 Tribal Evaluation, there are two major respondent universes: (1) Tribal HPOG 2.0 grantees, partners, and employers; and (2) Tribal HPOG participants, including program completers and non-completers. Exhibit B-1 presents the sampling methods and target response rates for each of the HPOG 2.0 National and Tribal Evaluation respondent subgroups. The respondent subgroup and instrument (the Short-Term Follow-up Survey) shown in bold font are the subjects of this information collection request. All other instruments and their corresponding subgroups were previously approved under this OMB control number.

Exhibit B-1: HPOG 2.0 National and Tribal Evaluation Respondents

Respondent Universe

Respondent Subgroup

Sampling Methods and Target Response Rates [1]

Data Collection Strategies

National HPOG 2.0 Evaluation


Grantees, partners, and employers

Grantees

Evaluation team members review the topics of interest with grantees using the HPOG 2.0 Screening Interview to identify appropriate respondent(s) based on who is most knowledgeable about the topics of interest. (See Instrument 2).

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. Therefore, the team expects a 100 percent response rate.

Semi-structured telephone interviews

(Instruments 2, 3 and 4)


Managers and staff

A very high response rate (at least 80 percent) is expected among grantee managers and staff.

Semi-structured in-person interviews

(Instruments 2, 3 and 4)


Partners

A very high response rate (at least 80 percent) is expected among grantee partners.

Semi-structured in-person interviews

(Instruments 2, 3 and 4)


Employers

A very high response rate (at least 80 percent) is expected among employers.

Semi-structured in-person interviews (Instruments 2, 3 and 4)

Impact evaluation participants selected for Short-Term Follow-up Survey sample

A sample of participants (up to 13,000) beginning with those enrolled in March 2017

Up to 13,000 study participants, beginning with those enrolled in March 2017, will be part of the participant contact update efforts.

The team expects that 35 percent of the respondents will respond to each quarterly participant contact update effort. [2]

Contact updates by mail, online portal, or telephone

(Instruments 5a and 5b)

Impact evaluation participants selected for Short-Term Follow-up Survey sample

A sample of participants (up to 13,000) beginning with those enrolled in March 2017

Up to 13,000 study participants, beginning with those enrolled in March 2017, will be part of the Short-Term Follow-up Survey.

The team expects that 80 percent of the participants selected will complete this survey effort, resulting in 10,400 completes.

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(Instrument 12)

Tribal HPOG 2.0 Evaluation

Grantees, partners, and employers

Grantees

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. Therefore, the team expects a 100 percent response rate.

Semi-structured in-person interviews

(Instruments 6 and 7)


Management and Staff

A very high response rate (at least 80 percent) is expected among grantee staff.

Semi-structured in-person interviews

(Instruments 6 and 7)


Partners

Partners have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. Therefore, the team expects a 100 percent response rate.

Semi-structured in-person interviews

(Instruments 6 and 7)


Employers

A very high response rate (at least 80 percent) is expected among HPOG employers.

Semi-structured in-person interviews

(Instrument 8)

Participants

Program participants (current)

The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team expects a 25-50 percent response rate from current program participants.

In-person focus groups

(Instrument 9)


Program completers

The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team expects a 25-50 percent response rate from program completers.

Semi-structured in-person interviews

(Instrument 10)


Program non-completers

The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team expects a 10-25 percent response rate from program non-completers.

Semi-structured in-person interviews

(Instrument 11)

HPOG National and Tribal Evaluation Participant Accomplishment and Grantee Evaluation System (PAGES)

Participants

National Evaluation (Non-Tribal) HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Baseline and ongoing participant level data


Tribal HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Baseline and ongoing participant level data



PAGES includes the applicant population of the 32 organizations that received HPOG funding. As discussed, the system provides data at the grantee and individual levels. Thus, data have been and will continue to be collected from the 32 grantees on their program designs and offerings, from all eligible applicants on their baseline characteristics, and from all of the individuals the grantees serve on their individual participation and outcomes.

Approximately 44,163 individuals are expected to complete the baseline data collection across the 32 grantees during the HPOG 2.0 grant period. The grantees under the National and Tribal evaluations will enroll participants over a four-and-a-half-year period. [3] The National Evaluation team expects the impact evaluation sample to include up to 40,000 individuals who apply to participate in the HPOG programs operated by the 27 non-tribal HPOG 2.0 grantees (13,333 controls and 26,667 treatments). The Tribal Evaluation team expects the tribal grantees to enroll 2,663 participants. This represents an increase from our previous submission, as it now includes the full program enrollment; Supporting Statement A, Section A15 provides more detail on this increase. These projected enrollment numbers imply that approximately 9,400 more National Evaluation participants will complete the PAGES baseline intake form than was previously estimated. Approximately 1,500 participants from the first round of HPOG grants are expected to receive additional services under HPOG 2.0. Thus, the total National Evaluation sample is estimated at 41,500 participants. Further, it is anticipated that up to 2,663 individuals will apply to participate in the HPOG programs operated by the five HPOG 2.0 Tribal grantees over the life of the grant. No sampling techniques will be employed for PAGES data collection.
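The arithmetic behind these projections can be restated compactly. The short sketch below simply recombines the figures cited above (the roughly 2:1 treatment-to-control split is implied by the 26,667 and 13,333 figures); it introduces no new numbers.

```python
# Projected PAGES baseline completes, recombining the figures cited in the text.
non_tribal_randomized = 13_333 + 26_667   # controls + treatments (roughly 2:1) = 40,000
hpog1_carryover = 1_500                   # HPOG 1.0 participants receiving HPOG 2.0 services
national_total = non_tribal_randomized + hpog1_carryover   # 41,500
tribal_total = 2_663                      # expected enrollment across the five tribal grantees
overall_total = national_total + tribal_total

print(national_total, overall_total)      # 41500, 44163
assert overall_total == 44_163            # matches the projected baseline completes above
```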





B.1 Respondent Universe and Sampling Methods: HPOG 2.0 Intermediate Follow-Up Survey (Approved June 2020)

This non-substantive change request does not require any changes to the respondent universe or the sampling methods, but it does reflect changes to the response rate estimates for the Intermediate Follow-up Survey. Thirty-two HPOG 2.0 grants were awarded to government agencies, community-based organizations, post-secondary educational institutions, and tribal-affiliated organizations in September 2015. Of these, 27 were awarded to non-tribal entities and five were awarded to tribal organizations. The 27 non-tribal grantees operate 38 unique HPOG 2.0 programs. The instruments approved in July 2019 under the third revised submission concern only the 27 non-tribal grantees participating in the National Evaluation. Sampling procedures for the three instruments that support the National Evaluation descriptive evaluation are described below, followed by a discussion of the sampling procedures for the two National Evaluation impact evaluation instruments.

Descriptive evaluation. This section describes the sampling methods for the three information collection requests under the National Evaluation descriptive evaluation that involved complex sampling and/or analysis procedures: Program Operator Interview Guide, Partner Interview Guide, and Participant In-Depth Interview Guide.

Program Operator and Partner Organization Interview Guides. The systems study component of the descriptive evaluation included interviews with two respondent groups: Program Operators (Instrument 15) and Partner Organizations (Instrument 16). The evaluation team purposively selected 15 HPOG 2.0 programs (out of 38 programs) and 2 to 7 partner organizations from each selected program for inclusion in the HPOG 2.0 Systems Study. Selection focused on the programs' experiences and perspectives on the local service delivery system over the course of the HPOG grant—with the goal of identifying programs that vary in the types and intensity of systems activities that could influence how the system works, rather than exploring collaboration across all HPOG programs. Purposive sampling allowed for the exploration of a range of experiences and perspectives on activities and partnerships that may contribute to or hinder systems development and improvement. It also provided opportunities to understand variations in service delivery systems across HPOG. Because selected programs offer a range of types and intensity of systems activities, the research team expects to gain perspectives on both positive and negative experiences with conducting systems activities. [4]

As part of the selection process, the evaluation team reviewed PAGES data to identify the prevalence of training in various healthcare occupations (e.g., nursing assistant versus health care information technology). This allowed the evaluation team to better understand variation in networks of partners and experiences with those partners across types of training programs. During the process of selecting programs for the systems study the evaluation team took into consideration the degree to which selected programs overlapped with those selected for the previously approved focus area site visits and with other data collection activities to minimize burden on any one program.

Program Selection

The evaluation team drew on information collected during the first-round telephone interviews (previously approved in June 2017 under this OMB Control Number) and information available in other documents (such as grant applications, evaluation design documents, and the PAGES system) to inform program selection. To select programs, the evaluation team used a purposive selection strategy based on the types and intensity of systems activities under the local service delivery systems and HPOG 2.0, geographic area, lead organization type, whether or not the grantee was an HPOG 1.0 grantee/program operator, occupation(s) of training, new or enhanced programs, program enrollment, and target population, to ensure the sample included variation in experiences and perspectives across different types of programs. A total of 87 respondents participated in the systems study: 15 program operators (one per program for the 15 selected programs) and 72 partner organization staff across those same programs.

Partner Organization Selection

Purposive sampling was also used to select partner organizations. The strategy allowed the evaluation team to examine a range of experiences and perspectives on systems activities and partnerships. Partner organizations that did not engage at all in the HPOG program were excluded from the sample as respondents should have some knowledge of the program. The evaluation team used several sources of information to select partners.

  • First, for each selected program, the team used data from the First-Round Telephone Interviews to develop a list of partners and their involvement in the HPOG program operations.

  • Second, during the program operator interview, the team asked respondents to discuss partners that were highly involved and those that were less involved. Program operators were asked to recommend a mix of both highly and less involved partners for interviews.

Three to seven partners per program were selected based on program operators’ recommendations as to which partners represented different partner organization types (e.g., nonprofit organization, government agency, employer, and education and training provider) and were best suited to answer questions. For each program, the evaluation team created a matrix of partners that grouped partners by whether they were highly or less involved in HPOG operations and by organization type. The team selected a range of organization types, typically avoiding the same organization type as the program operator unless, in the program operator’s opinion, the partner had a useful perspective on systems activities. The evaluation team sought to include employers and employer representatives, such as industry associations, to ensure the study gathered perspectives on employer and industry engagement, an important component of the HPOG 2.0 Program.

Participant In-depth Interviews. The study team conducted in-depth interviews with 153 participants across 14 programs using the Participant Interview Guide (Instrument 17). Researchers first selected programs and then participants. Researchers used data from the first-round telephone interviews with programs to select 14 programs for inclusion in the participant interviews. In consultation with ACF, the evaluation team selected programs that represented a range of locations, program size and structure, grantee organizational types, and program characteristics. The purposive sampling strategy maximized variation in participant and program characteristics as much as possible. Interviewers traveled to conduct the interviews with selected participants over a four-day period. The interviews were conducted in a central area—at the program offices or another centrally located quiet place such as a local library or community center. If those were not feasible, interviews took place at the respondent’s home. The purposive sampling strategy took into account where program participants resided—to gauge how geographically dispersed they were and to ensure that participants’ locations were practical for conducting site visits. For example, some programs did not have sufficient participants located in a geographically central location to facilitate a successful data collection site visit.

Once the 14 programs were selected, the evaluator selected participants. The goal in sampling was to recruit roughly equal numbers of participants who had completed their training and who were still in the training program, as well as some who had dropped out before completing training. The evaluation team sought to select a similar number of participants to attempt to interview across the selected programs. Researchers reviewed the participant data available in PAGES to select an initial pool of 45 treatment group members in each program according to the following criteria:

  1. Participant Stage in the Training Program to ensure a mixture of participants who have successfully completed their training (approximately 40 percent), participants who are still in a training program (approximately 40 percent), and participants who have dropped out of a training program (approximately 20 percent).

  2. Demographic and Socio-Economic Characteristics to interview a sample representative of the demographic and socio-economic characteristics of that particular program’s participant population.

To select the 45 treatment group members, the evaluation team chose the most recent 25 participants who had successfully completed their training; 25 participants who were at least four months into their training program but had not yet completed it; and 12 participants who had dropped out of the training program within the last six months. Participants were selected randomly within each group. [5] From this selection of participants, the evaluation team examined the demographic and socio-economic characteristics of the group and selected participants to create a sample whose variation was similar to the demographic and socio-economic characteristics of the program’s overall participant population.

The evaluation team used that pool of 45 participants per program to select 15 participants in each program, using stratified sampling to ensure representation from each group of interest. Evaluation team members then attempted to recruit the selected participants for an interview. The expected overall response rate was 67 percent, which would result in 140 completed interviews across all selected programs (10 completed interviews at each of the 14 programs). [6] The response was slightly better than expected—72.8 percent—resulting in 153 completed interviews.
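The following sketch illustrates the two-stage selection described above: a per-program pool drawn by training stage, followed by a stratified draw of interviewees. It is illustrative only; the column names, the data frame, and the per-stage allocation of the 15 interview slots are assumptions, not the evaluation team's actual selection code.

```python
# Illustrative two-stage, stratified participant selection for the in-depth interviews.
import pandas as pd

# Per-stage pool targets described in the text (completers, current, dropouts).
STAGE_POOL_TARGETS = {"completed": 25, "in_training": 25, "dropped_out": 12}
# An illustrative split of the 15 interview slots, roughly 40/40/20 by stage.
STAGE_INTERVIEW_TARGETS = {"completed": 6, "in_training": 6, "dropped_out": 3}

def stratified_draw(df: pd.DataFrame, targets: dict, seed: int = 0) -> pd.DataFrame:
    """Randomly draw up to the target count from each training-stage stratum."""
    pieces = []
    for stage, n in targets.items():
        stratum = df[df["stage"] == stage]
        pieces.append(stratum.sample(n=min(n, len(stratum)), random_state=seed))
    return pd.concat(pieces)

# Hypothetical usage, given a data frame of one program's treatment group members:
# pool = stratified_draw(program_participants, STAGE_POOL_TARGETS)
# invitees = stratified_draw(pool, STAGE_INTERVIEW_TARGETS)   # 15 per program
```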

Impact evaluation. This section describes the sampling methods for the two information collection requests under the National Evaluation impact evaluation: the Intermediate Follow-up Survey and the Phone-based Skills Assessment Pilot.

Intermediate Follow-up Survey (Instrument 18). The evaluation team, in collaboration with ACF, selected 13,118 study participants—all of the participants enrolled between March 2017 and February 2018—for inclusion in the Short-term Follow-up Survey sample (previously approved under this OMB Control Number in June 2018). A subset—up to 5,000—of those participants, from a compact set of randomization cohorts, will be included in the Intermediate Follow-up Survey sample. The evaluation team completed the Short-term Follow-up Survey in November 2019. The evaluation team revised its response rate estimate for the Intermediate Follow-up Survey based on its Short-term Follow-up Survey experience. The evaluation team now estimates a 75.7 percent completion rate (3,785 completed interviews) for the full survey instrument. The evaluation team also estimates an additional 215 completes to the new version of the Intermediate Follow-up Survey (Instrument 18a), the critical-items-only version.

Several aspects of this sampling plan deserve attention: (1) How was the subsample size chosen?; (2) Why do we want to select a subsample of those interviewed in the Short-term Follow-up Survey?; and (3) Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? Each of these questions is answered below.

  1. How was the subsample size chosen? The subsample size of 5,000 was chosen because it allows reasonable power to detect national pooled impacts (a rough illustration of this power consideration follows this list). The much larger sample size for the Short-term Follow-up Survey was chosen because of the need to measure variation in program implementation from the student perspective and to measure variation in effects on education outcomes. These activities are not planned for the Intermediate Follow-up Survey.

  2. Why do we want to select a subsample of those selected for participation in the Short-term Follow-up Survey? We want to select a subsample of those selected for the Short-term Follow-up Survey for several reasons. First, selecting from those who participated in the Short-term Follow-up Survey will allow the construction of longer case histories, as we will have thirty-six months of employment and training history instead of just fifteen months. Second, it will reduce nonresponse and cost, because the continuous updating of contact information will provide the evaluation team with a more robust history of contact information over the 36-month follow-up period than would be available if a new sample were selected. Drawing from the Short-term Follow-up Survey sample also allows the evaluation team to build upon the rapport established with study participants during the follow-up period. Finally, using a subsample of the Short-term Follow-up Survey sample will allow more powerful adjustments for nonresponse to the Intermediate Follow-up Survey, since the Short-term Follow-up information can be used both to study the potential for nonresponse bias and to make adjustments in the event that evidence of nonresponse bias in unadjusted statistics is found. However, in the selected randomization cohorts we will attempt to interview all participants selected for the short-term follow-up as part of the Intermediate Follow-up Survey. That is, we will not exclude participants who were included in the Short-term Follow-up Survey sample but not interviewed.

  3. Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? The Short-term Follow-up Survey sample included participants enrolled over 12 monthly cohorts (March 2017 through February 2018). We want to select a compact set—or subset—of cohorts because of the substantial time and cost efficiencies associated with larger workloads for interviewers over a compressed field period. We plan to select four or five of the 12 monthly cohorts included in the Short-term Follow-up Survey for inclusion in the Intermediate Follow-up Survey data collection.
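As referenced in item 1, the sketch below gives a rough sense of the power associated with a subsample of 5,000. The parameters (a two-sided 5 percent test, 80 percent power, a binary outcome near 50 percent prevalence, and the study's roughly 2:1 treatment-to-control split) are illustrative assumptions, not the evaluation team's actual power calculations.

```python
# Rough minimum detectable effect (MDE) for a difference in proportions,
# under illustrative assumptions (alpha = 0.05 two-sided, 80 percent power,
# outcome prevalence near 0.5, 2:1 treatment-to-control allocation).
from scipy.stats import norm

def mde_proportion(n_total, treat_share=2/3, p=0.5, alpha=0.05, power=0.80):
    n_t = n_total * treat_share
    n_c = n_total * (1 - treat_share)
    se = (p * (1 - p) * (1 / n_t + 1 / n_c)) ** 0.5
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * se

print(f"MDE with 5,000 sample members: {mde_proportion(5_000):.3f}")   # roughly 4 percentage points
print(f"MDE with 4,000 completes:      {mde_proportion(4_000):.3f}")
```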

At the conclusion of the Short-Term Follow-up Survey, all study respondents were asked to update their contact information to aid in future data collection efforts. Study participants selected for the Intermediate Follow-up Survey continue to receive periodic contact update requests via the previously approved contact update form (Instrument 5b) every three months between the Short-Term and Intermediate Follow-up Survey efforts.

Phone-based Skills Assessment Pilot (Instrument 19). This assessment was a pilot study. Results from it will not be published as a formal part of the evaluation of HPOG 2.0. [7] Rather, the results from this effort were used to identify a narrow set of survey questions that were incorporated into a ten-minute module within the Intermediate Follow-up Survey. [8] Given the intended usage, the evaluation team attempted to identify a volunteer sample of 500 HPOG 2.0 participants randomized outside the window for the Short-term Follow-up Survey. The team recruited about 400 participant volunteers with the help of grantees and completed 300 pilot assessments. [9] Most grantees were asked to recruit and refer potential volunteers to the evaluation contractor. Ideal candidates were HPOG 2.0 study participants who met three key criteria:

  1. They were from cohorts that are not part of our short-term survey sample pool (enrolled prior to March 1, 2017 OR after May 31, 2018);

  2. They were nearly ready to start occupational classes or currently taking lower level occupational classes; and

  3. They had complete contact information (address, phone number, and email) in PAGES.

A sample of volunteers was adequate for the purpose of psychometric testing of the draft skills assessment. Thus, the pilot design targeted a particular number of completed interviews as opposed to a certain response rate. The evaluator estimated that 300 completed pilot assessments were needed in order to yield useful results on the reliability and validity of the items. The purpose of the pilot was to sort the relative difficulties of the assessment items. By having grantees recruit participants who met the above criteria and wanted to participate, the evaluation team was able to meet these objectives.
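To show the kind of classical item analysis such a pilot can support, the sketch below computes per-item difficulty (share answering correctly), corrected item-total discrimination, and Cronbach's alpha from a respondents-by-items score matrix. It is a generic illustration under the assumption of items scored 0/1; it is not the evaluation team's actual psychometric procedure, and the simulated data are purely for demonstration.

```python
# Classical item analysis sketch: difficulty, item-total discrimination, Cronbach's alpha.
import numpy as np

def item_analysis(scores: np.ndarray):
    """scores: 2-D array with rows = respondents and columns = items scored 0/1."""
    difficulty = scores.mean(axis=0)                  # share correct; higher = easier item
    total = scores.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]   # corrected item-total correlation
        for j in range(scores.shape[1])
    ])
    k = scores.shape[1]
    alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / total.var(ddof=1))
    return difficulty, discrimination, alpha

# Demonstration with simulated data: 300 respondents, 22 items of varying difficulty.
rng = np.random.default_rng(0)
sim = (rng.random((300, 22)) < np.linspace(0.2, 0.9, 22)).astype(int)
difficulty, discrimination, alpha = item_analysis(sim)
print(np.round(difficulty, 2), np.round(discrimination, 2), round(alpha, 2))
```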

Several national and international surveys have been developed to assess adult numeracy and literacy, but almost all of these rely on face-to-face interviewing (a mode too expensive for most OPRE evaluations) or online administration (a mode infeasible for many OPRE evaluations because computer access is more limited among low-income populations). Since most OPRE evaluations use a mix of methodologies, identifying a short battery of questions that could be administered by phone in about 10 minutes would offer four benefits: (1) it would be more cost effective than in-person or online administration; (2) it would be easily adaptable for in-person or online administration, reducing burden on administrators and respondents; (3) the short duration of the module would reduce burden on respondents, potentially increasing response rates or at least minimizing break-offs; and (4) it could be easily shared across other studies. The pilot assessment data collection was conducted in Fall 2019. The findings are reflected in Section A16 of Supporting Statement A.

Exhibit B-2 presents the sampling methods and target response rates for each of the HPOG 2.0 National and Tribal Evaluation respondent subgroups. The instruments for which data collection is complete are labeled as such in the exhibit.

Exhibit B-2: HPOG 2.0 National and Tribal Evaluation Respondents with Status Updates

Respondent Universe

Respondent Subgroup

Sampling Methods and Target Response Rates

Data Collection Strategies

National HPOG 2.0 Evaluation


Grantees, partners, and employers

Grantees

Evaluation team members review the topics of interest with grantees using the HPOG 2.0 Screening Interview to identify appropriate respondent(s) based on who is most knowledgeable about the topics of interest. (See Instrument 2).

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to both rounds of data collection.

Semi-structured telephone interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instruments 13-16) (COMPLETE)

Program Cost Survey (Instrument 20) (COMPLETE)



Managers and staff

All attempted interviews using the previously approved instruments were completed.

Semi-structured interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instruments 14-15) (COMPLETE)

Program Cost Survey (Instrument 20) (COMPLETE)


Partners

All attempted interviews using the previously approved instruments were completed.

Semi-structured interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instrument 16) (COMPLETE)


Employers

All attempted interviews using the previously approved instruments were completed.

Semi-structured interviews

(Previously approved Instruments 2, 3 and 4) (COMPLETE)

(Instrument 16) (COMPLETE)

Descriptive evaluation participants

Selected treatment group participants

A pool of 45 participants in each of 14 sites was identified to recruit for the participant interviews.

Up to 15 participants per site were recruited; the team achieved a better-than-expected response and completed interviews with 72.8 percent of those selected (153 in all).

Semi-structured participant interview guide administered in-person

(Instrument 17) (COMPLETE)

Impact evaluation participants selected for the Contact Update Sample

A sample of participants (treatment and control groups)

Up to 13,118 study participants, beginning with those enrolled in March 2017, will be part of the participant contact update efforts.

The team estimated that 35 percent of the respondents would respond to each quarterly participant contact update effort. The contact updates are ongoing; the current return rate is 24 percent.

Contact updates by mail, online portal, or telephone

(Previously approved Instruments 5a (COMPLETE) and 5b)

Impact evaluation participants selected for Short-term Follow-up Survey sample

A sample of participants (treatment and control groups)

13,087 study participants, beginning with those enrolled in March 2017, were part of the Short-term Follow-up Survey.

Data collection is complete; the evaluation team completed interviews with 74.2 percent of the sample (9,710 interviews in total).

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(Previously approved Instrument 12) (COMPLETE)

Impact evaluation participants selected for Intermediate Follow-up Survey sample

A sample of participants (treatment and control groups)

Up to 5,000 study participants, from select cohorts of participants randomized between March 2017 and February 2018, will be part of the Intermediate Follow-up Survey.

The team now estimates that 75.7 percent of the participants selected will complete the full survey instrument (3,785 completes), with an additional 215 completes to the critical-items-only version, for approximately 4,000 completes in total.

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(Instrument 18)

Impact evaluation participants selected for the phone-based Skills Assessment Pilot

Treatment group participant volunteers

Up to 500 participants were sought to volunteer for the phone-based Skills Assessment Pilot.

The team achieved its target of 300 completed interviews.

Telephone interviews conducted by local interviewers with CAPI technology

(Instrument 19) (COMPLETE)


Tribal HPOG 2.0 Evaluation

Grantees, partners, and employers

Grantees

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to date and expects a 100 percent response rate going forward.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instruments 6 and 7)


Management and Staff

A very high response rate (at least 80 percent) is expected among grantee staff. All attempted interviews using the previously approved instruments were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instruments 6 and 7)


Partners

Partners have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. Therefore, the team expects a 100 percent response rate. All attempted interviews using the previously approved instruments were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instruments 6 and 7)


Employers

A very high response rate (at least 80 percent) is expected among HPOG employers. All attempted interviews using the previously approved instruments were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instrument 8)

Participants

Program participants (current)

The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period. The team achieved response rates ranging from 25-50 percent from current program participants across sites to date, and expects the same trend to continue.

In-person focus groups or telephone/virtual

(Previously approved Instrument 9)


Program completers

The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period. The team expects a 25-50 percent response rate from program completers. All attempted interviews using the previously approved instruments were completed.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instrument 10)


Program non-completers

The tribal evaluation team will work with the grantees to recruit participants during the annual data collection planning period. The team has experienced difficulty recruiting participants for this information collection—achieving closer to 10 percent response in prior rounds. The team still expects a 10-25 percent response rate from program non-completers for the upcoming information collection.

Semi-structured in-person or telephone/virtual interviews

(Previously approved Instrument 11)

HPOG National and Tribal Evaluation Participant Accomplishment and Grantee Evaluation System (PAGES)

Participants

National Evaluation (Non-Tribal) HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Estimated enrollment of 52,000

Baseline and ongoing participant level data

(Previously approved Instrument 1)


Tribal HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Estimated enrollment of 2,663

Baseline and ongoing participant level data

(Previously approved Instrument 1)



1 Response rate expectations are based on a variety of factors. Grantees have agreed to participate in the evaluation as a condition of receiving HPOG funding, so grantee, partner, and employer response rates are expected to be very high. Participation in the evaluation studies is voluntary for HPOG participants, so response rates are expected to be lower. Previous experience with similar populations indicates that response rates are expected to be lower for participants who do not complete the program than those who do.

2 The projected response rate for the contact update form is based on prior experience with similar approaches on studies of comparable populations—primarily the PACE and HPOG 1.0 Impact study samples (OMB No. 0970-0397 and 0970-0394 respectively).

3 Although it is a five-year grant, enrollment did not start until four to six months into the grant period, and we expect enrollment to slow toward the end of the grant period to allow participants ample time to take advantage of HPOG services. Thus, we view the enrollment period as four and a half years.

4 Systems study data collection is complete, but analysis is still underway.

5 Where there were insufficient participants who had dropped out of the training program within the last six months, we extended the window to 12 months since dropping out of the program.

6 We expected to complete interviews with 10 of the 15 participants selected, a 67 percent response rate. While we had expected to be able to adjust the recruitment strategy to account for differences in response rates by site, we found that the response rate varied considerably from site to site: some sites had only 8 completed interviews out of the 15 scheduled, and others had 13 or 14. As a result, it was hard to predict whether we would get the full sample of 140 interviews until the last 3 site visits. We decided to complete interviews at these last site visits with all respondents who showed up. These sites had a relatively high completion rate, and we ended up completing 153 interviews, an average response rate of 72 percent across all sites.



7 The evaluation team will prepare a short methods report on the pilot assessment study that might be published as a white paper or serve as the basis for a journal paper—explaining the process followed to develop the short skills pilot and incorporate it into the Intermediate Follow-up Survey. The results will not be analyzed as part of the impact study findings.

8 The draft Intermediate Follow-up Survey included all of the items from the pilot assessment, to ensure that we had OMB approval for each item. Based on the findings from the pilot, we retained the questions that best met the needs of the assessment—11 numeracy and 11 literacy questions, with varying degrees of difficulty.

9 In the event that fewer than 300 volunteers respond to the initial assessment pilot outreach effort, the team will reach out to grantees to identify additional volunteers. Since the sample is based on volunteers, we do not expect a second recruitment effort will be necessary.

