Supporting Statement for OMB Clearance Request
Part B
National and Tribal Evaluation of the 2nd Generation of the Health Profession Opportunity Grants
0970-0462
Revised April 2019
Revised July 2019
Revised April 2020
Submitted by:
Office of Planning, Research & Evaluation
Administration for Children & Families
U.S. Department of Health and Human Services
Federal Project Officers:
Hilary Bruck
Nicole Constance
Amelia Popham
B.1 Respondent Universe and Sampling Methods
B.2 Procedures for Collection of Information
B.3 Methods to Maximize Response Rates and Deal with Nonresponse
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Instruments
Previously Approved Instruments
Instrument 1: PAGES Grantee- and Participant-Level Data Items List
Instrument 2: HPOG 2.0 National Evaluation Screening Interview
Instrument 3: HPOG 2.0 National Evaluation first-round telephone interview Protocol
Instrument 4: HPOG 2.0 National Evaluation in-person implementation interviews
Instrument 4A HPOG 2.0 National Evaluation In-Person Implementation Interview
Instrument 4B HPOG 2.0 National Evaluation In-Person Implementation Interviews Basic Skills Training
Instrument 4C HPOG 2.0 National Evaluation In-Person Implementation Interviews Career Pathways
Instrument 4D HPOG 2.0 National Evaluation In-Person Implementation Interviews Work-Readiness
Instrument 4E HPOG 2.0 National Evaluation In-Person Implementation Interviews Sustainability
Instrument 5: HPOG 2.0 National Evaluation welcome packet and participant contact update forms
Instrument 5a: HPOG 2.0 National Evaluation welcome packet and contact update form
Instrument 5b: HPOG 2.0 National Evaluation participant contact update letter and form
Instrument 6: HPOG 2.0 Tribal Evaluation grantee and partner administrative staff interviews
Instrument 7: HPOG 2.0 Tribal Evaluation program implementation staff interviews
Instrument 8: HPOG 2.0 Tribal Evaluation employer interviews
Instrument 9: HPOG 2.0 Tribal Evaluation program participant focus groups
Instrument 10: HPOG 2.0 Tribal Evaluation program participant completer interviews
Instrument 11: HPOG 2.0 Tribal Evaluation program participant non-completer interviews
Instrument 12: HPOG 2.0 National Evaluation Short-term Follow-up Survey
Instrument 13: HPOG 2.0 Screening Interview Second Round
Instrument 14: HPOG 2.0 Second Round Telephone Interview Guide
Instrument 15: HPOG 2.0 Program Operator Interview Guide for Systems Study
Instrument 16: HPOG 2.0 Partner Interview Guide for Systems Study
Instrument 17: HPOG 2.0 Participant In-depth Interview Guide
Instrument 18: HPOG 2.0 Intermediate Follow-up Survey
Instrument 19: HPOG 2.0 Phone-based Skills Assessment Pilot Study Instrument
Instrument 20: HPOG 2.0 Program Cost Survey
Attachments
New Attachments Included in this Change Request
Attachment B: New Informed Consent Forms, Updated for Verbal Consent
Attachment B: National Evaluation Informed Consent Form C (Lottery Required)_Verbal
Attachment B: National Evaluation Informed Consent Form D (Lottery Not Required)_Verbal
Attachment B2: Tribal Evaluation Informed Consent Form C (SSNs)_Verbal
Attachment B3: Tribal Evaluation Informed Consent Form D (Unique identifiers)_Verbal
Previously Approved Attachments
Attachment A: References
Attachment B: Previously Approved Informed Consent Forms
Attachment B: National Evaluation Informed Consent Form A (Lottery Required)
Attachment B: National Evaluation Informed Consent Form B (Lottery Not Required)
Attachment B2: Tribal Evaluation Informed Consent Form A (SSNs)
Attachment B3: Tribal Evaluation Informed Consent Form B (Unique identifiers)
Attachment C: 60 Day Federal Register Notice
Attachment D: Previously Approved Sources and Justification for PAGES Grantee- and Participant-Level Data Items
Attachment E: Previously Approved Final Updated Attachment E PPR Data List and Mockup
Attachment F: First Round of HPOG Grantees Research Portfolio
Attachment G: Previously Approved Participant Contact Information Update Letter and Form (Obsolete, replaced by Instrument 5a and 5b)
Attachment H: HPOG Logic Model
Attachment I: Previously Approved Focus group participant consent form
Attachment J: Previously Approved Interview Verbal Informed Consent Form
Attachment K: HPOG 2.0 National Evaluation Short-term Follow-up Survey Advance Letter
Attachment L: HPOG 2.0 National Evaluation Short-term Follow-up Survey Sources
Attachment M: HPOG 2.0 National Evaluation Short-term Follow-up Survey Trying to Reach You Flyer
Attachment N: HPOG 2.0 National Evaluation Short-term Follow-up Survey Email Reminder
Attachment O: Research Questions for Previously Approved Data Collection Efforts (National Evaluation and Tribal Evaluation)
Attachment P: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Advance Letter
Attachment Q: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Sources
Attachment R: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Trying to Reach You Flyer
Attachment S: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Email Reminder
Attachment T: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot flyer
Attachment U: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot grantee letter
Attachment V: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot participant letter
Attachment W: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot recruitment script
Attachment X: Complete list of previously approved data collection instruments
Attachment Y: 60-day Federal Register Notice
Attachment Z: Participant Interview Recruitment Materials
This document serves as Part B of the Supporting Statement for the third revision of data collection for the Health Profession Opportunity Grants 2.0 (HPOG 2.0) National and Tribal Evaluation (OMB Control No. 0970-0462), approved in July 2019. The HPOG 2.0 National and Tribal Evaluation is sponsored by the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services (HHS). The federal evaluations of the HPOG 2.0 National and Tribal grantees will evaluate postsecondary career pathway programs focused on the healthcare sector that target Temporary Assistance for Needy Families (TANF) recipients and other low-income individuals. The intended use of the resulting data is to improve ACF’s research, evaluation, and program support of the HPOG 2.0 program and others like it. The revisions in this Supporting Statement reflect changes requested in a non-substantive change request memo submitted to OMB in April 2020. An overview of these non-substantive changes can be found in the supplementary document OMB#_0970-0462_NonSub Change Request_Temporary Use of Verbal Consent Forms_April 2020.docx.
Exhibit B-1 reviews the submission and approval dates and the instruments covered by the original information collection request and each subsequent revision, along with the non-substantive change request that is the subject of this submission.
Exhibit B-1: Clearance Requests and Instruments for HPOG 2.0 (OMB Clearance No. 0970-0462)
Request | Instrument(s) | Request Date | Approval Date | Link to Supporting Statement
Original | Participant Accomplishment and Grant Evaluation System (PAGES) (Instrument #1) | 5/13/15 | 8/6/15 | https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201505-0970-002
1st Rev. | Various baseline, process, and contact update forms (Instruments #2-5b for the National Evaluation; #6-11 for the Tribal Evaluation) | 10/26/16 | 6/27/17 | https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201610-0970-012
2nd Rev. | National Evaluation Short-term Follow-up Survey (Instrument #12) | 2/5/18 | 6/8/18 | https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201802-0970-001
3rd Rev. | Additional National Evaluation data collection tools: descriptive evaluation protocols (Instruments #13-17); Intermediate Follow-up Survey (Instrument #18); Phone-based Skills Assessment Pilot (Instrument #19); and Program Cost Survey (Instrument #20) | 4/23/2019 | 7/24/19 | https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201904-0970-006
Non-substantive Change Request, post-July 2019 approval of Revision #3 (this submission) | New set of alternative consent forms and procedures allowing for verbal consent temporarily during COVID-19 | April 2020 | |
All of the information collections requested under the third revised submission and approved in July 2019 were discussed in Part A of this supporting package. Part B of this supporting statement focuses on the five approved collections of information that involved complex sampling and/or analysis procedures.1 Three of the five instruments are for the descriptive evaluation:
Program Operator Interview Guide for systems study (Instrument 15);
Partner Interview Guide for systems study (Instrument 16); and
Participant In-depth Interview guide (Instrument 17).
The remaining two instruments are part of the impact evaluation:
Intermediate Follow-up Survey (Instrument 18); and
Phone-based Skills Assessment Pilot (Instrument 19).
OMB approved the third revised submission in July 2019 and data collection is complete for all but Instrument 18.
The original consent forms were previously approved under this OMB Control Number in August 2015, revised in January 2016, and renewed most recently in July 2019 (see previously approved Attachments B, B2, and B3). Due to the COVID-19 outbreak, most HPOG programs have shifted to operating virtually, including asking staff to work remotely and offering participant trainings online. These restrictions have brought face-to-face intake and enrollment sessions to a halt; thus, programs do not have the ability to obtain written consent in person. This request seeks approval for a new set of alternative informed consent forms to allow eligible participants to consent verbally to be in the study. These revised consent forms will only be used until grantees can return to their worksites and resume normal program operations. At that time, grantees will return to using the previously approved consent forms for written consent. Justification for this non-substantive change can be found in the supplementary document OMB#_0970-0462_Non-substantive Change Request_ Temporary Use of Verbal Consent Forms_April 2020.docx.
This non-substantive change request does not require any changes to the respondent universe nor to the sampling methods. Thirty-two HPOG 2.0 grants were awarded to government agencies, community-based organizations, post-secondary educational institutions, and tribal-affiliated organizations in September 2015. Of these, 27 were awarded to non-tribal entities and five were awarded to tribal organizations. The 27 non-tribal grantees operate 38 unique HPOG 2.0 programs. The instruments approved in July 2019 under the third revised submission concern only the 27 non-tribal grantees participating in the National Evaluation. Sampling procedures for the three instruments to support the National Evaluation descriptive evaluation are described below, followed by a discussion of the sampling procedures for the two National Evaluation impact evaluation instruments.
Descriptive evaluation. This section describes the sampling methods for the three information collection requests under the National Evaluation descriptive evaluation that involved complex sampling and/or analysis procedures: Program Operator Interview Guide, Partner Interview Guide, and Participant In-Depth Interview Guide.
Program Operator and Partner Organization Interview Guides. The systems study component of the descriptive evaluation included interviews with two respondent groups: Program Operators (Instrument 15) and Partner Organizations (Instrument 16). The evaluation team purposively selected 15 HPOG 2.0 programs (out of 38 programs) and 2 to 7 partner organizations from each selected program for inclusion in the HPOG 2.0 Systems Study. Selection focused on programs' experiences and perspectives on the local service delivery system over the course of the HPOG grant, with the goal of identifying programs that vary in the types and intensity of systems activities that could influence how the system works, rather than exploring collaboration across all HPOG programs. Purposive sampling allowed for the exploration of a range of experiences and perspectives on activities and partnerships that may contribute to or hinder systems development and improvement. It also provided opportunities to understand variations in service delivery systems across HPOG. Because the selected programs offer a range of types and intensity of systems activities, the research team expected to gain perspectives on both positive and negative experiences with conducting systems activities.2
As part of the selection process, the evaluation team reviewed PAGES data to identify the prevalence of training in various healthcare occupations (e.g., nursing assistant versus health care information technology). This allowed the evaluation team to better understand variation in networks of partners and experiences with those partners across types of training programs. During the process of selecting programs for the systems study, the evaluation team took into consideration the degree to which selected programs overlapped with those selected for the previously approved focus area site visits and with other data collection activities, in order to minimize burden on any one program.
Program Selection
The evaluation team drew from information collected during the first-round telephone interviews (previously approved in June 2017 under this OMB Control Number) and from other available documents (such as grant applications, evaluation design documents, and the PAGES system) to help with program selection. To select programs, the evaluation team used a purposive selection strategy that considered the types and intensity of systems activities under the local service delivery systems and HPOG 2.0, geographic area, lead organization type, whether or not the grantee was an HPOG 1.0 grantee/program operator, occupation(s) of training, new or enhanced programs, program enrollment, and target population, so that the sample would include variation in experiences and perspectives across different types of programs. A total of 87 respondents participated in the systems study: 15 program operators (one operator per program for the 15 selected programs) and 72 partner organization staff across those same programs.
Partner Organization Selection
Purposive sampling was also used to select partner organizations. The strategy allowed the evaluation team to examine a range of experiences and perspectives on systems activities and partnerships. Partner organizations that did not engage at all in the HPOG program were excluded from the sample as respondents should have some knowledge of the program. The evaluation team used several sources of information to select partners.
First, for each selected program, the team used data from the First-Round Telephone Interviews to develop a list of partners and their involvement in the HPOG program operations.
Second, during the program operator interview, the team asked respondents to discuss partners that were highly involved and those that were less involved. Program operators were asked to recommend a mix of both highly and less involved partners for interviews.
Three to seven partners per program were selected based on program operators' recommendations as to which partners represented different partner organization types (e.g., nonprofit organization, government agency, employer, and education and training provider) and were best suited to answer questions. For each program, the evaluation team created a matrix of partners that grouped partners by whether they were highly or less involved in HPOG operations and by organization type. The team selected a range of organization types, typically avoiding the same organization type as the program operator unless, in the program operator's opinion, the partner had a useful perspective on systems activities. The evaluation team sought to include employers and employer representatives, such as industry associations, to ensure it gathered perspectives on employer and industry engagement, an important component of the HPOG 2.0 Program.
Participant In-depth Interviews. The study team conducted in-depth interviews with 153 participants across 14 programs using the Participant Interview Guide (Instrument 17). Researchers first selected programs and then participants. Researchers used data from the first-round telephone interviews with programs to select 14 programs for inclusion in the participant interviews. In consultation with ACF, the evaluation team selected programs that represented a range of locations, program sizes and structures, grantee organizational types, and program characteristics. The purposive sampling strategy maximized variation in participant and program characteristics as much as possible. Interviewers traveled to conduct the interviews with selected participants over a four-day period. The interviews were conducted in a central area: at the program offices or at another centrally located quiet place such as a local library or community center. If those locations were not feasible, interviews took place at the respondent's home. The purposive sampling strategy also took into account where program participants reside, both to gauge how geographically dispersed they were and to ensure that participants' locations were practical for conducting site visits. For example, some programs did not have enough participants located near a geographically central location to support a successful data collection site visit.
Once the 14 programs were selected, the evaluator selected participants. The goal in sampling was to recruit roughly equal numbers of participants who had completed their training and participants who were still in the training program, as well as some who had dropped out before completing training. The evaluation team aimed to select roughly equal numbers of participants to interview across the selected programs. Researchers reviewed the participant data available in PAGES to select an initial pool of 45 treatment group members in each program according to the following criteria:
Participant Stage in the Training Program to ensure a mixture of participants who have successfully completed their training (approximately 40 percent), participants who are still in a training program (approximately 40 percent), and participants who have dropped out of a training program (approximately 20 percent).
Demographic and Socio-Economic Characteristics to interview a sample representative of the demographic and socio-economic characteristics of that particular program’s participant population.
To select the 45 treatment group members, the evaluation team chose: the most recent 25 participants who had successfully completed their training; 25 participants who were at least four months into their training program but had not yet completed it; and 12 participants who had dropped out of the training program within the last six months. Participants were selected randomly within each group.3 From this selection of participants, the evaluation team examined the demographic and socio-economic characteristics of the group and selected participants to create a sample with variation similar to the demographic and socio-economic characteristics of the program's overall participant population.
The evaluation team used that pool of 45 participants per program to select 15 participants in each program, using stratified sampling to ensure representation from each group of interest. Evaluation team members then attempted to recruit the selected participants for an interview. The expected overall response rate was 67 percent, which would have resulted in 140 completed interviews across all selected programs (10 completed interviews at each of the 14 programs).4 The response rate was slightly better than expected, 72.8 percent, resulting in 153 completed interviews.
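The selection logic described above can be summarized in a short sketch. This is an illustration only, not the evaluation team's production code: the record layout, the field names (group, stage), and the 40/40/20 stage shares applied to both the pool and the interview sample are simplifying assumptions based on the criteria described above.

    import random

    def select_interview_sample(pages_records, pool_size=45, interview_target=15, seed=0):
        """Illustrative two-stage selection of treatment group members for the
        in-depth interviews: build a pool stratified by stage in training, then
        draw a stratified recruitment sample from that pool."""
        rng = random.Random(seed)
        shares = [("completed", 0.4), ("in_training", 0.4), ("dropped_out", 0.2)]

        # Group treatment group members by stage in the training program
        # ('group' and 'stage' are hypothetical PAGES-derived fields).
        strata = {stage: [] for stage, _ in shares}
        for rec in pages_records:
            if rec["group"] == "treatment" and rec["stage"] in strata:
                strata[rec["stage"]].append(rec)

        # Stage 1: draw the initial pool, selecting randomly within each stage group.
        pool = []
        for stage, share in shares:
            k = min(len(strata[stage]), round(pool_size * share))
            pool.extend(rng.sample(strata[stage], k))

        # Stage 2: stratified draw of the recruitment sample so that each stage
        # group is represented among the participants invited to interview.
        sample = []
        for stage, share in shares:
            in_pool = [r for r in pool if r["stage"] == stage]
            k = min(len(in_pool), round(interview_target * share))
            sample.extend(rng.sample(in_pool, k))
        return sample

    # Expected yield under the planned response rate:
    # 15 invited per program x 14 programs x 0.67 response rate ~= 140 completed interviews.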
Impact evaluation. This section describes the sampling methods for the two information collection requests under the National Evaluation impact evaluation: the Intermediate Follow-up Survey and the Phone-based Skills Assessment Pilot.
Intermediate Follow-up Survey (Instrument 18). The evaluation team, in collaboration with ACF, selected 13,118 study participants (all of the participants enrolled between March 2017 and February 2018) for inclusion in the Short-term Follow-up Survey sample (previously approved under this OMB Control Number in June 2018). A subset of those participants (up to 5,000), drawn from a compact set of randomization cohorts, will be included in the Intermediate Follow-up Survey sample. The evaluation team estimates an 80 percent completion rate (4,000 completed interviews).
Several aspects of this sampling plan deserve attention: (1) How was the subsample size chosen? (2) Why select a subsample of those interviewed in the Short-term Follow-up Survey? (3) Given that a subsample is to be selected, why use a compact set of randomization cohorts rather than a random sample? Each of these questions is answered below.
How was the subsample size chosen? The subsample size of 5,000 was chosen because it allows reasonable power to detect national pooled impacts. The much larger sample size for the Short-term Follow-up Survey was chosen because of the need to measure variation in program implementation from the student perspective and to measure variation in effects on education outcomes. These activities are not planned for the Intermediate Follow-up Survey.
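To give a sense of the scale implied by this choice, the following is an illustrative minimum detectable effect size calculation. The inputs are assumptions made purely for illustration (an even treatment/control split of the roughly 4,000 expected completes, a two-sided test at the 5 percent level, 80 percent power, and no adjustment for covariates, clustering, or design effects); they are not the evaluation team's actual power parameters.

    # Illustrative minimum detectable effect size (MDES) under the assumptions
    # stated above; requires the statsmodels package.
    from statsmodels.stats.power import NormalIndPower

    mdes = NormalIndPower().solve_power(
        effect_size=None,   # solve for the detectable effect size
        nobs1=2000,         # assumed treatment-group completes (half of ~4,000)
        ratio=1.0,          # assumed 1:1 treatment/control split
        alpha=0.05,
        power=0.80,
        alternative="two-sided",
    )
    print(f"Illustrative MDES: {mdes:.3f} standard deviations")  # roughly 0.09 SD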
Why do we want to select a subsample of those selected for participation in the Short-term Follow-up Survey? We want to do so for several reasons. First, selecting from those who participated in the Short-term Follow-up Survey will allow the construction of longer case histories, as we will have thirty-six months of employment and training history instead of just fifteen months. Second, it will reduce nonresponse and cost because the continuous updating of contact information will provide the evaluation team with a more robust history of contact information over the 36-month follow-up period than would be available if a new sample were selected. Drawing from the Short-term Follow-up Survey sample also allows the evaluation team to build upon the rapport established with study participants during the follow-up period. Finally, using a subsample of the Short-term Follow-up Survey sample will allow more powerful adjustments for nonresponse to the Intermediate Follow-up Survey, since the Short-term Follow-up information can be used both to study the potential for nonresponse bias and to make adjustments in the event that evidence of nonresponse bias in unadjusted statistics is found. However, in the selected randomization cohorts we will attempt to interview all participants selected for the short-term follow-up as part of the Intermediate Follow-up Survey. That is, we will not exclude participants who were included in the Short-term Follow-up Survey sample but not interviewed.
Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? The Short-term Follow-up Survey sample included participants enrolled over 12 monthly cohorts—March 2017-February 2018. We want to select a compact set—or subset—of cohorts because of the substantial time and cost efficiencies associated with larger workloads for interviewers over a compressed field period. We plan to select four or five of the 12 monthly cohorts included in the Short-term Follow-up Survey for inclusion in the Intermediate Follow-up Survey data collection.
At the conclusion of the Short-Term Follow-up Survey, all study respondents were asked to update their contact information to aid in future data collection efforts. Study participants selected for the Intermediate Follow-up Survey will also continue to receive periodic contact update requests via the previously approved contact update form (Instrument 5b) every three months between the Short-Term and Intermediate Follow-up Survey efforts.
Phone-based Skills Assessment Pilot (Instrument 19). This assessment was a pilot study. Results from it will not be published as a formal part of the evaluation of HPOG 2.0.5 Rather, the results from this effort were used to identify a narrow set of survey questions that will be incorporated into a ten-minute module within the Intermediate Follow-up Survey.6 Given the intended use, the evaluation team attempted to identify a volunteer sample of 500 HPOG 2.0 participants randomized outside the window for the Short-term Follow-up Survey. The team recruited about 400 participant volunteers with the help of grantees and completed 300 pilot assessments.7 Most grantees were asked to recruit and refer potential volunteers to the evaluation contractor. Ideal candidates were HPOG 2.0 study participants who met three key criteria:
They were from cohorts that are not part of the short-term survey sample pool (enrolled prior to March 1, 2017 or after May 31, 2018);
They were nearly ready to start occupational classes or were currently taking lower-level occupational classes; and
They had complete contact information (address, phone number, and email) in PAGES.
A sample of volunteers was adequate for the purpose of psychometric testing of the draft skills assessment. Thus, the pilot design targeted a particular number of completed interviews rather than a certain response rate. The evaluator estimated that 300 completed pilot assessments were needed to yield useful results on the reliability and validity of the items. The purpose of the pilot was to sort the relative difficulties of the assessment items. By having grantees recruit participants who met the above criteria and wanted to participate, the evaluation team was able to meet these objectives.
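The kind of item screening the pilot supports, sorting items by relative difficulty and checking that they form a reliable scale, can be illustrated with a short classical test theory sketch. This is a generic illustration assuming dichotomously (0/1) scored items; it is not the evaluation team's actual psychometric procedure.

    import numpy as np

    def item_statistics(responses):
        """Classical item statistics for a 0/1 scored response matrix
        (rows = respondents, columns = assessment items)."""
        responses = np.asarray(responses, dtype=float)
        n_items = responses.shape[1]
        totals = responses.sum(axis=1)

        # Item difficulty: the proportion of respondents answering each item correctly.
        difficulty = responses.mean(axis=0)

        # Corrected item-total correlation: each item against the total of the
        # remaining items, a rough screen for items that do not fit the scale.
        item_total_r = np.array([
            np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
            for j in range(n_items)
        ])

        # Cronbach's alpha as a summary of the internal consistency of the item set.
        alpha = (n_items / (n_items - 1)) * (1 - responses.var(axis=0, ddof=1).sum()
                                             / totals.var(ddof=1))
        return difficulty, item_total_r, alpha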
Several national and international surveys have been developed to assess adult numeracy and literacy, but almost all of them rely on face-to-face interviewing (a mode too expensive for most OPRE evaluations) or online administration (a mode infeasible for many OPRE evaluations because computer access is less common among low-income populations). Since most OPRE evaluations use a mix of methodologies, identifying a short battery of questions that could be administered by phone in about 10 minutes would offer four benefits: (1) it would be more cost effective than in-person or online administration; (2) it would be easily adaptable for in-person or online administration, reducing burden on administrators and respondents; (3) the short duration of the module would also reduce burden on respondents, potentially increasing response rates or at least minimizing break-offs; and (4) it could be easily shared across other studies.
Exhibit B-2 presents the sampling methods and target response rates for each of the HPOG 2.0 National and Tribal Evaluation respondent subgroups. The instruments for which data collection is complete are labeled as such in the exhibit.
Exhibit B-2: HPOG 2.0 National and Tribal Evaluation Respondents with Status Updates
Respondent Universe | Respondent Subgroup | Sampling Methods and Target Response Rates | Data Collection Strategies
National HPOG 2.0 Evaluation
Grantees, partners, and employers | Grantees | Evaluation team members review the topics of interest with grantees using the HPOG 2.0 Screening Interview to identify the appropriate respondent(s) based on who is most knowledgeable about the topics of interest (see Instrument 2). Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team achieved a 100 percent response rate to both rounds of data collection. | Semi-structured telephone interviews (previously approved Instruments 2, 3, and 4) (COMPLETE); (Instruments 13-16) (COMPLETE); Program Cost Survey (Instrument 20) (COMPLETE)
| Managers and staff | All interviews using the previously approved instruments that were attempted were completed. | Semi-structured interviews (previously approved Instruments 2, 3, and 4) (COMPLETE); (Instruments 14-15) (COMPLETE); Program Cost Survey (Instrument 20) (COMPLETE)
| Partners | All interviews using the previously approved instruments that were attempted were completed. | Semi-structured interviews (previously approved Instruments 2, 3, and 4) (COMPLETE); (Instrument 16) (COMPLETE)
| Employers | All interviews using the previously approved instruments that were attempted were completed. | Semi-structured interviews (previously approved Instruments 2, 3, and 4) (COMPLETE); (Instrument 16) (COMPLETE)
Descriptive evaluation participants | Selected treatment group participants | A pool of 45 participants in each of 14 sites was identified for recruitment for the participant interviews. Up to 15 participants per site were recruited; the team achieved a better than expected response and completed interviews with 72.8 percent of those selected (153 in all). | Semi-structured participant interview guide administered in person (Instrument 17) (COMPLETE)
Impact evaluation participants selected for the Contact Update Sample | A sample of participants (treatment and control groups) | Up to 13,118 study participants, beginning with those enrolled in March 2017, will be part of the participant contact update efforts. The team estimated that 35 percent of respondents would respond to each quarterly participant contact update effort. The contact updates are ongoing; the current return rate is 24 percent. | Contact updates by mail, online portal, or telephone (previously approved Instruments 5a (COMPLETE) and 5b)
Impact evaluation participants selected for the Short-term Follow-up Survey sample | A sample of participants (treatment and control groups) | 13,087 study participants, beginning with those enrolled in March 2017, were part of the Short-term Follow-up Survey. Data collection is complete; the evaluation team completed interviews with 74.2 percent of the sample (9,710 interviews in total). | Telephone or in-person interviews conducted by local interviewers with CAPI technology (previously approved Instrument 12) (COMPLETE)
Impact evaluation participants selected for the Intermediate Follow-up Survey sample | A sample of participants (treatment and control groups) | Up to 5,000 study participants, from select cohorts of participants randomized between March 2017 and February 2018, will be part of the Intermediate Follow-up Survey. The team expects that 80 percent of the participants selected will complete this survey effort, resulting in 4,000 completes. | Telephone or in-person interviews conducted by local interviewers with CAPI technology (Instrument 18)
Impact evaluation participants selected for the phone-based Skills Assessment Pilot | Treatment group participant volunteers | Up to 500 participants were expected to volunteer to be part of the phone-based Skills Assessment Pilot. The team achieved its target of 300 completed interviews. | Telephone interviews conducted by local interviewers with CAPI technology (Instrument 19) (COMPLETE)
Tribal HPOG 2.0 Evaluation
Grantees, partners, and employers | Grantees | Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to date and expects a 100 percent response rate going forward. | Semi-structured in-person interviews (previously approved Instruments 6 and 7)
| Management and staff | A very high response rate (at least 80 percent) is expected among grantee staff. All interviews using the previously approved instruments that were attempted were completed. | Semi-structured in-person interviews (previously approved Instruments 6 and 7)
| Partners | Partners have agreed to participate in the evaluation as a condition of receiving HPOG grant funding; therefore, the team expects a 100 percent response rate. All interviews using the previously approved instruments that were attempted were completed. | Semi-structured in-person interviews (previously approved Instruments 6 and 7)
| Employers | A very high response rate (at least 80 percent) is expected among HPOG employers. All interviews using the previously approved instruments that were attempted were completed. | Semi-structured in-person interviews (previously approved Instrument 8)
Participants | Program participants (current) | The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team has achieved response rates ranging from 25-50 percent from current program participants across sites to date, and expects the same trend to continue. | In-person focus groups (previously approved Instrument 9)
| Program completers | The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team expects a 25-50 percent response rate from program completers. All interviews using the previously approved instruments that were attempted were completed. | Semi-structured in-person interviews (previously approved Instrument 10)
| Program non-completers | The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team has experienced difficulty recruiting participants for this information collection, achieving closer to a 10 percent response rate in prior rounds. The team still expects a 10-25 percent response rate from program non-completers for the upcoming information collection. | Semi-structured in-person interviews (previously approved Instrument 11)
HPOG National and Tribal Evaluation Participant Accomplishment and Grant Evaluation System (PAGES)
Participants | National Evaluation (Non-Tribal) HPOG Participants | No sampling techniques will be employed for PAGES data collection. A 100 percent response rate is expected. Estimated enrollment is 41,500. | Baseline and ongoing participant-level data (previously approved Instrument 1)
| Tribal HPOG Participants | No sampling techniques will be employed for PAGES data collection. A 100 percent response rate is expected. Estimated enrollment is 2,663. | Baseline and ongoing participant-level data (previously approved Instrument 1)
ACF is requesting approval for a new set of informed consent forms (see the verbal versions of Attachments B, B2, and B3). These new consent forms will temporarily allow eligible participants to consent verbally into the study until grantees can return to their worksites and resume normal program operations. Once normal operations resume, grantees will return to using the previously approved consent forms and require written consent to join the study. The remainder of this section describes the procedures for obtaining verbal consent using the new consent forms. It then describes the procedures for the now-complete participant interview data collection under the descriptive evaluation and, under the impact evaluation, the upcoming Intermediate Follow-up Survey and the now-complete Skills Assessment Pilot, all previously approved in July 2019 under the third revised information collection submission.
This non-substantive change request seeks approval for a new set of informed consent forms that will, on a temporary basis, allow participants to consent verbally into the study. This will allow grantee staff to continue to enroll new participants in healthcare training and serve their communities without disruption due to the COVID-19 outbreak. Under these revised procedures, grantee staff will email participants a copy of the new verbal consent form at least one day prior to their intake meeting. This emailed copy will serve as the participant's copy of the consent form. Advance receipt will also give a prospective participant time to review the form, identify anything they have questions about, and follow along as grantee staff review the consent form with them during the intake and consent appointment. During the intake meeting, grantee staff will review the form with participants over the phone and address any questions as needed. Once all questions are addressed, the grantee staff member will ask participants to describe the study and other parts of the consent form to check for comprehension. The grantee staff member will clarify any information as needed, then read the consent statement to participants and ask for verbal consent to participate. The grantee staff member will indicate in the evaluation's management information system (known as the Participant Accomplishment and Grant Evaluation System, or PAGES) that consent was obtained and proceed through the remainder of enrollment normally.
It is unclear whether grantee staff will have access to printers during the period of remote work, which may make it difficult for the staff member to sign the attestation of consent on a physical form. Depending on their printing capabilities, grantee staff members will follow one of two steps:
If the grantee staff member is able to print consent forms, they will enter the participant’s unique study identifier on the form, then sign and date the attestation indicating that the participant provided verbal consent. The grantee staff member will retain the signed form in a safe location until they return to their worksite when normal HPOG program operations resume.
For grantee staff members who are not able to print forms while working remotely, the evaluation team will provide a list of study identifier numbers for the participants enrolled during this period once staff return to their worksites. Grantee staff members will then print and sign a consent form following the procedures above. (If the signature happens after grantee staff return to the worksite, they will enter the date of consent.)
Once normal operations resume, the grantee staff member will make a copy of the signed form for program records and send the original form, with the attestation signed by the grantee staff member, to the evaluation contractor for the evaluation’s records. Verbal consent will substitute for written consent for these participants; there will be no need for the grantee to ask these participants to sign a consent form. Grantee staff will return to written consent procedures using the previously approved informed consent forms once normal operations resume.
The primary data collection approach for the descriptive evaluation is two rounds of semi-structured interviews conducted by telephone and one round of in-person site visits with program directors, case managers, and other relevant grantee staff. The first round of telephone interviews (now complete) focused on early implementation efforts. The second round (also now complete) updated the earlier round and collected information to help lay the groundwork for the systems and cost studies. Site visits (now complete) were to programs implementing promising approaches to program components of specific interest to ACF. Telephone and site visit data collection are supplemented with data from PAGES and other existing site-specific materials developed earlier by the National Evaluation team.
The data collection procedures for the previously approved descriptive evaluation instruments can be found in the first revision to OMB Control # 0970-0462, approved in June 2017. The same procedures used in the first-round interviews were followed for the second-round telephone interviews (Instruments 13 and 14). The procedures used to conduct the now completed descriptive evaluation data collection components—the systems study interviews and the participant in-depth interviews—are discussed here.
The descriptive evaluation systems study will describe how local service delivery systems (i.e., the economic and service delivery environment in which specific HPOG programs operate) may have influenced HPOG program design and implementation, and how HPOG implementation may have influenced these local systems, based on the perspectives of program operators (i.e., the lead organization directly responsible for the administration of an HPOG program) and partners engaged in systems activities. The systems study partner interviews included interviews with partners outside of HPOG. Telephone interviews for the systems study focused on coordination between grantees and their partners within the local service delivery system (Instruments 15 and 16). Two-person teams administered the semi-structured interviews; one acted as the lead interviewer and the other as the note taker. Each interview took approximately 60 minutes, depending on the depth of knowledge of the respondent. The interviewers spent an additional 15 minutes with the program operator respondent to identify partners for interviews and obtain contact information. The primary mode for the interviews was telephone, but the interview team offered videoconferencing (via Skype, Zoom, Go-To Meeting, or other technology) if the respondent preferred a more visual interaction.
The descriptive evaluation also includes one round of in-person interviews with HPOG program participants. The National Evaluation team conducted in-depth interviews with HPOG 2.0 program participants to gain insight into their motivations, decision making, expectations, and experiences (Instrument 17). The team used PAGES data to identify up to 45 treatment group study participants in each of the 14 selected programs for recruitment. Interviewers sent a letter to the selected sample to explain the participant interview requirements (see Attachment Z).
One interviewer conducted all of the interviews at a given site during a five-day visit. (No interviewer traveled to more than three sites.) Interviews were completed in-person either at the program office or at another agreed upon location. Each interviewer conducted the participant interviews and recorded—with the participant’s permission—the interviews for later transcription and analysis.
The impact evaluation participant-level data collection efforts include the previously approved informed consent documents (Attachment B Informed Consent Form A and Informed Consent Form B, approved in January 2016), a Welcome to the Study packet (Instrument 5a, approved in June 2017 and now complete), the collection of quarterly contact updates (also previously approved under this OMB Control Number in June 2017 and still ongoing), the Short-term Follow-up Survey (previously approved in June 2018 and now complete), the Intermediate Follow-up Survey (Instrument 18), and the now-complete phone-based Skills Assessment Pilot (Instrument 19), the last two being part of the previously approved third revised submission. The procedures for conducting the ongoing data collection using previously approved contact update forms (Instrument 5b) and the Short-term Follow-up Survey (Instrument 12) are described in the first and second revisions to OMB Control Number 0970-0462, approved in June 2017 and June 2018, respectively.
As noted in the revisions to Supporting Statement A under this OMB Control Number, ACF seeks approval for a new set of alternative informed consent forms that will allow grantee staff to obtain verbal consent from participants enrolling in the evaluation. In instances where face-to-face enrollment is not feasible due to issues related to COVID-19, flexibility in how consent is obtained will ensure that participant enrollment can continue and that grantees are able to serve participants and sustain program operations during this time. The ability to obtain verbal consent is vital to maintaining the rigor of the evaluation because it ensures that participants enrolled during this period can be included in the impact and descriptive components of the National Evaluation and in the Tribal Evaluation. These participants are not part of any survey data collection efforts at this time. In particular, the administrative data from the National Directory of New Hires (NDNH) are crucial for measuring employment and earnings outcomes for this group of participants.8
The data collection procedures for the Intermediate Follow-up Survey (Instrument 18) will be identical to those approved for use in the Short-term Follow-up Survey (Instrument 12)—local interviewers will attempt to interview respondents first by telephone and then in-person, using computer assisted personal interviewing (CAPI) technology. Since the procedures are the same, the specific details of how the data collection will be done are not repeated here. Please refer to the second revision to OMB Control No. 0970-0462, approved June 8, 2018, for a full description of the survey procedures. The third revised submission, approved in July 2019, focused on the procedures for the phone-based Skills Assessment Pilot because they were not covered by earlier OMB approvals.
The purpose of the phone-based Skills Assessment Pilot is to narrow a set of 45 potential survey questions intended to assess literacy and numeracy skills down to a set that can be used in a short module within the Intermediate Follow-up Survey. Because the follow-up survey can be conducted either by phone or in person, administration of the assessment module has to “work” in either mode. There is a long history of successful skills assessments for in-person data collection, but very little history of skills assessment administration over the phone. For this reason, all of the pilot assessments were conducted by phone.
The phone interviewers were drawn from the same staff of local interviewers used for the Short-term Follow-up Survey. This ensured that the interviewers were fully trained on the HPOG 2.0 program and the goals of the evaluation and had experience working with the HPOG 2.0 participant population. Evaluation site team liaisons worked with the grantees to identify a pool of HPOG 2.0 participants who wanted to volunteer to complete the skills assessment pilot. Once the pool was identified, interviewers reached out to volunteer participants to explain more about the pilot, obtain their consent, conduct the pilot, and capture respondent feedback on the process. The evaluation team identified about 400 volunteers with the help of grantees and completed 300 interviews. The approach to data collection did not include a specific response rate target; rather, the plans for this pilot were based on a target number of completed interviews. No estimates about skill levels for any population will be published based on this pilot.9 Furthermore, the goals for the pilot did not include a demonstration of what response rate is achievable, only an assessment of whether it is possible to conduct a brief skills assessment by phone. The evaluation team expected that, because the sample consisted of volunteers recruited by grantees, respondents would be easier to locate and still interested in participating. Assessment interviews were completed using CAPI technology. Interviewers were encouraged to quickly close out cases that were difficult to contact and move on to the next case in order to complete the pilot expediently. Ultimately, they reached the target number of completes. The revisions to Instrument 18 that result from the completion of the pilot analysis will be reflected in a forthcoming non-substantive change request.
The Program Cost Survey (Instrument 20) was administered to staff at all 27 non-tribal grantees to capture cost data for each of the 38 HPOG 2.0 programs. The survey captured data on costs for staff, overhead, direct provision of training, and provision of support services. The evaluation team asked grant managers from each of the 38 HPOG 2.0 programs to determine which staff members were the most knowledgeable about cost information. Selected staff members attended an informational webinar to introduce the cost-benefit analysis (CBA), learn about the concepts used in the survey, and ask preliminary questions.10 Upon request, CBA staff called individual programs to discuss any questions before the survey. Such guidance can improve accuracy because each program has its own structure and service offerings and so may need specific information on different survey components. Program staff completed the survey using web-based software. The evaluation team reviewed the submitted documents and followed up on missing data items as needed.
The data collection procedures for the Tribal Evaluation data collection instruments are described in revision number 1 of OMB Control Number 0970-0462, previously approved in June 2017.
The data collection procedures for the previously approved grantee-level and ongoing participant-level data collection done under the PAGES system are described in the original submission of OMB Control Number 0970-0462, approved in August 2015.
The study documents, including the recruitment materials, advance letters, and flyers developed for the National Evaluation participant-level data collection efforts, were designed at an 8th-grade readability level. This ensures that the materials can be understood by most study participants. The Intermediate Follow-up Survey will be administered in both English and Spanish.
The procedures used to ensure that special populations can understand the various instruments that were previously approved are described in the information collection requests approved in June 2017 and June 2018.
This non-substantive change request does not require any changes to the previously approved methods to maximize response rates or to deal with nonresponse. The remainder of this section first describes the methods used to maximize response rates for the Intermediate Follow-up Survey and then for the descriptive study participant interviews, previously approved under the third revised submission in July 2019.
The methods used for the Intermediate Follow-up Survey will be nearly identical to those approved for use in the Short-term Follow-up Survey. Specifically, the evaluation team will use the following methods to maximize response to the Intermediate Follow-up Survey effort:
Participant contact updates and locating;
Incentives; and
Sample control during the data collection period.
(See the second revision to OMB Control No. 0970-0462, approved in June 2018, for more details on these methods.) Using those same procedures, the evaluation team anticipates being able to achieve the targeted 80 percent response rate for the Intermediate Follow-up Survey.
The HPOG 2.0 National Evaluation impact evaluation team will continue participant contact update efforts (previously approved in June 2017) between the Short-term and Intermediate Follow-up Survey efforts, but only for those participants who will be part of the Intermediate Follow-up Survey data collection. The evaluation team intends to include in the target sample all study participants within the enrollment cohorts selected for the Intermediate Follow-up Survey, regardless of whether or not they responded to the Short-term Follow-up Survey.11 This is consistent with how the evaluation team has handled non-respondents on similar Career Pathways studies (PACE, OMB Control No. 0970-0397, and HPOG 1.0 Impact, OMB Control No. 0970-0394). The evaluation team plans to maintain the same incentive structure (gift certificates provided via an email link or a physical gift card) for the Intermediate Follow-up Survey as used for the Short-term Follow-up Survey. As described further in Supporting Statement A, Section A9, this request for clearance includes a modest increase in the incentive amount, from $40 to $45.
As discussed above, because the Phone-based Skills Assessment is a pilot, the intent was not to maximize the response rate but rather to complete a target number of interviews (300). The evaluation team offered incentives to participants who completed the skills assessment as well.
The evaluation team offered an incentive valued at $25 to each participant who responded to the phone-based Skills Assessment Pilot. The incentive was a way to thank participants for their help in ensuring that the assessment instrument is feasible to administer by phone and in identifying which items are most useful in assessing literacy and numeracy skills. The incentive also helped to offset any costs incurred as a result of participation, such as cell phone minutes or child care costs. The incentive for this pilot was smaller than the incentives for some other instruments in the HPOG 2.0 impact evaluation because of both the lower burden on respondents and the fact that it was a single administration: that is, the skills assessment pilot data collection is not repeated with respondents at a later date.
The evaluation team will also offer an incentive for completion of the Intermediate Follow-up Survey. Respondents will receive a $45 gift certificate. The following factors helped determine the amount of the incentive: the target response rate of 80 percent, the projected 60-minute length of the survey, the smaller sample size (only 5,000 of the approximately 13,000 selected for the Short-term Follow-up Survey), and the duration of the follow-up period. The team also took into account the incentive amounts approved for previous rounds of data collection on OPRE's prior Career Pathways studies (PACE and HPOG 1.0 Impact, OMB control numbers 0970-0397 and 0970-0394, respectively) to ensure that the planned amount is comparable. As with the contact update forms and the Short-term Follow-up Survey, respondents will receive an email with customized instructions showing them how to log in to a secure study portal where they can redeem a $45 gift card from their choice of approved vendors.
Without an incentive of this magnitude, the impact evaluation study is unlikely to meet the quality targets defined by OMB and the Information Quality Act12 (see Supporting Statement A, Section A9 for more information).
Incentives at one or more phases of data collection have been used successfully on a number of similar federally sponsored surveys, such as PACE (OMB control number 0970-0397) and the HPOG 1.0 Impact Study (OMB control number 0970-0394). These two studies are similar in nature to HPOG 2.0, both programmatically and in terms of respondent characteristics. We cite these two previously approved studies not to justify the use of incentives, but rather to support our choice of the proposed incentive amount. The planned incentive amount is comparable to what was offered for the follow-up survey efforts in both of those studies.
Finally, the team does not rely solely on the contact updates or the use of incentives to maximize response rates and reduce nonresponse bias. The evaluation team will use the same sample control procedures for monitoring survey production (well-trained interviewers, clear disposition codes, Spanish-language options, and a variety of communication tools for interviewers) that were approved for the Short-term Follow-up Survey. (See the ICR approved under OMB Control No. 0970-0462 in June 2018 for more details on these sample control procedures.)
The descriptive study participant interview data are not intended to be representative in a statistical sense, in that they will not be used to make statements about the prevalence of experiences for the full HPOG 2.0 population or the broader TANF-eligible population. However, it was important to secure participants from a wide range of programs, with a range of background characteristics, to capture as diverse a set of experiences with HPOG 2.0 as possible. Given the time required for an in-person interview, incentives for participation were a useful tool in helping to maximize response rates. Those who completed the in-depth participant interview received a non-cash honorarium valued at $40, delivered via email. Participants received an email with instructions to log in to a secure study portal where they redeemed the gift certificate with one of the approved vendors (see the redemption procedures described under the Intermediate Follow-up Survey). OMB previously approved similar use of incentives for the HPOG 2.0 Tribal Evaluation (participant focus groups and interviews) in June 2017 under this OMB Control Number (0970-0462) and for the Pathways for Advancing Careers and Education (PACE) study (OMB Control Number 0970-0397). Without the use of incentives, the team was concerned that it would not reach the target number of completed interviews in each category, which could have jeopardized the utility of the participant interview data.
If the response rate for the Intermediate Follow-up Survey falls below 80 percent, the research team will conduct a nonresponse bias analysis and, if necessary, create nonresponse weighting adjustments using the same protocols approved for the Short-term Follow-up Survey.
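For illustration only, a nonresponse weighting adjustment of this kind is often implemented as an inverse-response-propensity adjustment: a model of response status is fit on baseline characteristics, and each respondent's base weight is inflated by the inverse of the estimated probability of responding. The Python sketch below shows that general approach under hypothetical variable names (responded, age, female, base_weight); it is a minimal sketch, not the approved Short-term Follow-up Survey protocol, which may instead use weighting classes or calibration.

# Illustrative sketch of a nonresponse weighting adjustment via inverse
# response propensity. Column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def nonresponse_adjusted_weights(frame: pd.DataFrame,
                                 covariates: list,
                                 responded_col: str = "responded",
                                 base_weight_col: str = "base_weight") -> pd.Series:
    """Return base weights inflated by the inverse of the estimated
    response propensity for respondents; nonrespondents get zero weight."""
    X = frame[covariates].to_numpy()
    y = frame[responded_col].to_numpy()

    # Model the probability of responding from baseline covariates.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    propensity = model.predict_proba(X)[:, 1]

    weights = np.where(y == 1, frame[base_weight_col] / propensity, 0.0)
    return pd.Series(weights, index=frame.index, name="nr_adjusted_weight")

# Example with simulated data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "age": rng.integers(18, 65, size=500),
        "female": rng.integers(0, 2, size=500),
        "base_weight": np.ones(500),
    })
    df["responded"] = rng.binomial(1, 0.8, size=500)
    df["nr_weight"] = nonresponse_adjusted_weights(df, ["age", "female"])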
The phone-based Skills Assessment Pilot effort is not subject to the nonresponse bias concerns described above. The effort targets a set number of completed assessments, rather than the standard 80 percent response rate, with no restrictions on site or intervention group. As described in section B1 above, the target sample for this effort consists of volunteers. If the target number of completed assessments is not reached with the first batch of volunteers, the evaluation team can work with grantees to obtain additional volunteers.
This non-substantive change request does not require any tests of procedures. The remainder of this section discusses the tests of procedures planned or already conducted for the instruments that were included in the third revised submission, approved in July 2019. The section first addresses the HPOG 2.0 National Evaluation descriptive evaluation, then the impact evaluation, followed by the cost-benefit analysis study. The section then provides references to the previously approved information collection requests under this OMB Control Number (0970-0462) should reviewers want to learn more about the tests of procedures for previously approved instruments.
The evaluation team conducted pretest interviews with fewer than ten study participants to ensure that the participant interview guide was working as intended. The participant interview guide (Instrument 17) that is part of this information collection request reflects changes needed based on that pretest feedback.
The other new descriptive study instruments are similar in content and structure to the previously approved descriptive study instruments, so we are relying on the pretests conducted for those original instruments. See the information collection request previously approved under this OMB Control Number in 2017 for more on the tests of procedures for the other descriptive study instruments.
This section discusses the two instruments for the impact evaluation that are the subject of this information collection request.
In designing the Intermediate Follow-up Survey, the evaluation team included items used successfully in other national surveys, particularly the HPOG 2.0 Short-term Follow-up Survey (OMB Control No. 0970-0462) and the PACE and first-round HPOG impact follow-up surveys (OMB control numbers 0970-0397 and 0970-0394, respectively). Consequently, many of the survey questions have been thoroughly tested on large samples.
If time allows after OMB approval of this data collection, the instrument will be programmed prior to pretesting, and a sample of 15 to 20 participants will be used to ensure that the length of the instrument is consistent with the burden estimate. Otherwise, the evaluator will pretest the survey with up to nine participants, using paper forms rather than CAPI. During internal pretesting, all instruments were closely examined to reduce respondent burden, and questions deemed unnecessary were eliminated.
The phone-based Skills Assessment Pilot is itself a test. The pilot is being used to sort items by difficulty. Assuming that the pilot is successful, the intent is to make the assessment module for the intermediate survey adaptive. That is, the CAPI program being used to conduct the surveys will dynamically vary the set of items presented to the respondent based on prior responses. One possible plan is to have three item difficulty groups for the vocabulary items and three for the math items. The software might present four medium difficulty items and then follow up with a set of four easy or four hard items based on the respondent’s performance on the four medium difficulty items. The pilot will provide sufficient information to refine these broad plans. Critical information that will be provided by the pilot includes the time required for each item. Because the instrument also includes information about earned credentials and use of basic skills in everyday life, the evaluator will also be able to select the items that correlate better with these measures. The draft Intermediate Follow-up Survey includes all of the items from the pilot assessment. Based on the findings from the pilot, we will drop the questions that do not prove successful in the pilot and will submit the final instrument to OMB as a non-substantive change request.
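For illustration only, the branching logic outlined in the paragraph above could take a form like the Python sketch below. The item pools and the cutoff of more than two correct medium-difficulty items are hypothetical placeholders; the actual item groupings and routing rules will be determined from the pilot results and programmed in CAPI.

# Illustrative sketch of the adaptive block described above: administer four
# medium-difficulty items, then route to four easy or four hard items based
# on how many medium items were answered correctly. Pools and cutoff are
# hypothetical, not the final design.
from typing import Callable

EASY_ITEMS = ["easy_1", "easy_2", "easy_3", "easy_4"]
MEDIUM_ITEMS = ["med_1", "med_2", "med_3", "med_4"]
HARD_ITEMS = ["hard_1", "hard_2", "hard_3", "hard_4"]

def administer_adaptive_block(ask_item: Callable[[str], bool],
                              cutoff: int = 2) -> list:
    """Present the medium items first; if the respondent answers more than
    `cutoff` correctly, follow up with the hard items, otherwise the easy
    items. `ask_item` returns True when an item is answered correctly."""
    administered = list(MEDIUM_ITEMS)
    correct = sum(ask_item(item) for item in MEDIUM_ITEMS)

    follow_up = HARD_ITEMS if correct > cutoff else EASY_ITEMS
    for item in follow_up:
        ask_item(item)
    administered.extend(follow_up)
    return administered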
The cost-benefit analysis study requires detailed information from grantees and stakeholders. It is possible that multiple people at each organization will need to provide the information. The evaluation team reached out to one individual at five grantee programs to ask them to review the data collection items of interest to the evaluation team and provide an assessment of the feasibility of collecting that information and the level of effort required to do so. The evaluation team reviewed the feedback from grantees and adjusted protocols accordingly. The instruments in this information collection request reflect the changes that resulted from that feedback.
See the information collection request previously approved under this OMB Control Number in 2017 for more on the tests of procedures for the Tribal Evaluation.
See the information collection request previously approved under this OMB Control Number in 2015 for more on the tests of procedures for the PAGES system.
With ACF oversight, Abt and its partners MEF Associates, the Urban Institute and Insight Policy Research are responsible for conducting the HPOG 2.0 National Evaluation. This team has drafted an Impact Evaluation Design Plan with considerable detail on planned analytic procedures. It will be published in 2019. Prior to analyses, an even more detailed analysis plan will be prepared and published.
The individuals listed in Exhibit B-3 below contributed to this information collection request.
Exhibit B-3: Contributors
Name | Role in HPOG 2.0 National and Tribal Evaluation | Organization/Affiliation
Gretchen Locke | National Evaluation Project Director | Abt Associates
Jacob Klerman | National Evaluation Co-Principal Investigator | Abt Associates
Bob Konrad | National Evaluation Co-Principal Investigator | Abt Associates
Robin Koralek | National Evaluation Deputy Project Director | Abt Associates
Larry Buron | National Evaluation Project Quality Advisor | Abt Associates
David Judkins | National Evaluation Director of Impact Analysis | Abt Associates
Debi McInnis | National Evaluation Site Coordinator | Abt Associates
Inquiries regarding the statistical aspects of the HPOG 2.0 National Evaluation design should be directed to:
Gretchen Locke, Project Director
Abt Associates
10 Fawcett Street, Suite 5
Cambridge, MA 02138
(617) 349-2373
The HHS federal project officers Hilary Bruck, Nicole Constance, and Amelia Popham have overseen the design process and can be contacted at:
Hilary Bruck
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
330 C Street S.W., 4th Floor, Washington, D.C. 20201
(202) 619-1790
Nicole Constance
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
330 C Street S.W., 4th Floor, Washington, D.C. 20201
(202) 401-7260
Amelia Popham
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
330 C Street S.W., 4th Floor, Washington, D.C. 20201
(202) 401-5322
1 Two of the instruments for the descriptive study associated with the second round of telephone interviews (Instrument 13 and 14) and the program cost survey (Instrument 20) for the cost-benefit analysis study do not require any sampling.
2 Systems study data collection is complete, but analysis is still underway.
3 Where there were insufficient participants who had dropped out of the training program within the previous six months, we extended the window to 12 months since dropping out of the program.
4 We expected to complete interviews with 10 of the 15 participants selected – a 67 percent response rate. While we had expected to be able to adjust the recruitment strategy to account for differences in response rate by site, we found that the response rate varied considerably from site to site – some sites completed only 8 of the 15 scheduled interviews, while others completed 13 or 14. As a result, it was difficult to predict whether we would reach the full sample of 140 interviews until the last 3 site visits. We decided to complete interviews at these last site visits with all respondents who showed up. These sites had relatively high completion rates, so we ultimately completed 153 interviews, an average response rate of 72 percent across all sites.
5 The evaluation team will prepare a short methods report on the pilot assessment study that might be published as a white paper or serve as the basis for a journal paper—explaining the process followed to develop the short skills pilot and incorporate it into the Intermediate Follow-up Survey. The results will not be analyzed as part of the impact study findings.
6 The draft Intermediate Follow-up Survey includes all of the items from the pilot assessment, to ensure that we had OMB approval for each item. Based on the findings from the pilot, we will drop the questions that do not prove successful in the pilot.
7 In the event that fewer than 300 volunteers respond to the initial assessment pilot outreach effort, the team will reach out to grantees to identify additional volunteers. Since the sample is based on volunteers, we do not expect a second recruitment effort will be necessary.
8 The Office of Child Support Enforcement (OCSE), which maintains the NDNH data, typically requires written consent to allow the evaluation contractor to match study participant identifiers to the NDNH data. OCSE has agreed to amend the study’s Memorandum of Understanding to temporarily allow the collection of NDNH data with verbal consent.
9 If there are any papers published based on the pilot, they would only concern the psychometric properties of the assessment.
10 Four HPOG 2.0 grantees provided the evaluation team with feedback on an earlier draft of the instrument. The team introduced the CBA and the program cost survey at the annual meeting for HPOG grantees in August 2018.
11 The only exceptions will be those who were confirmed deceased or asked to withdraw from future data collection efforts.
12 Please refer to the updated Information Quality Act guidelines (https://www.whitehouse.gov/wp-content/uploads/2019/04/M-19-15.pdf) and previous IQA guidance (https://www.federalregister.gov/documents/2002/02/22/R2-59/guidelines-for-ensuring-and-maximizing-the-quality-objectivity-utility-and-integrity-of-information).