Part B: Statistical Methods for the Collection of Follow-up Survey Data: Pregnancy Assistance Fund Study
April 2015 (Revised July 2015)
Contents
Part B Introduction
B1. Respondent Universe and Sampling Methods
B2. Procedures for Collection of Information
B3. Methods to Maximize Response Rates and Deal with Non-Response
B4. Test of Procedures or Methods to be Undertaken
TABLES
Table B1.1. Minimum Detectable Impacts for California
ATTACHMENTS
ATTACHMENT A: OVERVIEW OF THE PAF EVALUATION
ATTACHMENT B: QUESTION BY QUESTION SOURCE LIST FOR THE FOLLOW-UP SURVEY
ATTACHMENT C: SOURCES REFERENCED FOR THE FOLLOW-UP SURVEY
ATTACHMENT D: PERSONS CONSULTED ON INSTRUMENT DEVELOPMENT AND/OR ANALYSIS OF THE PAF FOLLOW-UP SURVEY
ATTACHMENT E: CONFIDENTIALITY PLEDGE
ATTACHMENT F: ANALYSIS PLAN
ATTACHMENT G: PRETEST MEMO
ATTACHMENT H: PAF 12-MONTH FOLLOW UP 60-DAY FRN
INSTRUMENTS
Instrument 1: PAF 12-MONTH FOLLOW-UP SURVEY: CALIFORNIA
Instrument 2: PAF 12-MONTH FOLLOW-UP SURVEY: TEXAS
PART B INTRODUCTION
In March 2010, Congress authorized the Pregnancy Assistance Fund Competitive Grants Program as part of the Patient Protection and Affordable Care Act (ACA). The grants program is a key element of the federal strategy to support youth and young adults who are expecting or raising a child. Administered by the Office of Adolescent Health (OAH), the grants program funded a second cohort of 17 grantees—states, tribes, and tribal entities—in summer 2013 to develop and implement programs focused on an array of outcomes, including increasing access to and completion of secondary and postsecondary education, improving child and maternal health, reducing the likelihood of repeat teen pregnancies, increasing parenting and co-parenting skills, decreasing intimate partner violence, and raising awareness of available resources. To promote positive outcomes, grantees may implement a wide variety of services for expectant and parenting youth, women, fathers, and their families. OAH's continued investment in programs for expectant and parenting youth has led to its request for a rigorous impact and implementation study of such programs, and it has contracted with Mathematica Policy Research to conduct the Pregnancy Assistance Fund (PAF) Study.
Preliminary PAF Study efforts, including study design and instrument development, are being conducted through a Feasibility and Design Study (FADS). The purpose of the FADS is to design rigorous impact evaluations in three sites that serve pregnant and parenting youth (including Pregnancy Assistance Fund grantees), develop data collection materials for all aspects of an evaluation, and conduct telephone interviews with grantees about their program design decisions and early implementation experiences. Information collected through the FADS will also provide funding agencies with information to inform the structure and components of programs for expectant and parenting youth and their families, and will lay the groundwork for the five-year PAF Study.
The objective of the FADS is to establish a foundation for the PAF Study's rigorous impact and implementation evaluation. Specifically, FADS will: (1) assess design options for the implementation and impact evaluation, (2) document how programs are operationalized in the field, (3) identify and enter into agreements with three sites for the evaluation, (4) provide assistance to sites to support a rigorous evaluation framework, (5) develop all evaluation instruments and obtain clearance, and (6) pilot baseline data collection. Attachment A provides an overview of the components of the PAF Study that the FADS work supports, including a description of the three sites: experimental design studies in California and Texas and a quasi-experimental study relying on extant administrative records in Washington, DC.
Previous Information Clearance Request Approved by OMB. OMB has previously approved one ICR related to this evaluation (ICR #201406-0990-001):
August 30, 2014 – OMB approved the instruments associated with two data collection efforts: (1) telephone interviews with all 17 current Pregnancy Assistance Fund grantees; and (2) collection of baseline data for the impact study in two sites through a baseline survey (OMB Control # 0990-0424).
Current Information Clearance Request. In this submission, OAH is requesting a revision to the existing approval to add the 12-month follow-up survey instruments to be used in the two impact sites: (1) Pregnancy Assistance Fund 12-Month Follow-Up Survey – California (Instrument 1), and (2) Pregnancy Assistance Fund 12-Month Follow-Up Survey – Texas (Instrument 2). Both surveys closely resemble the approved baseline survey, and they are nearly identical to each other, differing only in minor ways that reflect differences between the interventions. The California survey contains additional items to measure changes in youth resiliency, a primary focus of the program in California. The Texas survey does not contain such resiliency items but does contain items measuring parenting and relationship skills, a focus of the program in Texas.
B1. Respondent Universe and Sampling Methods
There are three sites participating in the PAF Study. Two of these sites (California and Texas) will use an experimental design and primary data collection through surveys of youth, including the 12-month follow-up survey that is the focus of this ICR; one of these two sites is a current Pregnancy Assistance Fund grantee. The third site, in Washington, DC, will use a quasi-experimental design and rely on administrative data provided through data use agreements with three local public agencies. Youth will not be surveyed in Washington, DC. The sites are not meant to be representative of all programs for expectant and parenting youth. Site selection has focused on programs that (1) are large enough to support an impact study, (2) are implementing programs in a way that is amenable to random assignment or a quasi-experimental design, and (3) address priority gaps in the existing research literature on evidence-based approaches to assist pregnant and parenting youth. The three study sites are described in detail in Attachment A, Overview of the PAF Evaluation. The sample size and statistical power for each site are described below. Because the sites will be analyzed separately, statistical power analyses are reported separately.
California. The evaluation will involve 12 program providers across the state. Within two of the larger providers, approximately 750 expectant or parenting females will be randomly assigned as individuals to either AFLP (the Adolescent Family Life Program, the business-as-usual condition) or AFLP-PYD (the enhanced treatment condition, which incorporates a positive youth development approach). Across the remaining 10 providers, we will randomly assign clusters to either AFLP or AFLP-PYD. A cluster may be an entire provider (for example, among the smallest providers) or specific geographic locations served by larger providers. We expect to randomize a total of 14 clusters and to enroll approximately another 800 expectant and parenting females across them. Sample enrollment will occur over an 18-month period.
Youth will be surveyed three times – at the time of study enrollment (baseline survey, previously approved under OMB Control # 0990-0424), 12 months later, and 24 months later.1 The primary mode of survey completion for the 12-month follow-up survey, the focus of this ICR, will be a web survey. Nonrespondents to the web survey will be given an opportunity to complete the survey using computer-assisted telephone interviewing (CATI).
An overall impact will be calculated as a weighted average of the impacts from the two designs. We will use inverse-variance weights in our benchmark analysis and sample-size weights in a sensitivity analysis. At the time of the 12-month follow-up, we expect to retain 90 percent of the sample. For a prevalence rate of 25 percent (such as a subsequent pregnancy during the follow-up period), we can detect a 6.7 percentage point difference between the two groups; for a prevalence rate of 50 percent (such as receiving a diploma during the follow-up period), we can detect an 8.3 percentage point difference. Examining impacts for particular subgroups (such as youth who were expecting versus parenting at program enrollment, or whose primary language is English versus Spanish) will be considered exploratory, as the study is not sufficiently powered to detect impacts in those samples. Given the risk profile of the population, the findings from this study will have policy relevance for the field even without subgroup analysis.
Table B1.1 reports minimum detectable impacts on two illustrative outcomes—one with 50 percent prevalence and one with 25 percent prevalence. Separate estimates are presented for the two components of the evaluation (individual randomization and cluster randomization) as well as for the overall study (in which the overall impact is calculated as a weighted average of the impacts from the two study components).
Table B1.1. Minimum Detectable Impacts for California

Percentage Point Impacts for Illustrative Binary Outcomes

Study Component | 50 percent prevalence rate | 25 percent prevalence rate
Individual Randomization (2 sites; 675 youth) | 9.6 | 7.8
Cluster Randomization (14 sites; 720 youth) | 17.7 | 14.0
Full Study | 8.3 | 6.7
Notes: Sample sizes account for survey nonresponse. Figures assume that the sample is evenly divided between the program and control groups and that covariates explain 20 percent of the variance at the individual level and 40 percent at the cluster level. We assume an ICC of 0.06. The figures also assume a two-tailed t-test with 80 percent power and a 95 percent confidence interval.
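To show how the figures in Table B1.1 arise from the stated assumptions, the sketch below applies the standard minimum detectable impact formula, MDI = (z for the two-tailed significance level plus z for the power level) times the standard error of the impact estimate. This is an illustrative reconstruction, not the study's actual computation; it uses a normal approximation rather than the small-sample t-distribution adjustments implied by the table notes.

```python
# Approximate reconstruction of the minimum detectable impacts (MDIs) in
# Table B1.1 under the assumptions stated in the notes: an even split
# between program and control groups, individual-level R^2 = 0.20,
# cluster-level R^2 = 0.40, ICC = 0.06, a two-tailed test at the
# 5 percent level, and 80 percent power. Normal approximation only.
from scipy.stats import norm

M = norm.ppf(0.975) + norm.ppf(0.80)  # multiplier, approx. 1.96 + 0.84 = 2.80

def var_individual(p, n, r2_ind=0.20):
    """Variance of the impact estimate under individual random assignment."""
    return p * (1 - p) * (1 - r2_ind) * (2.0 / (n / 2.0))

def var_cluster(p, n, j, icc=0.06, r2_ind=0.20, r2_clu=0.40):
    """Variance under cluster random assignment with j clusters in total."""
    between = icc * (1 - r2_clu) * (2.0 / (j / 2.0))
    within = (1 - icc) * (1 - r2_ind) * (2.0 / (n / 2.0))
    return p * (1 - p) * (between + within)

for p in (0.50, 0.25):
    v_ind = var_individual(p, n=675)             # 2 sites; 675 youth
    v_clu = var_cluster(p, n=720, j=14)          # 14 clusters; 720 youth
    v_full = 1.0 / (1.0 / v_ind + 1.0 / v_clu)   # inverse-variance pooling
    print(f"{p:.0%}: individual {100 * M * v_ind ** 0.5:.1f}, "
          f"cluster {100 * M * v_clu ** 0.5:.1f}, "
          f"full {100 * M * v_full ** 0.5:.1f}")
```

Run as written, the sketch matches the 50 percent individual-randomization figure exactly and the other cells to within about a percentage point; the residual gaps reflect details, such as exact degrees of freedom, that the table notes do not specify.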
Texas. We expect to enroll and randomize approximately 575 young mothers over a 24- to 30-month period. Youth will be surveyed three times – at the time of study enrollment (baseline survey, previously approved under OMB Control # 0990-0424), 12 months later, and 24 months later.2 The primary mode of survey completion for the 12-month follow-up survey, the focus of this ICR, will be a web survey. Nonrespondents to the web survey will be given an opportunity to complete the survey using CATI.
At the time of the 12-month follow-up, we expect to retain 90 percent of the sample, or 518 youth. For a prevalence rate of 25 percent (such as a subsequent pregnancy during the follow-up period), we can detect a 9 percentage point difference between the two groups; for a prevalence rate of 50 percent (such as receiving a diploma during the follow-up period), we can detect an 11 percentage point difference. Given the small sample size, we do not anticipate conducting any subgroup analyses.
B2. Procedures for Collection of Information
In each of the two sites selected for the experimental impact study (California and Texas), all eligible youth will be considered for enrollment in the study (discussed in Section B.1). Each site will be responsible for providing the evaluation team with a list of eligible youth. The evaluation team will then work collaboratively with each site to identify youth for the study and obtain consent.
Mathematica will thoroughly train staff to ensure they can properly inform study participants. In California, study intake will be performed by program staff trained in person on data collection procedures by Mathematica. In Texas, study intake will be performed by professional data collectors working for a subcontractor to Mathematica (Decision Information Resources) and trained by Mathematica. We will create a brief study description to ensure that accurate and consistent information is available, and we will train staff on explaining the study, reviewing the study description, answering questions about the study, and administering consent and the baseline survey. This process and the consent forms were approved by OMB on August 30, 2014 (OMB Control # 0990-0424).
The follow-up survey will be administered to all consented sample members 12 months after study enrollment and completion of the baseline survey. The data collection plan for the follow-up survey is the same across the two sites (California and Texas) and reflects sensitivity to issues of efficiency, accuracy, and respondent burden. As discussed in Part A of this ICR, we will offer two modes for completing the follow-up survey: a smartphone-compatible web survey and CATI. We will prompt sample members with email and text messages containing links to the web survey.
Respondents opting to complete the survey over the web will be provided a unique PIN and password to access the survey from either type of device. We will advise respondents to complete the survey in a private location. We will also provide them with a toll-free number to call should they prefer to complete the survey by telephone or have any issues with the web survey. The web survey will also include a link to email the project team with questions or issues.
For those who do not call in or complete the web survey, we will make outbound calls from Mathematica's Survey Operations Center (SOC). When a respondent is reached, an SOC telephone interviewer will use CATI to complete the survey. If a respondent is not reached, the interviewer will leave a message whenever possible, providing a toll-free number the respondent can use to call in and complete the CATI survey. When administering the survey through CATI, the interviewer will direct the respondent to a secure, private place to answer the survey questions.
Instruments 1 and 2 contain the 12-month survey for each site – California and Texas, respectively. Both surveys closely resemble the approved baseline survey, and they are nearly identical to each other, differing only in minor ways that reflect differences between the interventions. The California survey contains additional items to measure changes in youth resiliency, a primary focus of the program in California. The Texas survey does not contain such resiliency items but does contain items measuring parenting and relationship skills, a focus of the program in Texas. A question-by-question list of sources for the follow-up survey is found in Attachment B, and a description of the sources referenced is found in Attachment C.
B3. Methods to Maximize Response Rates and Deal with Non-Response
OAH expects to achieve a response rate of 90 percent for the 12-month follow-up survey. We expect to achieve this completion rate for several reasons. First, the follow-up survey will be administered 12 months after study enrollment and the baseline survey; this timing will ensure that contact data are still current, which should minimize problems locating sample members.
Because sample enrollment occurs on a rolling basis, group-based administration of the 12-month follow-up survey is not possible. Therefore, an advance letter will be sent to sample members, notifying them of the data collection and providing them with the information necessary to complete the survey over the web. Additional telephone, email, and text prompts to youth and parents will be conducted as needed.
In addition, sites have willingly offered assistance with maximizing the response rate. We will invest significant effort in gaining respondents' cooperation from the beginning of the study and in minimizing burden on sites during the study enrollment and baseline survey period. If necessary, at the time of the 12-month follow-up, sites will assist by providing addresses of youth who have been most difficult to locate. Because identical methods will be used to maximize the response rates of the treatment and control groups, the evaluation team does not anticipate differences in response rates across research groups.
Additionally, $25 gift cards will be provided to respondents to encourage participation in the survey. This amount is consistent with other evaluations in which respondents complete surveys by telephone, such as the Personal Responsibility Education Program (PREP) Multi-Component Evaluation, and with the amount listed on the PAF consent forms approved by OMB on August 30, 2014 (OMB Control # 0990-0424).
As discussed above, the evaluation team anticipates high response rates to follow-up surveys. Even so, the team will take steps to understand the nature of any nonresponse and to account for the threat it may pose to the validity of the study's impact estimates. Using data from the baseline survey, evaluation team members will first test for statistically significant differences across demographic and baseline outcome variables between respondents and nonrespondents. Any such differences will be controlled for in the analyses by using nonresponse weights. The team will also test for differences between the research groups in their baseline characteristics and control for these differences using covariates when estimating program impacts (see Attachment F).
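To illustrate the weighting step, the sketch below fits a logistic regression of follow-up response status on baseline characteristics and constructs inverse-probability nonresponse weights. It is a minimal sketch under assumed inputs, not the study's production code: the file name, the covariates (age, parenting status, treatment status), and the response indicator are hypothetical placeholders for the study's baseline measures.

```python
# Minimal sketch of nonresponse weighting (hypothetical variable and file
# names): model each sample member's probability of responding to the
# 12-month survey as a function of baseline characteristics, then weight
# respondents by the inverse of that estimated probability.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("baseline_and_response_status.csv")  # hypothetical file

# Logistic regression of response status on baseline covariates.
X = sm.add_constant(df[["age", "parenting", "treatment"]])
model = sm.Logit(df["responded"], X).fit()

# Predicted response propensities for every sample member.
phat = pd.Series(model.predict(X), index=df.index)

# Inverse-probability weights for respondents, normalized so the
# weighted count equals the unweighted respondent count.
respondents = df[df["responded"] == 1].copy()
w = 1.0 / phat[respondents.index]
respondents["nr_weight"] = w * len(respondents) / w.sum()
```

In practice, such weights would be applied in the impact regressions alongside the baseline covariates described in Attachment F, and extreme weights would typically be inspected or trimmed.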
B4. Test of Procedures or Methods to be Undertaken
OAH and other offices within HHS (OPRE, ASPE) have made it a priority to align measures in the baseline and follow-up surveys across evaluations of similar programs and populations. As discussed in Part A of this information collection request, many of the items included in the 12-month PAF follow-up survey are taken from the approved baseline survey and from similar surveys OMB has already approved for use in the ongoing Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), the Teen Pregnancy Prevention Replication Study, and the Personal Responsibility Education Program (PREP) Multi-Component Evaluation.3 To date, 10,183 PPA follow-up surveys have been administered, 2,061 of them to expectant and parenting young women; the Replication Study's first follow-up survey has been administered to 2,014 adolescents; and the PREP follow-up survey has been administered to 1,483 youth, including 155 expectant and parenting young women.
The California follow-up survey was pretested with a sample of six youth participating in a program for parenting teens in California. Five youth pretested the Texas version of the instrument. The pretest resulted in a revision of our burden estimate from 30 minutes to 35 minutes and in minor wording changes. The pretest respondents had little trouble completing the instruments and following the directions. Attachment G includes a copy of the pretest memo, which details the pretest procedures and summarizes adjustments made to the follow-up survey as a result of the pretest.
1 The current ICR only pertains to the 12-month follow-up survey.
2 The current ICR only pertains to the 12-month follow-up survey.
3 ACF received initial OMB approval for the PPA baseline survey on July 26, 2010 (OMB Control Number 0970-0360). In summer 2011, oversight of PPA was transferred to the Office of Adolescent Health (OAH) within the Office of the Assistant Secretary, and the project is now tracked with a different OMB Control Number (0990-0382). The OMB Control Number for the Teen Pregnancy Prevention Replication Study is 0990-0394. OMB approval for the PREP follow-up survey was received on May 8, 2013 (OMB Control Number 0970-0398).